I agree that we can implement Cloud without VMs. If you are familiar with SalesForce.com (a SaaS example of Cloud Computing), then you may be familiar with Force.com (by the same company), which I consider a PaaS or BPaaS (Business Platform-as-a-Service) because it is a platform for creating applications. I wonder if anyone would agree that the VM is the obsolete version of cloud: there is so much power available directly from shared service platforms that virtual sharing will soon be a thing of the past.
Yours inquisitively,
Gabor
--
~~~~~
Register Today for Cloud Slam 2010 at official website - http://cloudslam10.com
Posting guidelines: http://groups.google.ca/group/cloud-computing/web/frequently-asked-questions
Follow us on Twitter http://twitter.com/cloudcomp_group or @cloudcomp_group
Post Job/Resume at http://cloudjobs.net
Buy 88 conference sessions and panels on cloud computing on DVD at
http://www.amazon.com/gp/product/B002H07SEC,
http://www.amazon.com/gp/product/B002H0IW1U
or get instant access to downloadable versions at http://cloudslam09.com/content/registration-5.html
~~~~~
You received this message because you are subscribed to the Google Groups
"Cloud Computing" group.
To post to this group, send email to cloud-c...@googlegroups.com
To unsubscribe from this group, send email to
cloud-computi...@googlegroups.com
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of smith jack
Sent: Sunday, December 27, 2009 6:39 PM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] what role did virtual machine play in Cloud Computing?
I think this is a quite open question. We can implement Cloud Computing without VMs indeed, and I am sure a lot of cloud systems are implemented without VMs. What, then, is the advantage of VMs in Cloud Computing? (Easy deployment?) Any reply is appreciated.
Hi Jan,
Keeping in mind that I'm on the business, not technical, side of Cloud Computing, I'm saying that virtualization is the lowest common denominator: a way to split a machine to serve multiple very different purposes, rather than many users or organizations using the same server for a similar purpose (e.g. CRM) in a secure way, without unauthorized access, as SalesForce does.
You are probably right that server virtualization won't go anywhere "soon". But in the long term, isn't it possible that a majority of organizations won't see any benefit to having their own servers (remember Scott McNealy's Network Computer concept?), and that eventually, as PaaS reaches its tipping point and all custom applications can be developed in the cloud, private data centers will only be cost-effective for the very largest organizations, those that need at least one server per major application?
Scott McNealy had the right concept, but the platforms and applications weren't there to support it. Now that there are platforms for building apps in the cloud, like Force.com, CloudHarbor.com, Cordys Process Factory, MS Azure, and many more, organizations can integrate, customize and mash up almost anything to create powerful apps without a single server on their books.
Thoughts?
Gabor
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Jan Klincewicz
Sent: Monday, December 28, 2009 2:57 PM
Subject: Re: [ Cloud Computing ] what role did virtual machine play in Cloud Computing?
@Gabor: Do you think SalesForce.com provides a PHYSICAL machine for every instance they offer customers? Or do you mean end customers may give up running their OWN VMs (and calling them "Private Clouds")?
In any event, I don't think server virtualization is going anywhere soon ...
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Rao Dronamraju
Sent: Tuesday, December 29, 2009 9:39 AM
To: cloud-c...@googlegroups.com
Subject: RE: [ Cloud Computing ] what role did virtual machine play in Cloud Computing?
“Keeping in mind I’m on the business not technical side of Cloud Computing, I’m saying that virtualization is the lowest common denominator i.e. a way to split a machine to serve multiple very different purposes rather than many users or organizations using the same server for a similar purpose e.g. CRM, but in a secure way without unauthorized access like SalesForce.”
If you are from the business side, why bother about virtualization at all? From the business side, your primary focus would be CORE, i.e. CapEx, OpEx, Revenues and Earnings, in addition to Security and Risk to business assets. It should not matter how CC is accomplished, with virtualization or bare-metal provisioning. Virtualization is not necessarily a way to split a machine to serve multiple very different purposes. If you have multiple tenants who all use CRM, then all of them can be hosted on the same physical hardware in multiple CRM VMs, isolated and separated for security reasons. In specialty clouds, i.e. clouds specializing in vertical markets, many clients will be running their similar applications in different virtual machines.
“You are probably right that server virtualization won’t go anywhere “soon”, but in the long-term, isn’t it possible that a majority of organizations won’t see any benefit to having their own servers (remember Scott McNeely’s Network Computer concept?) and eventually as PaaS reaches its tipping point and all custom applications can be developed in the cloud, private data centers will only be cost-effective for the very largest of organizations that need at least one server per major application? “
Without taking credit away from Scott McNealy, his concept/vision was nothing new. The Telecom Cloud has been around for 50+ years, and so have the utility clouds. Why would private data centers host one server per major application? Isn't server consolidation in the enterprise all about NOT having one server per major application? In fact, a private cloud is nothing but virtualization/server consolidation, plus live migration of virtual machines for efficient real-time resource utilization and optimization, plus chargeback models.
“Scott McNeely had the right concept, but the platforms and applications weren’t there to support it. Now that there are platforms for building apps in the cloud like Force.com, CloudHarbor.com, Cordys Process Factory, MS Azure, and many more, organizations can integrate, customize and mash-up almost anything to create powerful apps without a single server on their books.”
The most fundamental problem/roadblock with (public) CC today is Security, Security, Security. That is why, in the near future, as forecasted by Gartner and others, it is going to be private clouds. This does not mean hybrid and public clouds will not have a market; non-critical applications & SMBs will be their primary markets. You may want to read the following excellent article published recently by MIT Technology Review.
http://www.technologyreview.com/web/24166/
Thanks for all the comments. Perhaps I should stick to CORE, but I’m on the business side of IT and this is such an interesting topic.
Great article Rao, thanks! I thought we were farther along in Security, with all the financial transactions conducted daily, but the article clearly points out the new threats related to the Cloud.
Jeanne, thanks for your insight too. Great discussion =)
@Rao Yes, lots of things “can” be done, such as “If you have multiple tenants who all use CRM, then all of them can be hosted on the same physical hardware in multiple CRM VMs, isolated and separated for security reasons.” But are we truly maximizing the efficiency of that equipment? Isn’t it more efficient to use the same code with configurations and customizations for each user or organization sitting on the same instance separated for security purposes through identity and organization management as opposed to having a completely separate virtual machine? Doesn’t each VM need its own OS, etc?
Best regards,
Gabor
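Gabor's shared-instance question is essentially the multi-tenant SaaS model. A minimal sketch of the idea, with hypothetical names and data, might look like this: every query is scoped by an organization ID, so tenants share one code base and one OS instead of each carrying a full VM:

```python
# Hypothetical multi-tenant sketch: tenants share one application
# instance, and isolation comes from scoping every data access to the
# caller's organization. All names and records here are illustrative.

RECORDS = [
    {"org_id": "acme",   "contact": "Alice"},
    {"org_id": "acme",   "contact": "Bob"},
    {"org_id": "globex", "contact": "Carol"},
]

def query_contacts(session_org_id):
    """Return only the rows belonging to the authenticated tenant.
    Identity/organization management supplies session_org_id; the
    application never runs a query without this filter."""
    return [r["contact"] for r in RECORDS if r["org_id"] == session_org_id]

print(query_contacts("acme"))    # ['Alice', 'Bob']
print(query_contacts("globex"))  # ['Carol']
```

The contrast with the per-tenant-VM model Rao describes is that here there is no guest OS per tenant to patch, at the price of trusting an application-level filter instead of hypervisor isolation.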
Some great points as usual, Jan.
>The simple fact is, that given the way things are right now, the
>easiest and most efficient way to maximize hardware resources for
>MOST common-off-the-shelf applications is to run them on VMs using
>commonly known hypervisors. It is proven, can be cheap (or free)
>and there is a huge body of moderately-trained people who know how
>to do it. Commodity solutions are often the best choice given all
>the parameters, despite there being more optimal solutions
>"theoretically" available.
Using virtualization is a *lot* easier than figuring out how the industry can truly take advantage of multi-core. It's just a stepping stone, IMHO, until we all figure out how to program all these cores well.
I've asked very well-known computer scientists about this problem over the past few years, and the consensus is that programming multicore hardware effectively is a very, very hard problem (to solve generically). Virtualization is just a simpler way to use the cores, since we haven't gotten that problem licked yet. Probably a remnant of the single-user/single-app mentality that the PC bequeathed to us a while ago.
IMO, if you have mediocre software, you don't multiply it by N (cores) to try to improve it...
Btw, there's a reason that Google doesn't use virtualization to solve search.
Frank G.
- Happy New Decade everyone! [gotta be better than the last one...]
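Frank's contrast can be made concrete: work that splits into independent chunks parallelizes trivially across cores, and running separate VMs is just the extreme case of that. A small Python sketch (sizes and worker count are arbitrary):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Chunks are independent: no shared mutable state, no locks.
    # This is the easy case of multicore programming.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(100_000))
    chunks = [data[i::4] for i in range(4)]  # four independent slices
    with Pool(4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    assert total == sum(data)
    # The hard problem Frank describes is the opposite case: tasks that
    # share mutable state, where locking, cache contention, and memory
    # bandwidth keep speedups from scaling with the number of cores.
```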
“Perhaps I should stick to CORE”
I wasn't suggesting that you should stick to the CORE principles. Your statement about virtualization came across as a bit negative on the value of virtualization to CC, so I suggested that you focus on the business aspects of CC. As far as I am concerned, it doesn't matter what energy source (petrol, ethanol, battery) I use in my car as long as it gets me 100+ mpg and does not screw up the environment.
“I thought we were farther along in Security, with all the financial transactions conducted daily, but the article clearly points out the new threats related to the Cloud.”
Yes and no. IMO, security in CC is 50% FUD and 50% real threats. CIOs/businesses have to make a major leap in their psyche with respect to TRUST in the cloud. In addition, there might be some new, larger threat issues with respect to clouds being a single point of concentration of business/wealth due to multi-tenancy: an attacker (more likely a rogue organization or a country) can focus on this one entity to cause major damage. It is the cumulative, aggregated threat of multi-tenancy on a large scale.
“But are we truly maximizing the efficiency of that equipment? Isn’t it more efficient to use the same code with configurations and customizations for each user or organization sitting on the same instance separated for security purposes through identity and organization management as opposed to having a completely separate virtual machine? Doesn’t each VM need its own OS, etc?”
It is a lot more insecure/risky if you use the same code to host multiple tenants. In separate virtual machines, you are isolated and more secure than when sharing the same code with others. Yes, security and performance/efficiency are inversely proportional. By automating security, you can make it more efficient.
Here is an interesting article on how the Xen hypervisor was breached through simple DMA and a backdoor was opened to get control. The author has done a pretty creative hack.
http://invisiblethingslab.com/bh08/papers/part1-subverting_xen.pdf
This might have since been fixed by Intel's VT-d technology?
Second, much of security is about sandboxes. A VM is a pretty good sandbox, but it is quite expensive with regard to wasted memory, wasted cycles, and hypervisor overhead. Non-VM sandboxes can be just as good, or better, and more efficient and easier to administer. The cost, as it were, is forgoing the ability to introduce machine-level code. A VM, on the other hand, has an operating system of its own that must be maintained like any other operating system. If you've ever run a server of any ilk, you will understand the number of security patches issued per month, and should understand that an unmaintained OS, on a hard server or in a cloud, is a very insecure beast indeed.
The computing world has not yet agreed on an application platform sufficiently good to standardize on. Sooner or later, it will. At that point (or maybe earlier), VMs will in all likelihood be regarded as hopelessly inefficient, insecure anachronisms.
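Jim's memory argument is easy to put rough numbers on. The figures below are made-up round values for illustration, not measurements:

```python
# Back-of-the-envelope comparison: N tenants in separate VMs (each with
# its own guest OS) versus N tenants on one shared instance.
# All figures are illustrative assumptions.

GUEST_OS_MEM_MB = 512   # assumed footprint of one guest OS
APP_MEM_MB = 256        # assumed per-tenant application working set
TENANTS = 100

vm_total = TENANTS * (GUEST_OS_MEM_MB + APP_MEM_MB)    # every VM pays for an OS
shared_total = GUEST_OS_MEM_MB + TENANTS * APP_MEM_MB  # one OS, amortized

print(f"per-tenant VMs:  {vm_total} MB")    # 76800 MB
print(f"shared instance: {shared_total} MB")  # 26112 MB
```

On these assumptions, the VM approach spends roughly three times the memory before counting hypervisor overhead; whether that buys enough extra isolation is exactly the dispute in this thread.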
> "But are we truly maximizing the efficiency of that equipment? Isn't it more efficient to use the same code with configurations and customizations for each user or organization sitting on the same instance, separated for security purposes through identity and organization management, as opposed to having a completely separate virtual machine? Doesn't each VM need its own OS, etc?"
>
> It is a lot more insecure/risky if you use the same code to host multiple tenants. In separate virtual machines, you are isolated and more secure than sharing the same code with others. Yes, security and performance/efficiency are inversely proportional. By automating security, you can make it more efficient.

Inversely proportional? Where did that come from? Do you have a proof?
> @Gabor:
>
> Do you think SalesForce.com provides a PHYSICAL machine for every instance they offer customers? Or do you mean end customers may give up running their OWN VMs (and calling them "Private Clouds")?
>
> In any event, I don't think server virtualization is going anywhere soon ...
> i think this is a quite open question.
>
> we can implement Cloud Computing without VM indeed, and i am sure lot
> of Cloud Systems are implemented without VMs,
>
> then what is the advantage of VMs in Cloud Computing?
>
> (easy deployment?)
>
> any reply is appreciated
--
Jim Starkey
Founder, NimbusDB, Inc.
978 526-1376
"A couple things. First, security with regard to vendor/client relationships is not about trust, it's about consequences."
Jim, another way to spell SECURITY is TRUST! Ask any security expert; they will tell you.
It does not matter whether it is a client/vendor relationship or not. If you have enough TRUST in the SECURITY that a cloud provides in terms of facilities, HW, SW, people and processes (which includes SLAs), then the cloud is TRUSTWORTHY. Consequences are related to RISK, which is a major aspect of SECURITY/TRUST.
"I haven't made a study of SLAs, but the ones I have read shouldn't make an enterprise user very happy."
This is a different issue. It all depends on what you move to the cloud. Eli Lilly ran their cluster in the cloud for $6.40, because they did not need a whole lot of security for their application.
That is why I said non-critical (from a security perspective) workloads will move to public clouds first. Do you need a lot of SLAs for moving enterprise (content) websites to clouds to begin with? How many websites have been created in the last 15+ years? Hundreds of thousands, if not millions! Most test and dev groups of enterprises can move to clouds without the consequences, as you put it. Clouds can host DR & BC sites for many enterprises.
"Second, much of security is about sandboxes."
Poppycock! Much of security is about segregation, isolation, confidentiality, integrity, authentication, authorization, auditing, non-repudiation, attestation, visibility, control, dynamic mutation, artificial diversification, etc. All of these fall under TRUST.
"A VM is a pretty good sandbox, but is quite expensive with regard to wasted memory, wasted cycles, and hypervisor overhead. Non-VM sandboxes can be just as good -- or better -- and more efficient and easier to administer."
Yeah, try live migration of non-VM sandboxes. Why do you think Solaris containers are not migratable? In a cloud, live migration is a very efficient way to utilize resources. Maybe you can migrate your ACID database in a JVM sandbox :-)
"Inversely proportional? Where did that come from? Do you have a proof?"
Sure. Have you ever travelled through an airport? The more security checks you go through, the longer your travel time and the lower your travelling efficiency.
Maybe I should suggest a database security technique: try encrypting your database data, then try without encryption, and see which is more efficient to process.
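Rao's suggestion can be demonstrated in miniature. The sketch below uses a toy XOR cipher purely as a stand-in for real encryption (a real system would use a vetted library such as `cryptography`); the point is only that scanning data through a decrypt step costs measurably more than scanning it in the clear:

```python
import timeit

KEY = 0x5A
data = b"customer record " * 2_000  # toy "database", about 32 KB

def xor_cipher(buf):
    # Toy stand-in for encryption/decryption; NOT real cryptography.
    return bytes(b ^ KEY for b in buf)

encrypted = xor_cipher(data)

plain = timeit.timeit(lambda: data.count(b"customer"), number=50)
crypt = timeit.timeit(lambda: xor_cipher(encrypted).count(b"customer"),
                      number=50)

print(f"plaintext scan:    {plain:.4f}s")
print(f"decrypt then scan: {crypt:.4f}s")  # typically far slower
```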
-----Original Message-----
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of Jim Starkey
Sent: Wednesday, December 30, 2009 12:21 PM
To: cloud-c...@googlegroups.com
Subject: Re: [ Cloud Computing ] what role did virtual machine play in Cloud
Computing?
Rao Dronamraju wrote:
>
> "Perhaps I should stick to CORE"
>
> I wasn't suggesting that you should stick to the CORE principles. Your
> statement about virtualization came across a bit negative on value of
> virtualization to CC, so I suggested then you should focus on the
> business apects of CC. As far as I am concerned, it doesn't matter
> what gas (petrol, ethanol, battery) I use in my car as long as it gets
> me 100+ mpg and does not screw up the environment.
>
> "I thought we were farther along in Security, with all the financial
> transactions conducted daily, but the article clearly points out the
> new threats related to the Cloud."
>
> "But are we truly maximizing the efficiency of that equipment? Isn't
> it more efficient to use the same code with configurations and
> customizations for each user or organization sitting on the same
> instance separated for security purposes through identity and
> organization management as opposed to having a completely separate
> virtual machine? Doesn't each VM need its own OS, etc?"
>
> It is lot more insecure/risky if you use the same code to host
> multiple tenants. In separate virtual machines, you are isolated and
> more secure than sharing the same code with others. Yes, security and
> performance/efficiency are inversely proportional. By automating
> security, you can make it more efficient.
>
Inversely proportional? Where did that come from? Do you have a proof?
>
> Here is an interesting article on how Xen hypervisor was breached
> through simple DMA and a backdoor was opened to get control. The guy
> has done a pretty creative hack.
>
> http://invisiblethingslab.com/bh08/papers/part1-subverting_xen.pdf
>
> This might have been fixed by Intel's VT-d???....technology.
>
> ------------------------------------------------------------------------
>
> *From:* cloud-c...@googlegroups.com
> [mailto:cloud-c...@googlegroups.com] *On Behalf Of *Gabor Fulop
> *Sent:* Tuesday, December 29, 2009 1:40 PM
> *To:* cloud-c...@googlegroups.com
> *Subject:* RE: [ Cloud Computing ] what role did virtual machine play
> in Cloud Computing?
>
> Thanks for all the comments. Perhaps I should stick to CORE, but I'm
> on the business side of IT and this is such an interesting topic.
>
> Great article Rao, thanks! I thought we were farther along in
> Security, with all the financial transactions conducted daily, but the
> article clearly points out the new threats related to the Cloud.
>
> Jeanne, thanks for your insight too. Great discussion =)
>
> @Rao Yes, lots of things "can" be done, such as "If you have multiple
> tenants who all use CRM, then all of them can be hosted on the same
> physical hardware in multiple CRM VMs, isolated and separated for
> security reasons." But are we truly maximizing the efficiency of that
> equipment? Isn't it more efficient to use the same code with
> configurations and customizations for each user or organization
> sitting on the same instance separated for security purposes through
> identity and organization management as opposed to having a completely
> separate virtual machine? Doesn't each VM need its own OS, etc?
>
> Best regards,
>
> Gabor
>
> *From:* cloud-c...@googlegroups.com
> [mailto:cloud-c...@googlegroups.com] *On Behalf Of *Rao Dronamraju
> *Sent:* Tuesday, December 29, 2009 9:39 AM
> *To:* cloud-c...@googlegroups.com
> *Subject:* RE: [ Cloud Computing ] what role did virtual machine play
> in Cloud Computing?
>
> "Keeping in mind I'm on the business not technical side of Cloud
> Computing, I'm saying that virtualization is the lowest common
> denominator i.e. a way to split a machine to serve multiple very
> different purposes rather than many users or organizations using the
> same server for a similar purpose e.g. CRM, but in a secure way
> without unauthorized access like SalesForce."
>
> If you are from the business side, why bother about virtualization at
> all? If you are from the business side, your primary focus would be
> CORE - CapEx, OpEx, Revenues and Earnings - in addition to Security and
> Risk to business assets. It should not matter how CC is accomplished,
> with virtualization or with bare-metal provisioning. Virtualization is
> not necessarily a way to split a machine to serve multiple very
> different purposes. If you have multiple tenants who all use CRM, then
> all of them can be hosted on the same physical hardware in multiple CRM
> VMs, isolated and separated for security reasons. In specialty clouds,
> clouds specializing in vertical markets, many clients will be running
> their similar applications in different virtual machines.
>
> "You are probably right that server virtualization won't go anywhere
> "soon", but in the long-term, isn't it possible that a majority of
> organizations won't see any benefit to having their own servers
> (remember Scott McNeely's Network Computer concept?) and eventually as
> PaaS reaches its tipping point and all custom applications can be
> developed in the cloud, private data centers will only be
> cost-effective for the very largest of organizations that need at
> least one server per major application? "
>
> Without taking credit away from Scott McNealy, his concept/vision was
> nothing new. The Telecom Cloud has been around for 50+ years, and so
> have the utility clouds. Why would private data centers host one server
> per major application? Isn't server consolidation in the enterprise all
> about NOT having one server per major application? In fact, a private
> cloud is nothing but virtualization/server consolidation + live
> migration of virtual machines for efficient real-time resource
> utilization and optimization + chargeback models.
>
> "Scott McNeely had the right concept, but the platforms and
> applications weren't there to support it. Now that there are platforms
> for building apps in the cloud like Force.com, CloudHarbor.com, Cordys
> Process Factory, MS Azure, and many more, organizations can integrate,
> customize and mash-up almost anything to create powerful apps without
> a single server on their books."
>
> The most fundamental problem/roadblock with (public) CC today is
> Security, Security, Security... that is why, in the near future, as
> forecast by Gartner and others, it is going to be private clouds. This
> does not mean hybrid and public clouds will not have a market.
> Non-critical applications & SMBs will be the primary markets. You may
> want to read the following excellent article published recently by MIT
> Technology Review.
>
> http://www.technologyreview.com/web/24166/
>
> ------------------------------------------------------------------------
>
> *From:* cloud-c...@googlegroups.com
> [mailto:cloud-c...@googlegroups.com] *On Behalf Of *Gabor Fulop
> *Sent:* Tuesday, December 29, 2009 1:48 AM
> *To:* cloud-c...@googlegroups.com
> *Subject:* RE: [ Cloud Computing ] what role did virtual machine play
> in Cloud Computing?
>
> Hi Jan,
>
> Keeping in mind I'm on the business not technical side of Cloud
> Computing, I'm saying that virtualization is the lowest common
> denominator i.e. a way to split a machine to serve multiple very
> different purposes rather than many users or organizations using the
> same server for a similar purpose e.g. CRM, but in a secure way
> without unauthorized access like SalesForce.
>
> You are probably right that server virtualization won't go anywhere
> "soon", but in the long-term, isn't it possible that a majority of
> organizations won't see any benefit to having their own servers
> (remember Scott McNeely's Network Computer concept?) and eventually as
> PaaS reaches its tipping point and all custom applications can be
> developed in the cloud, private data centers will only be
> cost-effective for the very largest of organizations that need at
> least one server per major application?
>
> Scott McNeely had the right concept, but the platforms and
> applications weren't there to support it. Now that there are platforms
--
"Rao Dronamraju" <rao.dro...@sbcglobal.net> Dec 30 11:30AM -0600
"Perhaps I should stick to CORE"
I wasn't suggesting that you should stick to the CORE principles. Your
statement about virtualization came across as a bit negative on the
value of virtualization to CC, so I suggested that you focus on the
business aspects of CC instead. As far as I am concerned, it doesn't
matter what fuel (petrol, ethanol, battery) I use in my car as long as
it gets me 100+ mpg and does not screw up the environment.
"I thought we were farther along in Security, with all the financial
transactions conducted daily, but the article clearly points out the new
threats related to the Cloud."
Yes and no. IMO, security in CC is 50% FUD and 50% real threats. The
CIOs/businesses have to make a major leap in their psyche with respect
to TRUST in the cloud. In addition, there might be some new, larger
threat issues with respect to clouds being a single point of
concentration of business/wealth due to multi-tenancy; an attacker (more
likely a rogue organization or a country) can focus on this one entity
to cause major damage. It is the cumulative, aggregated threat due to
multi-tenancy on a large scale.
"But are we truly maximizing the efficiency of that equipment? Isn't it
more efficient to use the same code with configurations and customizations
for each user or organization sitting on the same instance separated for
security purposes through identity and organization management as opposed to
having a completely separate virtual machine? Doesn't each VM need its own
OS, etc?"
It is a lot more insecure/risky if you use the same code to host
multiple tenants. In separate virtual machines, you are isolated and
more secure than sharing the same code with others. Yes, security and
performance/efficiency are inversely proportional. By automating
security, you can make it more efficient.
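To make the trade-off concrete, here is a minimal sketch of the shared-instance model Gabor asks about: one code base and one data store, with tenants separated by an organization identifier rather than by separate VMs. All names and data below are made up for illustration.

```python
# Minimal sketch of shared-instance multi-tenancy (illustrative names only):
# one code base serves every tenant, and isolation is enforced in software
# by scoping every query to the caller's organization, not by separate VMs.
records = [
    {"org": "acme",   "contact": "Alice"},
    {"org": "acme",   "contact": "Bob"},
    {"org": "globex", "contact": "Carol"},
]

def contacts_for(org_id):
    """Return only the rows belonging to org_id; a bug here leaks data
    across tenants, which is exactly the risk described above."""
    return [r["contact"] for r in records if r["org"] == org_id]

print(contacts_for("acme"))    # ['Alice', 'Bob']
print(contacts_for("globex"))  # ['Carol']
```

The efficiency gain is real (one OS, one runtime for all tenants), but a single filtering bug exposes every tenant; the separate-VM approach moves that isolation boundary down to the hypervisor.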
Not sure if I understand your question correctly, but it shouldn't matter whether the code is a compiled C or Fortran binary. Since the hypervisor runs at a privileged level, it can pretty much do what it wants. As we all know, the fundamental hacking technique most hackers use is escalation of privileges as soon as they get access to a system; by hacking into a hypervisor (most probably this will be done by an internal threat), you do not even need to escalate. You have hacked into one already. So it doesn't matter even if you have a sandbox; the hypervisor has a good enough shovel! :-)
So, yes, it is possible to run unvetted binary apps on a shared server,
but nobody in his right mind would want to.
I wouldn't get hot and bothered about past bugs in Xen. All software
has bugs, and eventually they get fixed, hopefully before the software
wears out. There are certainly no theoretical problems with hypervisors
as a class.
>
> A really tough challenge, I think.
> --
> ~~~~~
> Register Today for Cloud Slam 2010 at official website -
> http://cloudslam10.com
> Posting guidelines:
> http://groups.google.ca/group/cloud-computing/web/frequently-asked-questions
> Follow us on Twitter http://twitter.com/cloudcomp_group or
> @cloudcomp_group
> Post Job/Resume at http://cloudjobs.net
> Buy 88 conference sessions and panels on cloud computing on DVD at
> http://www.amazon.com/gp/product/B002H07SEC,
> http://www.amazon.com/gp/product/B002H0IW1U or get instant access to
> downloadable versions at
> http://cloudslam09.com/content/registration-5.html
>
> ~~~~~
> You received this message because you are subscribed to the Google
> Groups "Cloud Computing" group.
> To post to this group, send email to cloud-c...@googlegroups.com
> To unsubscribe from this group, send email to
> cloud-computi...@googlegroups.com
If we don't put those applications in hypervisors (for performance
reasons), can we keep the same level of security in such a system? Or is
it the opposite: is it much easier to manage because it's just an
application like any other?
This is very important in the high-performance field, where we still
have codes written in C or FORTRAN using libraries like MPI, and we
want to deploy them on the cloud to take advantage of the
infrastructure available to us.
I hope you have not misunderstood my postings about the hypervisor and
its security. I did not post them to promote any FUD; I posted them to
create some discussion about hypervisor security in particular and cloud
security in general. So you do not need to worry that much about the
possibility of someone hacking into the hypervisor in a public cloud.
The probability of someone hacking into a hypervisor in a cloud could be
low, so I wouldn't worry about everything I read about cloud insecurity.
If you do not use a virtualized (hypervised) environment, then you are
back to the regular data center environment. It appears your application
is an HPC application, so you are probably running it in a grid
environment. If you want to run it in today's clouds, you may have no
choice but to use a virtualized/hypervised environment. You cannot run
some applications in a virtualized environment and some outside it in a
cloud, at least AFAIK.
Maybe there are some grid clouds available out there that you may want
to run your application in.
-----Original Message-----
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of Benmerar Tarik
Zakaria
Sent: Friday, January 01, 2010 10:48 AM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] what role did virtual machine play in Cloud
Computing?
Worse yet, the laid-off admins working as baggage handlers for the CEO's private jet. :-)
There is also the well-known BOINC approach to running certain types of parallel (or embarrassingly so) codes. It combines some cloud-like aspects of computing with some grid-like aspects (e.g. schedulers, batches of tasks, etc.)
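As a toy illustration of that batch-of-tasks model (purely a sketch; real BOINC distributes work units over the network to volunteer machines), independent work units can complete in any order and are simply collected as they finish:

```python
# BOINC-style batches: independent ("embarrassingly parallel") work units
# run concurrently and results are collected in whatever order they finish.
# Illustrative sketch only; the computation itself is a trivial stand-in.
from concurrent.futures import ThreadPoolExecutor, as_completed

def work_unit(seed):
    return seed, seed ** 2  # stand-in for a real computation

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(work_unit, s) for s in range(8)]
    results = dict(f.result() for f in as_completed(futures))

print(sorted(results.items()))  # completion order does not matter
```

Because no work unit depends on another, the scheduler is free to batch, retry, or reassign tasks, which is exactly what makes this class of codes a good fit for both grids and clouds.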
Rob
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Miha Ahronovitz
Sent: Friday, January 01, 2010 6:21 PM
To: cloud-c...@googlegroups.com
Subject: Re: [ Cloud Computing ] what role did virtual machine play in Cloud Computing?
--
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Jan Klincewicz
Sent: Monday, December 28, 2009 2:57 PM
To: cloud-c...@googlegroups.com
Subject: Re: [ Cloud Computing ] what role did virtual machine play in Cloud Computing?
@Gabor:
Do you think SalesForce.com provides a PHYSICAL machine for every
instance they offer customers? Or do you mean end customers may give up
running their OWN VMs (and calling them "Private Clouds")?
In any event, I don't think server virtualization is going anywhere soon ...
On Mon, Dec 28, 2009 at 1:46 AM, Gabor Fulop <Gabor...@cloudharbor.com> wrote:
I agree that we can implement Cloud without VM. If you are familiar with SalesForce.com (a SaaS example of Cloud Computing) then you may be familiar with Force.com (by the same company), which I consider a PaaS or BPaaS (Business Platform-as-a-Service), because it is a platform for creating applications. I wonder if any would agree that VM is the obsolete version of cloud because there is so much power available directly from shared service platforms that virtual sharing will soon be a thing of the past.
Yours inquisitively,
Gabor
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of smith jack
Sent: Sunday, December 27, 2009 6:39 PM
To: cloud-c...@googlegroups.com
Subject: [ Cloud Computing ] what role did virtual machine play in Cloud Computing?
I think this is a quite open question. We can implement Cloud Computing
without VMs indeed, and I am sure a lot of Cloud Systems are implemented
without VMs. So what is the advantage of VMs in Cloud Computing? (Easy
deployment?)
Any reply is appreciated.
--
Ray Nugent schrieb:
> Using different VMs does not buy you anything security-wise if the
> provider can access either your VMs or your storage or both (hint: they
> can). So this is not a viable security solution. It should be pointed
> out that Salesforce, eBay, Google and the rest of the SaaS crowd can
> also access your data, and some even use it and sell it. Virtually all
> SaaS providers collect data about you and use or sell it.
Yes, but some jurisdictions (not the USA, AFAIK) do have legal recourse
against misuse of personal data, including a legal right to have
incorrect data corrected or deleted. That is why it is so important that
any cloud framework needs to have controls on where data physically
resides (i.e. which jurisdiction applies).
-- Roland
Truer words were never spoken. :) And that's why it's a very hard
problem. But it's a problem that eventually needs to be solved. Perhaps
the HotSpot runtime of the future will inspect code as it's running and
make algorithmic alterations instead of just byte/opcode replacement.
>What I would challenge you and Jan on is that you're only providing
>answers to the supply management side, i.e. abstract and stack apps by VM.
>
>What is missing in your solution formula is the demand management
>equation. To ensure both performance and maximum efficiency you must
>include with the VMs run-time managers that drive the workload to the core/VM.
But workloads vary. If you assume workloads are equal or quantizable,
perhaps you've got a shot. I had more luck using JavaSpaces without
virtualization to drive work to workers based on workload affinity,
i.e., machines with GPUs, fast caches, 4 cores, x86 performance vs.
SPARC throughput, database connectivity, et al.
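That affinity matching can be sketched roughly like this (a simplification of what JavaSpaces-style workers do; the worker names and capability tags below are made up for illustration):

```python
# Sketch of capability-affinity routing: send each unit of work to a worker
# whose advertised attributes satisfy the work's requirements. All worker
# names and capability tags here are illustrative only.
workers = [
    {"name": "gpu-box", "caps": {"gpu", "fast-cache"}},
    {"name": "db-box",  "caps": {"x86", "db-connectivity"}},
    {"name": "generic", "caps": {"x86"}},
]

def route(requirements):
    """Return the first worker advertising every required capability,
    or None if the task should wait in a queue until one appears."""
    for w in workers:
        if requirements <= w["caps"]:
            return w["name"]
    return None

print(route({"gpu"}))                     # matched by gpu-box
print(route({"x86", "db-connectivity"}))  # matched by db-box
print(route({"sparc"}))                   # no match: None
```

In a real space-based system the matching is done by the workers themselves taking tasks whose templates they satisfy, rather than by a central router, but the affinity idea is the same.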
>There are mature managers from Appistry, Tibco and IBM that manage
>the runtime demand containers (app servers, web servers, event
>managers, rules engines, message queues, etc.) with which COTS and
>custom apps can be controlled without code changes. By inserting
>such control you can optimally ensure QoS in terms of performance,
>cost and efficiency AND exploit the VM/core strategy.
While this can be effective for certain enterprise applications, I posit
this can be done more effectively with tuple spaces rather than
coarse-grained "demand containers". But either way, it's still
coarse-grained "QoS" optimization.
>Now perhaps you got stuck in the batch management days at Lehman and
>forgot to look outside the HPC/job space to general purpose computing! :)
The Cloud notion is a refinement of
batch/services-grid/HPC/job-submission/parallelism/network architecture.
Understanding these components gives you a deeper understanding of
general-purpose computing. Perhaps you should try enterprise services
management and look inside to see how it's done? :)
Frank G.
When I worked at Fermilab I was amazed at the amount of time spent
managing applications due to the lack of an effective virtualization
mechanism. Different applications are going to require different OS and
library patches. There is simply no way around that. VMs allow those
requirements to be bundled together and tested as a unit rather than
having a quicksand of OS changes occurring underneath an application.
rw2
It also enables key benefits in terms of machine image portability and
reuse.
I vote that computing virtualization has a big role in the IaaS model of
both public and private cloud computing.
Stephen
Just to pick one item: Virtualization overhead of 10-30%? I could
concoct an example like that, hand-picking some worst cases of app
characteristics and bad hypervisor support. But were it generally the
case, virtualization just wouldn't be used as much as it is.
Greg Pfister
http://perilsofparallel.blogspot.com/
It would be possible to draw such a biased conclusion by taking the AGGREGATE overhead of, say, 20 Linux VMs running on Xen, each having 2% overhead compared to bare-metal servers. Does that mean running in that fashion is equal to purchasing, installing, maintaining and paying maintenance on 19 PHYSICAL servers? I think not.
Whenever I see arguments with extreme data (in either direction) I think it important to see if the author has an agenda of any sort (i.e. a product which mitigates the downside).
Certainly, to get the most out of virtualization, shared storage is important (for high availability, failover, live migration, etc.), but it is not mandatory. Additionally, most data centers I have encountered in the past decade already OWN some SAN or NAS device and are merely leveraging what they already own. Sharing of infrastructure such as HBAs and high-end offloading NICs by hosting VMs is FAR more efficient than purchasing individual cards for physical servers, and providing port density to accommodate them. Virtualization, by use of internal virtual switches, radically decreases the complexity and failure exposure of physical cabling.
As Greg asks, why else would it be so popular? Sure it is complex, probably more so than constructing a purely physical counterpart from a software perspective, but I suspect this balances out by requiring fewer (but smarter) bodies to maintain the environment.
So let's hear Intelicloud's "better way."
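Jan's arithmetic is worth spelling out. A rough sketch with illustrative numbers (the per-VM overhead and server cost are assumptions, not measurements):

```python
# Rough sketch of the 20-VMs-at-2% argument: the aggregate hypervisor
# overhead is nowhere near the cost of 19 extra physical servers.
# Per-VM overhead and server cost below are assumed figures.

vms = 20
overhead_per_vm = 0.02      # assumed fractional overhead per guest
server_cost = 5000          # assumed cost of one physical server, USD

# Aggregate capacity lost to virtualization, in "server equivalents":
lost_capacity = vms * overhead_per_vm            # about 0.4 of one server
overhead_cost = lost_capacity * server_cost      # about $2,000

# Hardware avoided by consolidating 20 workloads onto one host:
avoided_cost = (vms - 1) * server_cost           # $95,000

print(f"overhead ~${overhead_cost:,.0f} vs. hardware avoided ${avoided_cost:,}")
```

Even tripling the assumed per-guest overhead leaves the hardware savings an order of magnitude larger, which is Jan's point about aggregate-overhead comparisons.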
On Jan 12, 3:50 pm, Jan Klincewicz <jan.klincew...@gmail.com> wrote:
> Much of this is just plain wrong.....specifically hypervisor costs vs.
> physical servers. Many "facts" are incorrect, and the suppositions even
> more so. Simple arithmetic can bear this out, and in the absence of any
> specific examples, I would tend to discount much of what is proposed here.
>
>
>
>
>
> On Tue, Jan 12, 2010 at 3:47 PM, Khazret Sapenov <sape...@gmail.com> wrote:
> > Here's an interesting perspective on virtualisation pros and cons from Mr.
> > Staimer and the Intelicloud guys:
>
> > Virtualization Approach Strengths
> > For an online services provider, the fundamental appeal of the
> > virtualization approach comes from the perception that it significantly
> > reduces both server and storage hardware costs. It can actually reduce some
> > costs, albeit to a much lesser extent than touted in the hype.
> > Operationally, both server and hardware virtualization considerably reduce
> > scheduled downtime for upgrades, moves, changes, data migration and
> > additions. For virtualized servers, it is the hypervisor's ability to move
> > OS guests around on different physical servers live, online and even in
> > mid-transaction with no application downtime. For virtual storage, the
> > abstraction of the storage image from the actual storage (both SAN and NAS)
> > allows maintenance, changes, moves and, most importantly, data migration to
> > occur non-disruptively online. These capabilities greatly improve SLA
> > (service level agreement) management and simplify data protection disaster
> > recovery procedures.
>
> > Virtualization Approach Gotchas
> > The first weakness to the virtualization approach is that the
> > infrastructure costs are always much higher than expected, usually exceeding
> > the savings in hardware costs. Most server virtualization implementations
> > require networked storage, with the preferred storage being SAN-based
> > storage. SAN storage infrastructure means storage, switches, adapters,
> > cables, interfaces, power, cooling, rack space and floor space. The costs
> > are far from trivial.
> > Then there are the hidden hypervisor infrastructure costs. There is no such
> > thing as a free lunch, and hypervisors are no exception. Hypervisors have
> > overhead. The overhead commonly ranges from 10% to 30% of the server's
> > resources, depending on the number of guests. That means up to 25% of the
> > physical server's total cost of ownership produces nothing, and growth
> > requires up to 25% more physical servers.
> > A much stickier issue is the lack of automatic integration between the
> > virtual servers, virtual storage, SAN storage, NAS, infrastructure,
> > networks, power, cooling, etc. Whenever there is a change or growth in one
> > part of the total infrastructure, it most likely requires change or growth
> > in other parts, meaning that there must be extraordinary planning,
> > communication, coordination and cooperation between "human" administrators
> > of applications, servers, storage, storage networks, TCP/IP networks, plant,
> > cables, etc. It may sound difficult, and it is even more difficult than it
> > sounds.
> > Issues and problems, especially about performance, crop up all the time.
> > Because there is no automatic integration within the infrastructure,
> > troubleshooting is a blood-chilling nightmare. Take, for example, the all
> > too common issue of "too much" oversubscription within the infrastructure.
> > Besides requiring unprecedented levels of multi-departmental cooperation,
> > attempting to isolate the root cause of an application slow down or failure
> > provides no easy way or guarantee of determining where the "too much"
> > oversubscription is occurring.
>
> > Too much oversubscription is not just a storage phenomenon; it can and will
> > occur in the TCP/IP network, leading to severe congestion events that
> > decimate users' application performance. So when an application begins to
> > fall below SLA performance requirements, how will the admin know where to
> > look first? Is the "too-much" oversubscription in the network? Is it in the
> > physical server? Is it in the SAN? Is it in the virtualized storage? Is it
> > in the volume? Is it in the storage system? Is it in all the above? This is
> > a troubling problem with this type of discrete architectural (a.k.a.
> > best-of-breed) approach that has no simple answers.
> > The integration problems get much worse and even more difficult as the
> > virtualization approach scales. It reaches a point when there are just not
> > enough service provider IT professionals or hours in the day to manage the
> > ongoing integration issues. It is like squeezing a balloon – fix or squeeze
> > something here, and it bulges out there.
> > What becomes all too apparent is that the previously discussed issue of
> > greater than expected capital expenditures is just the tip of the iceberg.
> > The operating expenditures in time, maintenance and human assets turn out
> > to be far beyond all expectations. It's further exacerbated by power and cooling
> > requirements of discrete systems not designed to cooperate in optimizing
> > energy consumption.
> > Service providers with either of these approaches attempt to manage their
> > problems by limiting them to what has become known as the traditional silo
> > model. The silo model limits the size or scale of each silo so that any
> > individual silo does not become unmanageable. The problem with the
> > traditional silo model is that contrary to conventional wisdom, adding silos
> > does not merely increase management requirements linearly. It actually
> > increases management requirements exponentially. This is because each
> > additional silo requires some level of load balancing for application
> > access, data, networks and/or data protection between silos, usually
> > requiring ongoing data migration. As the numbers of silos grows, so does the
> > complexity. The formulas for load balancing and data migration become
> > unwieldy, eventually becoming unreliable and unsustainable.
> > Service providers grossly underestimating cost and complexity will often
> > mean the difference between profit and loss. There has to be a better way.
>
> > Source: http://www.intelicloud.com/next-gen/white-papers.html
>
> > On Tue, Jan 12, 2010 at 2:14 PM, Stephen Fleece <sfle...@tmforum.org>wrote:
>
> >> Though I don't have research to prove it, I expect a more expensive form
> >> of overhead is human labor without computing virtualization. �I suspect
> >> the computing cycle overhead of virtualization is inexpensive to most
> >> businesses, compared to the incremental human labor costs to provision
> >> and administer operating system software directly against a physical
> >> machines without virtualization.
>
> >> It also enables key benefits in terms of machine image portability and
> >> reuse.
>
> >> I vote that computing virtualization has a big role in the IaaS model of
> >> both public and private cloud computing.
>
> >> Stephen
>
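The whitepaper's "exponential" wording is loose, but the super-linear growth it gestures at is easy to make concrete: if every unordered pair of silos may need a load-balancing or data-migration relationship, relationships grow quadratically with silo count. A small sketch:

```python
# If every unordered pair of silos may need a load-balancing or
# data-migration relationship, relationships grow quadratically,
# not linearly, with the number of silos.

def migration_paths(silos: int) -> int:
    # n silos -> n * (n - 1) / 2 pairwise relationships
    return silos * (silos - 1) // 2

for n in (2, 4, 8, 16):
    print(f"{n:>2} silos -> {migration_paths(n):>3} pairwise paths")
```

Doubling the silo count roughly quadruples the relationships to manage, which is a plausible reading of the whitepaper's complexity claim even if "exponential" overstates it.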
Completely agree with Jim on the effect of virtualization. The effect on disk and network resources is palpable in large virtual farms. This is why much R&D is being placed into highly efficient and intelligent disk elements to relieve the ‘squeezed balloon’ effect that hypervisors tend to exhibit. Boot storms are one example of such, especially in virtual desktop environments, and this is highly germane to many cloud providers.
Rob
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Jim Starkey
Sent: Wednesday, January 13, 2010 12:23 PM
To: cloud-c...@googlegroups.com
Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?
I found the post not only insightful, but one that changed my ideas about private clouds.
Sure, compute cycles are compute cycles whether on hard iron or VMs. But disk and network traffic is something else again. Yes, there is an overhead induced by the hypervisor, but there is also an effect, probably a great deal more significant, of contention among the VMs for disk and network resources. Combine disk-bound, network-bound, or CPU-bound applications on a single server, and everyone is going to suffer.
And yes, there are ways around those problems, but the solutions are different from the solutions on hard iron, there's a learning curve to pay for, and ultimately more, not less, administration may be required. Pre-VM, it was necessary to administer each of the applications and the dedicated server. Post-VM, those administration costs are still there, but now there are additional administration expenses due to contention among the servers, now virtual, as well as managing more sophisticated storage.
None of this should be surprising, since the world went to dedicated servers to save on administration costs in the first place.
So I guess the bottom line is to balance the gain from reducing the number of physical servers against the incremental cost of administering more complex servers.
Too close to eyeball for me.
(Jan, not everyone who disagrees with you has a hidden nefarious agenda.)
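That bottom line can be framed as a simple break-even model. The figures below are assumptions for illustration only, not data from either side of the argument:

```python
# Break-even sketch for the trade-off above: hardware saved by
# consolidation versus the extra administration that more complex,
# contended servers require. All figures are illustrative assumptions.

def consolidation_balance(servers_avoided, server_cost,
                          extra_admin_hours, hourly_rate):
    """Positive result favors virtualizing; negative favors hard iron."""
    hw_savings = servers_avoided * server_cost
    admin_cost = extra_admin_hours * hourly_rate
    return hw_savings - admin_cost

# e.g. 10 servers avoided at $4,000 each vs. 300 extra admin hours at $80/hr:
net = consolidation_balance(10, 4000, 300, 80)
print(f"net gain: ${net:,}")
```

With these numbers the balance is positive but not overwhelming, which matches the "too close to eyeball" sentiment: modest shifts in admin overhead flip the sign.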
A lot of people can live with "palpable" compared to "irretrievable" or "devastating." Again, I do not offer hypervisor-based virtualization as the ultimate answer, but in the absence of viable alternatives (and I do not see a plethora of these being offered) it seems pretty popular.
Boot Storms, to my knowledge, typically occur after disasters of some sort (in a well-designed DC.) If disasters are a daily occurrence, I would suggest there is something amiss in the original architecture of such a DC. Aside from that (post-Katrina FEMA notwithstanding) I think most organizations are cut a little slack after an act-of-god type occurrence.
I will be happy to change my position 180 degrees when presented with a working alternative, but until then, I maintain the position that hypervisor-based virtualization is and will continue to be the primary means of deploying Cloud servers for the foreseeable future.
As for boot storms, this is an interesting phenomenon. Consider a cloud provider getting a request to stand up 1,500 VMs in the next 10 minutes. That’s a boot storm. Or, a VDI implementation where the clients all come in and either boot or login (I’ve seen VDI now that stands up/tears down the VD on login/logout, very interesting) and the network & storage get blasted with a billion (yes with a B) or more I/Os. Of course, everyone wants their server or desktop to boot ASAP. It’s a huge strain on legacy storage and/or cheap disk-in-server.
Rob
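For what it's worth, one common mitigation for that scenario is to stagger power-on so shared storage sees a bounded number of concurrent boots. A sketch with assumed batch sizes and wave timings (not vendor guidance):

```python
# Staggered power-on: bound the number of concurrent boots so shared
# storage never absorbs the full storm at once. Batch size and per-wave
# boot time below are assumptions for illustration.

def boot_schedule(vm_count, concurrent, wave_seconds):
    """Return (waves, total_seconds) for a staggered boot plan."""
    waves = -(-vm_count // concurrent)   # ceiling division
    return waves, waves * wave_seconds

# Rob's example: 1,500 VMs, 50 per wave, ~90 s per wave:
waves, total = boot_schedule(1500, 50, 90)
print(f"{waves} waves, about {total // 60} minutes end to end")
```

The trade-off is explicit: smaller waves protect the back end but stretch the time to full service, so the wave size has to come from what the storage can actually sustain.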
> > > additions. For virtualized servers, it is the hypervisor�s
> server�s
> > > resources, depending on the number of guests. That means up to
> 25% of the
> > > physical server�s total cost of ownership produces nothing,
> and growth
> > > requires up to 25% more physical servers.
> > > A much stickier issue is the lack of automatic integration
> between the
> > > virtual servers, virtual storage, SAN storage, NAS,
> infrastructure,
> > > networks, power, cooling, etc. Whenever there is a change or
> growth in one
> > > part of the total infrastructure, it most likely requires
> change or growth
> > > in other parts, meaning that there must be extraordinary planning,
> > > communication, coordination and cooperation between �human�
> administrators
> > > of applications, servers, storage, storage networks, TCP/IP
> networks, plant,
> > > cables, etc. It may sound difficult, and it is even more
> difficult than it
> > > sounds.
> > > Issues and problems, especially about performance, crop up all
> the time.
> > > Because there is no automatic integration within the
> infrastructure,
> > > troubleshooting is a blood-chilling nightmare. Take, for
> example, the all
> > > too common issue of �too much� oversubscription within the
> infrastructure.
> > > Besides requiring unprecedented levels of multi-departmental
> cooperation,
> > > attempting to isolate the root cause of an application slow
> down or failure
> > > provides no easy way or guarantee of determining where the
> �too much�
> > > oversubscription is occurring.
> >
> > > Too much oversubscription is not just a storage phenomenon; it
> can and will
> > > occur in the TCP/IP network, leading to severe congestion
> events that
> > > decimate users� application performance. So when an
> application begins to
> > > fall below SLA performance requirements, how will the admin
> know where to
> > > look first? Is the �too-much� oversubscription in the network?
> Is it in the
> > > physical server? Is it in the SAN? Is it in the virtualized
> storage? Is it
> > > in the volume? Is it in the storage system? Is it in all the
> above? This is
> > > a troubling problem with this type of discrete architectural
> (a.k.a.
> > > best-of-breed) approach that has no simple answers.
> > > The integration problems get much worse and even more
> difficult as the
> > > virtualization approach scales. It reaches a point when there
> are just not
> > > enough service provider IT professionals or hours in the day
> to manage the
> > > ongoing integration issues. It is like squeezing a balloon �
> fix or squeeze
> > > something here, and it bulges out there.
> > > What becomes all too apparent is that the previously discussed
> issue of
> > > greater than expected capital expenditures is just the tip of
> the iceberg.
> > > The operating expenditures in time, maintenance and human
> assets turn out to
> > > far beyond all expectations. It�s further exacerbated by power
> <sfle...@tmforum.org <mailto:sfle...@tmforum.org>>wrote:
Actually, 1500 VMs within 10 minutes is a pretty common use case for things like automated test and failover. A well-known cloud test vendor recently tried to get 1500 from AWS and could not get the request filled.
Ray
Sent: Wed, January 13, 2010 11:45:15 AM
Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?
Well, if a Cloud Provider cheaps out on storage (or any component, for that matter) and is not able to meet a customer's needs, it will lose customers. 1500 VMs with 10 minutes' notice might indicate some poor planning on the part of the customer, though, don't you think? Then again, that IS the premise of CC <g>.
On Wed, Jan 13, 2010 at 2:35 PM, Peglar, Robert <Robert...@xiotech.com> wrote:
As for boot storms, this is an interesting phenomenon. Consider a cloud provider getting a request to stand up 1,500 VMs in the next 10 minutes. That’s a boot storm. Or, a VDI implementation where the clients all come in and either boot or login (I’ve seen VDI now that stands up/tears down the VD on login/logout, very interesting) and the network & storage get blasted with a billion (yes with a B) or more I/Os. Of course, everyone wants their server or desktop to boot ASAP. It’s a huge strain on legacy storage and/or cheap disk-in-server.
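A back-of-envelope sketch of why such a storm hurts (the per-VM I/O count below is my own assumption; real boot images vary widely by OS, drivers, and services):

```python
# Back-of-envelope sizing for the scenario above: 1,500 VMs booting
# within a 10-minute window. IOS_PER_BOOT is an assumed figure, not
# a measurement.
VMS = 1500
IOS_PER_BOOT = 50_000        # assumed I/Os to boot one VM image
WINDOW_SECONDS = 10 * 60     # the 10-minute window

total_ios = VMS * IOS_PER_BOOT
required_iops = total_ios / WINDOW_SECONDS

print(f"{total_ios:,} total I/Os -> {required_iops:,.0f} sustained IOPS")
```

Even with a modest per-boot figure, the shared array has to absorb six-figure sustained IOPS for ten straight minutes, which is exactly the strain on legacy storage described above.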
Rob
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Jan Klincewicz
Sent: Wednesday, January 13, 2010 1:28 PM
To: cloud-c...@googlegroups.com
Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?
A lot of people can live with "palpable" compared to "irretrievable" or "devastating." Again, I do not offer hypervisor-based virtualization as the ultimate answer, but in the absence of viable alternatives (and I do not see a plethora of these being offered) it remains pretty popular.
Boot storms, to my knowledge, typically occur after disasters of some sort (in a well-designed DC). If disasters are a daily occurrence, I would suggest there is something amiss in the original architecture of such a DC. Aside from that (post-Katrina FEMA notwithstanding), I think most organizations are cut a little slack after an act-of-God occurrence.
I will be happy to change my position 180 degrees when presented with a working alternative, but until then, I maintain the position that hypervisor-based virtualization is and will continue to be the primary means of deploying Cloud servers for the foreseeable future.
****************************************************************************************************************************************************************************************************************
Completely agree with Jim on the effect of virtualization. The effect on disk and network resources is palpable in large virtual farms. This is why much R&D is being placed into highly efficient and intelligent disk elements to relieve the ‘squeezed balloon’ effect that hypervisors tend to exhibit. Boot storms are one example of such, especially in virtual desktop environments, and this is highly germane to many cloud providers.
Rob
Vice President, Technology, Storage Systems Group
Xiotech Corporation | Toll-Free: 866.472.6764
o 952 983 2287 m 314 308 6983 f 636 532 0828
Robert...@xiotech.com | www.xiotech.com
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Jim Starkey
Sent: Wednesday, January 13, 2010 12:23 PM
Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?
I found the post not only insightful, but one that changed my ideas about private clouds.
Sure, compute cycles are compute cycles whether on hard iron or VMs. But disk and network traffic is something else again. Yes, there is overhead induced by the hypervisor, but there is also an effect, probably a great deal more significant, of contention among the VMs for disk and network resources. Combine disk-bound, network-bound, or CPU-bound applications on a single server, and everyone is going to suffer.
And yes, there are ways around those problems, but the solutions are different from the solutions on hard iron, there's a learning curve to pay for, and ultimately more, not less, administration may be required. Pre-VM, it was necessary to administer each of the applications and the dedicated server. Post-VM, those administration costs are still there, but now there are additional administration expenses due to contention among the servers, now virtual, as well as managing more sophisticated storage.
None of this should be surprising, since the world went to dedicated servers to save on administration costs in the first place.
So I guess the bottom line is balancing the gain from reducing the number of physical servers against the incremental cost of administering more complex servers.
Too close to eyeball for me.
(Jan, not everyone who disagrees with you has a hidden nefarious agenda.)
Jan Klincewicz wrote: It would be possible to draw such a biased conclusion by taking the AGGREGATE overhead of, say, 20 Linux VMs running on Xen, each having 2% overhead compared to bare-metal servers. Does that mean running in that fashion is equal to purchasing, installing, maintaining and paying maintenance on 19 PHYSICAL servers? I think not.
Whenever I see arguments with extreme data (in either direction), I think it important to see if the author has an agenda of any sort (i.e., a product which mitigates the downside).
Certainly, to get the most out of virtualization, shared storage is desirable (for high availability, failover, live migration, etc.), but it is not mandatory. Additionally, most data centers I have encountered in the past decade already OWN some SAN or NAS device and are merely leveraging what they already own. Sharing of infrastructure such as HBAs and high-end offloading NICs by hosting VMs is FAR more efficient than purchasing individual cards for physical servers and providing port density to accommodate them. Virtualization, by use of internal virtual switches, radically decreases the complexity and failure exposure of physical cabling.
As Greg states, why else would it be so popular? Sure it is complex, probably more so than constructing a purely physical counterpart from a software perspective, but I suspect this balances out by requiring fewer (but smarter) bodies to maintain the environment.
So let's hear Intelicloud's "better way."
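Spelling out Jan's arithmetic (his 2% per-VM overhead is an illustrative figure from this thread, not a measurement):

```python
# Jan's consolidation arithmetic: 20 Linux VMs on one Xen host, each
# paying an assumed 2% hypervisor overhead, versus owning 20 separate
# physical servers.
vms = 20
overhead_per_vm = 0.02                    # illustrative figure

# Aggregate overhead, in "server equivalents" of capacity lost:
overhead_servers = vms * overhead_per_vm  # ~0.4 of one server

# Physical servers avoided by consolidating onto a single host:
servers_avoided = vms - 1                 # 19

print(f"capacity lost: {overhead_servers:.1f} server-equivalents; "
      f"servers avoided: {servers_avoided}")
```

Summing the per-VM overhead to 40% of one server is nowhere near the cost of buying, installing, and maintaining 19 physical boxes, which is Jan's point.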
***************************************************************************
-----Original Message-----
From: Jim Starkey <jsta...@nimbusdb.com>
Sent: Wednesday, January 13, 2010 04:23 PM
To: cloud-c...@googlegroups.com
Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?
Jan Klincewicz wrote:
> Well, if a Cloud Provider cheaps out on storage (or any component for
> that matter) and are not able to provide a customer's needs, they will
> lose customers. 1500 VMs with 10 minute notice might indicate some
> poor planning on the part of a customer, though, don't you think ??
> Then again, that IS the premise of CC <g>.
Whether or not it's poor planning on the part of a few customers, aren't
all customers going to get hammered as the disk arms melt and the
network I/O backs up?
On Tue, Jan 12, 2010 at 9:14 PM, Greg Pfister <greg.p...@gmail.com> wrote:
Unfortunately, I completely agree with Jan.
Just to pick one item: Virtualization overhead of 10-30%? I could
concoct an example like that, hand-picking some worst cases of app
characteristics and bad hypervisor support. But were it generally the
case, virtualization just wouldn't be used as much as it is.
Greg Pfister
http://perilsofparallel.blogspot.com/
On Jan 12, 3:50 pm, Jan Klincewicz <jan.klincew...@gmail.com> wrote:
> Much of this is just plain wrong.....specifically hypervisor costs vs.
> physical servers. Many "facts" are incorrect, and the suppositions even
> more so. Simple arithmetic can bear this out, and in the absence of any
> specific examples, I would tend to discount much of what is proposed here.
>
>
>
>
>
> On Tue, Jan 12, 2010 at 3:47 PM, Khazret Sapenov <sape...@gmail.com> wrote:
> > Here's an interesting perspective on virtualisation pro and cons from Mr.
> > Staimer and Intelicloud guys:
>
> > Virtualization Approach Strengths
> > For an online services provider, the fundamental appeal of the
> > virtualization approach comes from the perception that it significantly
> > reduces both server and storage hardware costs. It can actually reduce some
> > costs, albeit to a much lesser extent than touted in the hype.
> > Operationally, both server and hardware virtualization considerably reduce
> > scheduled downtime for upgrades, moves, changes, data migration and
> > additions. For virtualized servers, it is the hypervisor’s ability to move
> > OS guests around on different physical servers live, online and even in
> > mid-transaction with no application downtime. For virtual storage, the
> > abstraction of the storage image from the actual storage (both SAN and NAS)
> > allows maintenance, changes, moves and, most importantly, data migration to
> > occur non-disruptively online. These capabilities greatly improve SLA
> > (service level agreement) management and simplify data protection disaster
> > recovery procedures.
>
> > Virtualization Approach Gotchas
> > The first weakness to the virtualization approach is that the
> > infrastructure costs are always much higher than expected, usually exceeding
> > the savings in hardware costs. Most server virtualization implementations
> > require networked storage, with the preferred storage being SAN-based
> > storage. SAN storage infrastructure means storage, switches, adapters,
> > cables, interfaces, power, cooling, rack space and floor space. The costs
> > are far from trivial.
> > Then there are the hidden hypervisor infrastructure costs. There is no such
> > thing as a free lunch, and hypervisors are no exception. Hypervisors have
> > overhead. The overhead commonly ranges from 10% to 30% of the server’s
> > resources, depending on the number of guests. That means up to 25% of the
> > physical server’s total cost of ownership produces nothing, and growth
> > requires up to 25% more physical servers.
> > A much stickier issue is the lack of automatic integration between the
> > virtual servers, virtual storage, SAN storage, NAS, infrastructure,
> > networks, power, cooling, etc. Whenever there is a change or growth in one
> > part of the total infrastructure, it most likely requires change or growth
> > in other parts, meaning that there must be extraordinary planning,
> > communication, coordination and cooperation between “human” administrators
> > of applications, servers, storage, storage networks, TCP/IP networks, plant,
> > cables, etc. It may sound difficult, and it is even more difficult than it
> > sounds.
> > Issues and problems, especially about performance, crop up all the time.
> > Because there is no automatic integration within the infrastructure,
> > troubleshooting is a blood-chilling nightmare. Take, for example, the all
> > too common issue of “too much” oversubscription within the infrastructure.
> > Besides requiring unprecedented levels of multi-departmental cooperation,
> > attempting to isolate the root cause of an application slow down or failure
> > provides no easy way or guarantee of determining where the “too much”
> > oversubscription is occurring.
>
> > Too much oversubscription is not just a storage phenomenon; it can and will
> > occur in the TCP/IP network, leading to severe congestion events that
> > decimate users’ application performance. So when an application begins to
> > fall below SLA performance requirements, how will the admin know where to
> > look first? Is the “too-much” oversubscription in the network? Is it in the
> > physical server? Is it in the SAN? Is it in the virtualized storage? Is it
> > in the volume? Is it in the storage system? Is it in all the above? This is
> > a troubling problem with this type of discrete architectural (a.k.a.
> > best-of-breed) approach that has no simple answers.
> > The integration problems get much worse and even more difficult as the
> > virtualization approach scales. It reaches a point when there are just not
> > enough service provider IT professionals or hours in the day to manage the
> > ongoing integration issues. It is like squeezing a balloon – fix or squeeze
> > something here, and it bulges out there.
> > What becomes all too apparent is that the previously discussed issue of
> > greater than expected capital expenditures is just the tip of the iceberg.
> > The operating expenditures in time, maintenance and human assets turn out to be
> > far beyond all expectations. It’s further exacerbated by power and cooling
> > requirements of discrete systems not designed to cooperate in optimizing
> > energy consumption.
> > Service providers with either of these approaches attempt to manage their
> > problems by limiting them to what has become known as the traditional silo
> > model. The silo model limits the size or scale of each silo so that any
> > individual silo does not become unmanageable. The problem with the
> > traditional silo model is that contrary to conventional wisdom, adding silos
> > does not merely increase management requirements linearly. It actually
> > increases management requirements exponentially. This is because each
> > additional silo requires some level of load balancing for application
> > access, data, networks and/or data protection between silos, usually
> > requiring ongoing data migration. As the numbers of silos grows, so does the
> > complexity. The formulas for load balancing and data migration become
> > unwieldy, eventually becoming unreliable and unsustainable.
> > For service providers, grossly underestimating cost and complexity will often
> > mean the difference between profit and loss. There has to be a better way.
>
> > Source: http://www.intelicloud.com/next-gen/white-papers.html
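A quick way to see why silo management cost outpaces silo count: if every silo needs load-balancing and data-migration coordination with every other, the pairwise links alone grow quadratically (a sketch; the excerpt's "exponentially" is looser language for the same super-linear effect):

```python
# Why silo management cost outruns silo count: each new silo needs
# load-balancing and data-migration paths to the others. Counting
# just the pairwise inter-silo relationships:
def inter_silo_links(n: int) -> int:
    """Distinct silo pairs that need coordination."""
    return n * (n - 1) // 2

for n in (2, 4, 8, 16):
    print(n, "silos ->", inter_silo_links(n), "coordination paths")
```

Doubling from 8 to 16 silos more than quadruples the coordination paths (28 to 120), which matches the excerpt's claim that adding silos does not merely increase management requirements linearly.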
>
> > On Tue, Jan 12, 2010 at 2:14 PM, Stephen Fleece <sfle...@tmforum.org> wrote:
>
> >> Though I don't have research to prove it, I expect the more expensive form
> >> of overhead, absent virtualization, is human labor. I suspect
> >> the computing cycle overhead of virtualization is inexpensive to most
> >> businesses, compared to the incremental human labor costs to provision
> >> and administer operating system software directly against physical
> >> machines without virtualization.
>
> >> It also enables key benefits in terms of machine image portability and
> >> reuse.
>
> >> I vote that computing virtualization has a big role in the IaaS model of
> >> both public and private cloud computing.
>
> >> Stephen
>
But "the industry" means X86 and VMware or Xen or Hyper-V or
something, with PCIe buses, which can't get out of the way under
virtualization.
So, "it depends" primarily on the amount of IO done by the workload.
Industry average? I'd ***guess*** <10%. Total SWAG, though. I'd like
to hear if anybody has real data.
Gonna go to zero if PCIe virtualization takes off, which will be a
while, I'd guess.
Greg Pfister
http://perilsofparallel.blogspot.com/
On Jan 13, 11:02 am, Ray DePena <ray.dep...@gmail.com> wrote:
> Greg,
>
> Looks like you were referring to the statement below (in quotes). Ok. Fair
> enough.
>
> When I hear generic statements like, "industry wide server utilization is
> 10-30%", we know there are some good IT shops that are highly efficient
> running at 80% and many others that are highly inefficient running at
> 5-10%.
>
> It may or may not average out, but generally speaking for IT shops that have
> not looked at server consolidation, and are not exactly models of
> efficiency, that generic statement of 10-30% resonates with me.
>
> What do you guys think is a reasonable estimate for virtualization overhead
> in the industry? I'd be interested to know your perspectives.
>
> Best Regards,
>
> Ray DePena, MBA, PMP
> +1.916.941.5558
> Twitter: @RayDePena
> LinkedIn:http://www.linkedin.com/in/raydepena
>
> "Hypervisors have overhead. The overhead commonly ranges from 10% to 30% of
> the server’s resources, depending on the number of guests. That means up to
> 25% of the physical server’s total cost of ownership produces nothing, and
> growth
> requires up to 25% more physical servers."
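For what it's worth, the consolidation math behind those utilization figures is easy to sketch (all three inputs below are illustrative numbers pulled from this thread, not measurements):

```python
# Rough consolidation ratio implied by the utilization figures in the
# thread: servers idling at low utilization can be packed onto fewer
# hosts, minus the hypervisor's cut. Inputs are illustrative only.
def consolidation_ratio(current_util, target_util, hypervisor_overhead):
    """How many under-utilized servers fit on one virtualized host."""
    usable = target_util * (1 - hypervisor_overhead)
    return usable / current_util

# 10%-utilized servers, 80% target utilization, 20% hypervisor tax:
print(round(consolidation_ratio(0.10, 0.80, 0.20), 1))
```

Even granting the excerpt's worst-case 20% hypervisor tax, chronically 10%-utilized shops still see better than 6:1 consolidation, which is why the 10-30% overhead number alone doesn't settle the argument.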
--
~~~~~
Register Today for Cloud Slam 2010 at official website - http://cloudslam10.com
Posting guidelines: http://groups.google.ca/group/cloud-computing/web/frequently-asked-questions
Follow us on Twitter http://twitter.com/cloudcomp_group or @cloudcomp_group
Post Job/Resume at http://cloudjobs.net
Buy 88 conference sessions and panels on cloud computing on DVD at
http://www.amazon.com/gp/product/B002H07SEC, http://www.amazon.com/gp/product/B002H0IW1U or get instant access to downloadable versions at http://cloudslam09.com/content/registration-5.html
~~~~~
You received this message because you are subscribed to the Google Groups "Cloud Computing" group.
To post to this group, send email to cloud-c...@googlegroups.com
To unsubscribe from this group, send email to cloud-computi...@googlegroups.com
It will be interesting to see how both HP and Dell are going to balance their Linux & Windows strategy.
Although they have done this all these years, Linux could have a far more advantageous position vis-à-vis Windows 200X (or 20XX?) in the clouds purely from a pricing & volume position, unless of course MS adopts the GPL <g>
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Jan Klincewicz
Sent: Wednesday, January 13, 2010 7:43 PM
To: cloud-c...@googlegroups.com
You do need to host more than one JVM on the box if a single JVM can't max out
memory, CPU, disk, etc., even if you deploy all your applications into
one JVM.
On the other hand, all those SPECjs demonstrate that the amount of
administration needed to max out the resources with VMs smaller than
the horse they share the ride on is non-trivial: you start to play
with CPU affinity, assign one NW interface to each
JVM, ...errrmmmm... and what do they do about sharing disk?
So we've got to embrace virtualization; and yes, to utilize all the
resources, every datacenter becomes "a SPECj" of its own.
Sassa
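Sassa's tuning point can be made concrete. A minimal sketch (Linux-only, using Python's scheduler-affinity calls; the choice of core is arbitrary) of the kind of manual CPU pinning he describes:

```python
import os

# The kind of per-VM/per-JVM tuning Sassa describes: manually
# restricting a process to a subset of cores (Linux-only sketch).
original = os.sched_getaffinity(0)      # cores this process may use now
one_core = {min(original)}
os.sched_setaffinity(0, one_core)       # pin to a single core
assert os.sched_getaffinity(0) == one_core
os.sched_setaffinity(0, original)       # undo the pinning
```

In practice the same effect is achieved with `taskset` or `numactl` when launching each JVM; either way, someone has to decide and maintain these assignments, which is exactly the administration cost in question.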
On Jan 13, 6:23 pm, Jim Starkey <jstar...@nimbusdb.com> wrote:
> I found the post not only insightful, but one that changed my ideas
> about private clouds.
>
> Sure, compute cycles are compute cycles whether on hard iron or VMs.
> But disk and network traffic is something else again. Yes, there is an
> overhead induced by the hypervisor, but there is also an effect,
> probably a great deal more significant, of contention among the VMs for
> disk and network resources. Combine disk bound, network bound, or CPU
> bound applications on a single server, and everyone is going to suffer.
>
> And yes, there are ways around those problems, but the solutions are
> different from the solutions on hard iron, there's a learning curve to
> pay for, and ultimately more, not less, administration may be required.
> Pre-VM, it was necessary to administer each of the applications and the
> dedicated server. Post-VM, those administration costs are still there,
> but now there are additional administration expenses due to contention among
> the servers, now virtual, as well as managing more sophisticated storage.
>
> None of this should be surprising, since the world went to dedicated
> servers to save on administration costs in the first place.
>
> So I guess the bottom line is balancing the gain from reducing the
> number of physical servers against the incremental cost of administering
> more complex servers.
>
> Too close to eyeball for me.
>
> (Jan, not everyone who disagrees with you has a hidden nefarious agenda.)
>
> Jan Klincewicz wrote:
> > It would be possible to draw such a biased conclusion by
> > taking the AGGREGATE overhead of say, 20 Linux VMs running on Xen each
> > having 2% overhead compared to bare-metal servers. Does that mean
> > running in that fashion is equal to purchasing, installing,
> > maintaining and paying maintenance on 19 PHYSICAL servers ? I think not.
>
> > Whenever I see arguments with extreme data (in either
> > direction) I think it important to see if the author has an agenda of
> > any sort (ie. a product which mitigates the downside.)
>
> > Certainly, to get the most out of virtualization, shared
> > storage is necessary (for high-availability, failover, live-migration
> > etc.) but it is not mandatory. Additionally, most data centers I have
> > encountered in the past decade already OWN some SAN or NAS device and
> > are merely leveraging what they already own. Sharing of
> > infrastructure such as HBAs and high-end offloading NICs by hosting
> > VMs is FAR more efficient than purchasing individual cards for
> > physical servers, and providing port density to accommodate them.
> > Virtualization, by use of internal virtual switches radically
> > decreases the complexity and failure exposure of physical cabling.
>
> > As Greg states, why would it be so popular?? Sure it is
> > complex, probably more so than constructing a purely physical
> > counterpart from a software perspective, but I suspect this balances
> > out by requiring fewer (but smarter) bodies to maintain the environment.
>
> > So let's hear Intelicloud's "better way."
>
> > On Tue, Jan 12, 2010 at 9:14 PM, Greg Pfister <greg.pfis...@gmail.com
> > <mailto:greg.pfis...@gmail.com>> wrote:
>
> > Unfortunately, I completely agree with Jan.
>
> > Just to pick one item: Virtualization overhead of 10-30%? I could
> > concoct an example like that, hand-picking some worst cases of app
> > characteristics and bad hypervisor support. But were it generally the
> > case, virtualization just wouldn't be used as much as it is.
>
> > Greg Pfister
> > http://perilsofparallel.blogspot.com/
>
> > On Jan 12, 3:50 pm, Jan Klincewicz <jan.klincew...@gmail.com
> > <mailto:jan.klincew...@gmail.com>> wrote:
> > > Much of this is just plain wrong.....specifically hypervisor
> > costs vs.
> > > physical servers. Many "facts" are incorrect, and the
> > suppositions even
> > > more so. Simple arithmetic can bear this out, and in the
> > <sfle...@tmforum.org <mailto:sfle...@tmforum.org>>wrote:
>
> > > >> Though I don't have research to prove it, I expect a more
> > expensive form
> > > >> of overhead is human labor without computing virtualization.
Greg Pfister
http://perilsofparallel.blogspot.com/
On Jan 13, 8:38 pm, Ray DePena <ray.dep...@gmail.com> wrote:
> Greg,
>
> Excluding mainframes what do you think that SWAG looks like?
>
> On Wed, Jan 13, 2010 at 4:52 PM, Greg Pfister <greg.pfis...@gmail.com>wrote:
>
>
>
> > What's reasonable? Close to zero is possible, but not with PCIe 2.x,
> > due to IO overhead. *Mainframes -- with built-in hardware support for
> > not just CPU and memory, but IO virtualization -- have been running
> > less than 3% for decades.*
>
> > *But "the industry" means X86 and VMware or Xen or Hyper-V or
> > something, with PCIe busses, which can't get out of the way on
> > virtualization.
>
> > So, "it depends" primarily on the amount of IO done by the workload.
> > Industry average? I'd ***guess*** <10%. Total SWAG, though.* I'd like
> ...
>
> read more »
I concur with Ray. One look at the headhunter lists indicates same.
Sent via my PDA. Please forgive any typos, as thumbs are funny things.
I have absolutely no data to back this up, but I would guess industry wide it's higher. More companies virtualizing their compute resources than virtualization professionals who know how to do it efficiently. Just a "gut feel".
On Fri, Jan 15, 2010 at 2:49 PM, Greg Pfister <greg.p...@gmail.com> wrote:
Hm, apparently my syntax got garbled. The SWAG was <10% per VM,
excluding mainframes.
Greg Pfister
http://perilsofparallel.blogspot.com/
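To see why per-VM overhead and aggregate cost pull in different directions, here is a toy back-of-the-envelope calculation (all numbers invented for illustration, echoing Jan's 20-VM example earlier in the thread):

```python
# Toy consolidation arithmetic with made-up numbers: 20 workloads,
# each paying (say) 10% per-VM virtualization overhead.
n_workloads = 20
per_vm_overhead = 0.10            # Greg's SWAG upper bound, per VM

# Capacity lost to virtualization, in "server equivalents":
capacity_lost = n_workloads * per_vm_overhead      # 2.0 servers' worth

# Physical servers avoided by consolidating 20 boxes onto, say, 2 hosts:
servers_avoided = n_workloads - 2                  # 18 machines

# The aggregate overhead costs roughly 2 servers of capacity but saves
# the purchase, power, and maintenance of 18, which is Jan's point.
```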
On Jan 13, 8:38 pm, Ray DePena <ray.dep...@gmail.com> wrote:
> Greg,
>
> Excluding mainframes what do you think that SWAG looks like?
>
> On Wed, Jan 13, 2010 at 4:52 PM, Greg Pfister <greg.pfis...@gmail.com>wrote:
>
>
>
> > What's reasonable? Close to zero is possible, but not with PCIe 2.x,
> > due to IO overhead. *Mainframes -- with built-in hardware support for
> > not just CPU and memory, but IO virtualization -- have been running
> > less than 3% for decades.*
>
> > *But "the industry" means X86 and VMware or Xen or Hyper-V or
> > something, with PCIe busses, which can't get out of the way on
> > virtualization.
>
> > So, "it depends" primarily on the amount of IO done by the workload.
> > Industry average? I'd ***guess*** <10%. Total SWAG, though.* I'd like
> > to hear if anybody has real data..
I am not sure I agree with folks who think virtualization has shifted the paradigm in terms of skill set and experience necessary to do the job.
Considering that virtualization is nothing but a re-invention of what has been done over the last 20 to 30 years....(virtual) desktop, (virtual) server, (virtual) network, (virtual) storage, (virtual) SMP, (virtual) HA, (virtual) FT, (virtual) security...why is it such a big deal for anyone, whether systems administrators, network administrators, developers, architects, or technical managers, to transfer their non-virtualization-based fundamental knowledge to a virtualized environment? The hypervisor itself is nothing but what existed as microkernels before (how many remember Mach from Carnegie Mellon in the late 80s and early 90s?) and is essentially 100% OS technology (OK, maybe not 100%, but at least 95%+; the OS did not have virtual networks to deal with). Even live migration is not new; process migration was tried in many environments like TCF and DCE. Even the management is very similar. It is just a matter of time before the SNMP/WBEM/CIM/DMTF/SNIA/IETF standards are enhanced to cover the virtual world. Will the fundamental abstractions change just because of the virtual entities? Not at all. It is the same-o, same-o in a different way.
From: cloud-c...@googlegroups.com [mailto:cloud-c...@googlegroups.com] On Behalf Of Ray DePena
Sent: Friday, January 15, 2010 8:55 PM
To: cloud-c...@googlegroups.com
Subject: Re: [ Cloud Computing ] Re: what role did virtual machine play in Cloud Computing?
Jan,
“A server doesn't GIVE a crap what its wallpaper looks like, and to the best of my knowledge, they don't play Solitaire when nobody is looking. That's just my assumption though ..”
(with desktop virtualization) you can also play Solitaire on both Linux and Windows at the same time :) (even better) without installing it on the client :)
What I'm thinking of, though, is all the middle-tier app
engines (10X the DB back end), all running Java or C# or the like,
and therefore all using gobs of compute cycles to do any IO. (Have to move
data out of GC space first.)
I'm assuming that IO inefficiency drowns out the VM/PCIe-based IO
inefficiency. Then Intel or AMD hardware eliminates the intrinsic
CPU / memory inefficiency.
All bets off, though, if (a) you include old hardware that requires
trap&emulate for CPU/memory; (b) bad stuff happens in the hypervisor
scheduler; or (c) there's not enough real memory to back all the VMs
without undue paging. Any of those could cause overhead to skyrocket.
Greg Pfister
http://perilsofparallel.blogspot.com/
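Greg's case (c), not enough real memory to back all the VMs, reduces to a simple overcommit check. A sketch with invented host and guest sizes:

```python
# Invented figures: one host, five guests with configured RAM sizes.
host_ram_gb = 64
vm_ram_gb = [8, 8, 16, 16, 24]

overcommit = sum(vm_ram_gb) / host_ram_gb
# A ratio above 1.0 means that if every guest touches all its memory
# at once, the hypervisor must page or balloon, and overhead can
# skyrocket exactly as described.
print(f"overcommit ratio: {overcommit:.3f}")  # 72/64 = 1.125
```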
On Jan 15, 4:26 pm, Ray DePena <ray.dep...@gmail.com> wrote:
> I have absolutely no data to back this up, but I would guess industry wide
> it's higher. More companies virtualizing their compute resources than
> virtualization professionals who know how to do it efficiently. Just a "gut
> feel".
>
> ...
>
> read more »
Within Mac OS X, there is a Mach core with a BSD kernel above it, and
the Mac OS X environment on top: a sophisticated structure. It is the
kind of thing that makes one OS work on top of another, similar to
virtualization technology.
VMware and Xen may go further and provide virtual machines, which
completely encapsulate the environment of a machine.
I haven't used Mac OS X so far, but I have read a book about it. There
is a good historical story in it.
Scott
Scott
On Jan 16, 12:50 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:
> I am not sure I agree with folks who think virtualization has shifted the
> paradigm in terms of skill set and experience necessary to do the job.
>
> Considering that virtualization is nothing but re-inventing of what has been
> done in the last 20 to 30 years....(virtual) desktop, (virtual)server,
> (virtual)network, (virtual)storage, (virtual) SMP, (virtual) HA, (virtual)
> FT, (virtual)security...why is it such a big deal for anyone whether it is a
> systems administrators, networks administrators, developers, architects,
> technical managers to transfer their non-virtualization based fundamental
> knowledge to virtualized environment. Hypervisor itself nothing but what
> existed as micro-kernels before (how many remember MACH from Carnagie Mellon
> in the late 80s and early 90s) & nothing but 100% OS technology (OK, may not
> be 100% but atleast 95%+, OS did not have virtual networks to deal with).
> Even live migration is not new, process migartion was tried in many
> environments like TCF and DCE etc. Even the management is very similar. Just
> a matter of time SNMP/WEBM/CIM/.DMTF/SNIA/IETF standards will be enhanced to
> cover the virtual world. Will the fundamental abstractions change just
> because of the virtual entities. Not at all. It is the sameO, sameO in a
> different way.
>
> _____
>
> On Fri, Jan 15, 2010 at 5:38 PM, Jan Klincewicz <jan.klincew...@gmail.com>
> wrote:
>
> I would like to know how everyone is thinking of "overhead" ?? Is it per VM
> vs. a physical machine, or per host (percentage utilized ?) Either way, it
> looks like a pretty good deal to me versus NOT virtualizing, and doing
> things like it were 1982 ....
>
> There is no rocket science to "tweaking" VMs ... you allocate the necessary
> resources and Bob's your Uncle. Yes, you can spread different workloads
> more efficiently across hosts, but that can be calculated (and done) for you
> automatically. CPU sharing is VERY efficient ... for some hypervisors (soon
> ALL) memory sharing is pretty efficient as well.
>
> Again, I am am still waiting for someone to propose alternatives (for
> bread-and-butter apps) .. I am not getting a lot of takers ....
>
> On Fri, Jan 15, 2010 at 8:19 PM, Peglar, Robert <Robert_Peg...@xiotech.com>
> wrote:
>
> I concur with Ray. One look at the headhunter lists indicates same.
>
> Sent via my PDA. Please forgive any typos, as thumbs are funny things.
>
> On Jan 15, 2010, at 4:31 PM, "Ray DePena" <ray.dep...@gmail.com> wrote:
>
> I have absolutely no data to back this up, but I would guess industry wide
> it's higher. More companies virtualizing their compute resources than
> virtualization professionals who know how to do it efficiently. Just a "gut
> feel".
>
> On Fri, Jan 15, 2010 at 2:49 PM, Greg Pfister <greg.pfis...@gmail.com> wrote:
>
> Hm, apparently my syntax got garbled. The SWAG was <10% per VM,
> excluding mainframes.
>
> Greg Pfister
> http://perilsofparallel.blogspot.com/
> ...
>
> read more »
(i.e. run JVM on a host OS, NOT JVM on a host VM on a host OS/
hypervisor)
Sassa
On Jan 17, 3:38 am, Greg Pfister <greg.pfis...@gmail.com> wrote:
> I have absolutely no data, either. That's the SWAG part.
>
> What I'm thinking of writing it, though, are all the middle-tier app
> engines (10X the DB back end), all running Java or C# or the like,
> therefore all using gobs of compute cycles to do any IO. (Have to move
> data out of GC space first.)
>
> I'm assuming that IO inefficiency drowns out the VM/PCIe-based IO
> inefficiency. Then Intel or AMD hardware eliminates the intrinsic
> CPU / memory inefficiency.
>
> All bets off, though, if (a) you include old hardware that requires
> trap&emulate for CPU/memory; (b) bad stuff happens in the hypervisor
> scheduler; or (c) there's not enough real memory to back all the VMs
> without undue paging. Any of those could cause overhead to skyrocket.
>
> Greg Pfister
> http://perilsofparallel.blogspot.com/
A JVM is primarily an interpreter of Java language byte codes. Yes, it does
create a sandbox to do this job, but it is a misnomer to call a JVM a VM in
the traditional VM sense. The reason a JVM is called a VM is that it
abstracts the underlying processor architecture (primarily the instruction
set) in the context of an interpreter/compiler, but not in the context of an OS.
A JVM does not perform any OS-specific functions: process management,
virtual memory management, file systems management, hardware abstraction and
I/O, networking, etc.
A traditional VM, on the other hand, does all of the above but does NOT play
the role of an interpreter like the JVM, although some VMs, like QEMU, do emulation.
So when we talk about the JVM and the VM, they are two entirely different
beasts. And when you run Java applications in a VM, it is perfectly OK and
needed/required to run a VM (the JVM) within a VM (the traditional VM).
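Rao's distinction is easy to demonstrate: a "VM" in the JVM sense is just an instruction-set abstraction. A toy stack machine (in Python, with opcodes invented purely for illustration) captures that sense while doing none of the process, memory, or I/O management a hypervisor-style VM provides:

```python
# A toy stack-machine "VM" in the JVM sense: it abstracts an
# instruction set, nothing more. Opcodes are invented for illustration.
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4
prog = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
print(run(prog))  # 20
```

A hypervisor-style VM, by contrast, would have to virtualize devices, memory, and privileged instructions; none of that appears here, which is the point.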
-----Original Message-----
From: cloud-c...@googlegroups.com
[mailto:cloud-c...@googlegroups.com] On Behalf Of Sassa
Sent: Monday, January 18, 2010 2:27 PM
To: Cloud Computing
Subject: [ Cloud Computing ] Re: what role did virtual machine play in Cloud
Computing?
On Sat, Jan 16, 2010 at 3:50 PM, Rao Dronamraju <rao.dro...@sbcglobal.net> wrote:
I am not sure I agree with folks who think virtualization has shifted the paradigm in terms of skill set and experience necessary to do the job.
Considering that virtualization is nothing but re-inventing of what has been done in the last 20 to 30 years....(virtual) desktop, (virtual)server, (virtual)network, (virtual)storage, (virtual) SMP, (virtual) HA, (virtual) FT, (virtual)security...why is it such a big deal for anyone whether it is a systems administrators, networks administrators, developers, architects, technical managers to transfer their non-virtualization based fundamental knowledge to virtualized environment. Hypervisor itself nothing but what existed as micro-kernels before (how many remember MACH from Carnagie Mellon in the late 80s and early 90s) & nothing but 100% OS technology (OK, may not be 100% but atleast 95%+, OS did not have virtual networks to deal with). Even live migration is not new, process migartion was tried in many environments like TCF and DCE etc. Even the management is very similar. Just a matter of time SNMP/WEBM/CIM/.DMTF/SNIA/IETF standards will be enhanced to cover the virtual world. Will the fundamental abstractions change just because of the virtual entities. Not at all. It is the sameO, sameO in a different way.
--
Cheers,
Jan
Not sure if there are architectures which run Java bytecode
directly. If so, JVMs on those architectures would be VMs in the
same sense as the VMware or Xen products: para-virtualization or pure
virtualization.
Scott
On Jan 18, 1:15 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:
> ...
>
> read more »
The only vendor I know of in this space is Azul Systems, and I don't
think virtualization as the term is discussed here really figures much
into what they do. In a way it's the opposite - aggregating many
resources to solve a single problem instead of splitting a resource up
to solve multiple problems. I certainly wouldn't compare them to VMware
or Xen.
I saw in some book that there are people even talking of aggregating
several physical units into one logical unit as a virtual machine. By
virtual, they mean something logical, not identical to the physical ones.
Some may reserve the phrase for slicing one physical machine into
many logical machines.
Scott
As Rao points out below, a JVM is a different level/kind of
virtualization than providing a virtual "physical" machine.
That said:
1) Running a virtual physical machine inside another virtual physical
machine is useful for debugging the hypervisor itself. An old friend
of mine who developed VM/370 did that regularly. He used to say that
you actually have to do three levels to get everything right; once you
have 3, you can do any number. (No, I never quite understood why, but
it had to do with paging & virtual memory.)
2) Many virtual physical machines run a JVM - think of consolidating a
bunch of middle-tier application systems all written in Java. (Or C#,
or whatever.) Why not just run multiple JVMs on one OS? Different apps
on different JVMs may require different OS tuning that the OS isn't
able to isolate to just its applications.
3) There have been efforts to do the opposite: Run the JVM directly on
the hardware, hoping to reap efficiency benefits by eliminating a
level of (sort of) simulation. I know there was one in IBM Research,
and maybe there are others. Not sure what Azul does, for example;
probably they run JVM on the iron.
Greg Pfister
http://perilsofparallel.blogspot.com/
http://randomgorp.blogspot.com/
> ...
>
> read more »
This is a good real life use case on the role of virtualization in cloud computing.
http://www.computing.co.uk/computing/analysis/2256208/cern-takes-virtual-server-turn
Cheers
Jeanne,
Good write up.
You and I are approaching this issue from two diametrically opposite ends. I am primarily a data center guy, and your expertise seems to be the desktop.
I will answer your questions but let me explain why I think it is not a paradigm shift when we talk about the skills and competencies necessary to do a job in the virtual world.
If you look at a data center and its components (applications, middleware, OS, HW, networks, storage, security, management, etc.), what has really changed RADICALLY to call it a paradigm shift?....It is the same components in a CONTAINER called a Virtual Machine. Yes, the container is fluid and moves around a bit, but so what? Has the administration of AD in a VM changed?....Have DNS, DHCP, .NET, Java, Linux, Windows, the entire networks, the storage components, security and management changed FUNDAMENTALLY?....NO!!! I think the most affected components are security and management. Even here, the firewall works the same way, and so does the IDS. Management of a VM is no different than a physical machine; have they invented new protocols to manage virtual machines, or are they being managed with the same SNMP, WBEM, etc.?...A Virtual Machine is a CONTAINER! But what is CONTAINED, which makes up 90%+ of the data center components, is the SAME. So why would administering, developing, deploying and operating be so different?....
For instance, take the job of a network administrator in a data center. Have the network topologies changed radically?....NO! Have the network interfaces, switches, protocols, routers, links, software/algorithms, or the technology itself (layer 2, layer 3, etc.) changed because of virtualization?...NO! If you have configured a HW network interface, is it radically different from configuring a software network interface in the VM/hypervisor?....Has the (V)VLAN concept changed in virtual networks?...NO. Same with any software entity that is basically a representation of a HW entity, whether it is a bridge, switch, router, etc. So how has the network admin's job changed?...In fact, the virtual networks are a subset of physical networks with far fewer capabilities, so the job is even easier. OK, it is a little harder because now you have to deal with the physical networks plus the virtual networks. But it is the application of the same principles in software instead of HW.
Same with systems admins: they have to do the backups, configuration, administration, etc. the same way for a virtual machine as they have done for physical machines. User account administration, security administration, NFS, SAMBA, DNS, DHCP, etc. are all the same. Yes, their job got some more work added to it; now they have to administer a bunch of hypervisors in addition to physical machines and VMs. So I do not see the job functions themselves changing radically enough to call it a paradigm shift. Hypervisor administration and migration may be one major change/addition, but not a paradigm shift!
“1) Systems Management Vendors have had to re-think and retool how they deal with virtual environments for the Data Center. Everything from Monitoring, Capacity Planning, Discovery, Change, and Configuration Management. One of the biggest early inhibitors to implementing BSM back between 2002-2007 was virtual sprawl and inability of systems to detect VMs if they were offline. Thus efficiency in how patching is done, discovery is tracked etc had to be built in with tools like ESX, Run Book Automation, CMDBs, Discovery tools, etc. Most of these were built out to fit the requirements of Machine Virtualization implemented in the datacenter.”
Sure, you will have to make some changes/modifications when you have created an entire infrastructure in software. But my point is: have capacity planning, monitoring, discovery, change management, etc. changed in a FUNDAMENTAL way in the virtual world?...No! In systems management, if you discover a physical device by its IP address, you discover a VM also by its IP address. If you keep track of the availability of a physical device with a periodic ping, you also ping a VM periodically. Similarly, capacity planning algorithms have not changed in a FUNDAMENTAL way. Yes, they have to take into consideration virtual sprawl, which is a numbers and scalability issue. So yes, the paradigm shift is in the sheer numbers you deal with and the associated scalability of the solution. But monitoring, capacity planning, discovery, configuration management, etc. have not changed FUNDAMENTALLY. For change management, do folks continue to use a CMDB?....Yes…you will have a larger number of CIs in the CMDB, more relationships, etc., but the very architecture of the CMDB has not been changed radically for the virtual entities.
“Question - What happens for Type 1 Hypervisors on a Desktop? What about traditional tools for Workstation on a Desktop? What about virtual applications that don't show up in the registry for traditional discovery tools to pull?”
I am not sure what you mean by “What happens for Type 1 Hypervisors on a Desktop?”…Workstation on a Desktop?...are you talking about VMware Workstation?....Virtual applications?...you are talking about desktop virtualization, is it not?....again, I do not take desktop virtualization seriously :)…just kidding!
“2) DMTF and other standards all had to shift and be enhanced to address adding requirements for virtual formats. The current OVF standard just added in the Fall was driven by the President Winston Bumpus (also an architect in the office of the CTO of VMware) - if there were no paradigm shift, why would the standards need to be updated? Additional standards will also need to be taken into consideration - such as looking at the User Data file for drift versus application (because most virtual applications are read only - user data is where plugins and other significant changes could occur for audit - like viral infections, wall paper, other components mentioned).”
Yes, I have met Winston before. He is a very nice guy. Before he moved to VMware he was at Dell, and he has been working with DMTF and OASIS standards for a long time. Since you brought up the topic of systems management and standards: I was an architect on a highly successful systems management product, HP Insight Manager, and have worked with the DMTF standards and the folks there. Just because the OVF standard is being worked on, it does not mean the paradigm has shifted. Anytime new things evolve, you have to come up with new standards. OVF is just a representation of VMs for portability, packaging and distribution….just a standardization of the representation of a VM.
“Desktops are moving into the data center which requires different skills, processes, and updates”
I do not know what you mean by desktops moving into data centers?....Are all the employees of a company who use desktops and laptops, say for accessing their email and ERP applications, now going to be relocated inside the data center perimeter, and so there is a paradigm shift?....
“ESX Expertise for their Support Desk for Desktops - Today when a call comes in the IT staff can ask a user for basic information like machine name or have it appear in their systems management directory to identify, remote into the machine and trouble shoot an application issue. For Virtual Desktops - an additional step is needed - identifying what server the desktop is being manifested from (could be one of many from VMotion, or other broker solutions), determining is it an application issue (Virtual or Physical), Is it a connection issue?, Is it an ESX or other Virtual Server Host issue, Storage issue (capacity, throughput for access), etc. If it is a VM issue - they will need someone that is authorized to troubleshoot, create, and work with the virtual machine.”
As you said, “an additional step is needed” is NOT a paradigm shift!
Enhanced Discovery and Delivery - Additional changes will need to be implemented to enhanced how Systems Management tools both deliver and audit applications to virtual environments. For persistent implementation - for example - when a typical endpoint comes up the agent runs and tells it to pull discovery and/or application updates that are set at the primary distribution point. What happens when all the VMs spin up at a given time and go to check for an update and download? What about tools that do not provide the ability to vary their update functionality because they assume a distributed environment? Or those that look for network detection and throttle accordingly that detect full capacity and open up the connection full throttle? Now multiply that by 40 or more? Will offline patching be the cure? Maybe - but what about those that need to be sent via ECO during blackout hours? These and many more questions have cropped up with the early implementations and cause delay in large scale production deployments.
Again, you said it: “Additional changes will be needed….” Additional changes are by no means a paradigm shift!
Now you are bringing up scalability issues. Yes, with virtualization and VM sprawl, as I mentioned before, there will be some scalability issues, but none of paradigm-shift magnitude. How many VMs do you expect a desktop/laptop user to have?...100s?....Probably a couple each…and how many of them pull updates simultaneously, all at the same time? Most times, a few at a time. Sometimes, like email being checked at 8:00am by all employees at the same time, updates can be pulled simultaneously. So this is a scalability problem, and there are many different ways scalability is already addressed, unless it is of internet/web scale, in which case it is a paradigm shift. But most SMBs and enterprises do not have internet/web-scale systems unless they are hybrid (consumer/enterprise) environments like, say, amazon.com, not employee/enterprise environments.
Even scalability has been blown out of proportion unless you are dealing with an internet/web-scale application. If you have a data center with 1000 machines and have consolidated 10:1, you end up with 100 HW systems and 1000 VMs, so instead of managing 1000 HW entities you are now managing 1100 entities (1000 VMs + 100 HW). Your management solution was managing 1000 systems before anyway, so how big a paradigm shift/scaling issue is it to manage 1100 systems?....
So in summary, I do not think it is a paradigm shift. I think it is just a re-invention of the HW wheel in SW. So if you have worked with the wheel before, it should be very similar in the virtual world too.
“But we need something to help us manage and understand what is going on in that virtual environment....”
There aren’t many management solutions that scale to 80,000 entities with a single instance.
OpenNMS claims that an installation somewhere in Europe (Switzerland?) manages 50,000 devices.
But OpenNMS folks are physical world management folks.
Can OpenNebula manage 80,000 VMs? I think this question is wide OPEN! :-) Is it not?
Actually, managing a VM is a lot easier than managing an OS/HW. The number
of operations/objects supported by libvirt is quite limited at this time.
That's a problem of the OS, just like two guest OSes might want
different hypervisor tuning; not a conceptual problem.
> 3) There have been efforts to do the opposite: Run the JVM directly on
> the hardware, hoping to reap efficiency benefits by eliminating a
> level of (sort of) simulation. I know there was one in IBM Research,
> and maybe there are others. Not sure what Azure does, for example;
> probably they run JVM on the iron.
Yes, there's JRockit VE, too, running on bare metal.
Sassa
> Greg Pfister
> http://perilsofparallel.blogspot.com/
> http://randomgorp.blogspot.com/
> Sassa
I don't care what the "traditional" VM does, and what it asks the BIOS
to do. I don't care if my program is being run on a physical CPU
shared with other VMs or being interpreted by a VM. All of that
doesn't make the VM less of a VM.
Besides, the JVMs don't interpret much these days. They will often
compile CPU-specific code that calls OS-specific routines to interact
with the outside world.
Let's not start a flame war about the definition of the cloud, but if
you run the code in the cloud, do you care if it uses a grid
underneath? ;-)
I think there are more similarities than discrepancies, if you look
from the app designer, developer, or deployer point of view.
Sassa
On Jan 18, 9:15 pm, "Rao Dronamraju" <rao.dronamr...@sbcglobal.net>
wrote:
> I think we are mixing up JVM and (OS/Hypervisor based) VMs as if they are
> the same just because they are called virtual machines.
>
> A JVM is primarily an interpreter of Java language/byte codes. Yes it does
> create a sandbox to do this job, but it is a misnomer to call a JVM a VM in
> the traditional VM sense. The reason a JVM is called a VM is because it
> abstracts the underlying processor architecture - primarily instruction set
> - in the context of an interpreter/compiler but not in the context of an OS.
> A JVM does not do any OS-specific functions - primarily process management,
> virtual memory management, file systems management, hardware abstraction and
> I/O, networking, etc.
>
> Whereas a traditional VM does all the above but does NOT play the role of an
> interpreter like the JVM, although some VMs like QEMU do emulation.
>
> So when we are talking about JVM and VM they are two different beasts
> entirely. So when you run Java applications in a VM, it is perfectly OK and
> needed/required to run a VM (JVM) within a VM (traditional VM).
Oh yes, it does:
process management: java.lang.Thread - create new processes, interrupt
processes, assign priorities; suspend and stop are deemed unsafe (as is
any old "kill"), but they are still implemented
virtual memory management: don't care whether it is virtual; "new <class
name>" allocates more memory; in fact, it is so virtualized that you
shouldn't worry about managing memory at all. Garbage collection is a
feature that reclaims memory that can be proven to be no longer
used, on machines with a finite amount of memory. The GC will even
defragment memory for you. Does malloc or the OS do that?
file systems management: java.io.File, java.io.* - create, list,
permissions, read, write, delete files in a virtualized filesystem
with hierarchical namespace
hardware abstraction: what hardware do you access directly from any
given C program? If you do, your app is extremely platform-specific. I
would say a program normally interacts with the hardware via a driver
API. You can have a hardware- and OS-specific JNI plug-in installed for
the hardware you need to access from a Java program, too.
networking: yes, it is abstracted, too. The network interfaces can
have no direct relationship to the hardware interfaces - they don't
have to, even if they normally do.
I/O: what is it? :-)
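A minimal sketch tying these points together, using only standard java.lang and java.io APIs; the class name and printed strings are illustrative, not from the thread:

```java
import java.io.File;
import java.io.IOException;

// Demonstrates the OS-like services the JVM exposes, as argued above:
// thread ("process") management, memory allocation left to the GC, and
// file system access through java.io.File.
public class JvmAbstractions {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Process management: spawn, prioritize, and join a thread
        Thread worker = new Thread(() -> System.out.println("worker running"));
        worker.setPriority(Thread.MIN_PRIORITY); // assign a priority, as noted above
        worker.start();
        worker.join();

        // Memory management: allocate with `new`; reclamation is the GC's job
        byte[] buffer = new byte[1024 * 1024]; // no free()/delete needed
        System.out.println("allocated " + buffer.length + " bytes");

        // File system management: create, inspect, delete via java.io.File
        File f = File.createTempFile("jvm-demo", ".txt");
        System.out.println("exists: " + f.exists());   // true
        System.out.println("deleted: " + f.delete());  // true
    }
}
```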
The fact that JVMs are extremely ubiquitous is further proof of just how
well the JVM does the job of abstracting and virtualizing.
Sassa