Researchers uncover data vulnerability from running multiple virtual machines (VMs)


Ken North

Nov 11, 2012, 8:19:44 PM
to cloud-c...@googlegroups.com
Researchers from the University of North Carolina, University of
Wisconsin and RSA Laboratories have uncovered a vulnerability in
computers that run two virtual machines with a shared processor cache.

They were able to flood the cache from one VM to expose cryptographic
keys in use by the other VM.

"Cross-VM Side Channels and Their Use to Extract Private Keys"
http://bit.ly/UCgHM8


Ken North
________________
www.kncomputing.com
@knorth2

 

Greg Pfister

Nov 12, 2012, 10:52:56 PM
to cloud-c...@googlegroups.com
I don't understand how the techniques discussed in this paper find the actual keys being used.

I understand how, by flooding the cache and then discovering which cache lines were evicted by the victim, they can, with clever statistics and noise reduction, figure out the paths the victim is taking through its code.

Given that you know the library being used, are paths through the code sufficient to figure out what the key is?

It also seems like the victim has to be spending a whole lot of time encrypting or decrypting for this to work - enough time for sufficient samples to be taken to find out the code paths.

Greg Pfister

Jim Oreilly

Nov 13, 2012, 11:00:59 AM
to cloud-c...@googlegroups.com
Greg,
You raise some valid points; this attack is a bit artificial.
However, there are a couple of places heavy with encryption that are potentially vulnerable. Any cloud storage portal should be encrypting everything that goes to the cloud, which meets the attack's requirement for heavy use of encryption. Similarly, any long-haul portal would encrypt all the transmitted data.

Both of these are vulnerable to this type of attack, though usually multiple keys are in use. There are worse vulnerabilities, including poor approaches to encrypting stored data.


Jim OReilly



Sassa

Nov 13, 2012, 5:45:10 PM
to cloud-c...@googlegroups.com
Have I missed how cache invalidation can affect VMs that don't share any physical memory?

I.e., is this an attack against paravirtualization (where it is not much different from earlier known cache-flooding attacks on the root account), or will it also work with hardware-assisted virtualization?


Sassa

istimsak abdulbasir

Nov 13, 2012, 9:07:31 PM
to cloud-c...@googlegroups.com
This will be an interesting read. In a logical sense, I can see this happening: both VMs share one CPU and its cache memory. Read the cache and you see what both VMs are up to, or have done.

Vic Winkler

Nov 14, 2012, 7:19:13 AM
to cloud-c...@googlegroups.com
The most important "revelation" in that paper is in Section 7, where the authors re-invented "gravity" by reminding us NOT to co-locate functions that really should be isolated from each other ("Avoiding co-residency"). On this point, try convincing any bank that actually handles money, or any entity that uses classified data, that they should use VMs to effect ***that*** sort of isolation. (Not likely, eh?)

One test for separating the children from the adults in data centers is seeing who separates their traffic according to control plane data versus public network data all the way from the edge of the infrastructure down to individual servers. Surely virtualization has an impact on that, but the prudent will maintain isolation and separation down to physical units and devices based on function.

I love these papers; they give insight into evolving attacks and exploit vectors. But if the authors and readers do not understand best practices, they will not have a realistic view of the limits of such an attack. I'm just reminding everyone that cloud security is largely applied security.

-- Vic Winkler

p.s. I covered these topics in my book, "Securing the Cloud" (http://amzn.to/gRY1Bp). Much of it comes from best practices, and the material stands after many discussions with other groups that build and manage clouds. I've since moved on from cloud, and am at https://Covata.com


Todd Bezenek

Nov 14, 2012, 7:49:27 PM
to cloud-c...@googlegroups.com
This relies on knowing the code being executed. Randomly modifying the layout of the code in the cipher library would eliminate this method of attack.

One of my PhD advisors called this a "straw man" problem.

-Todd

Ken North

Nov 14, 2012, 4:53:09 PM
to cloud-c...@googlegroups.com
If VM1 is doing encryption or decryption and VM2 floods the shared processor cache with its own data, can't it read the cache, eliminate the data it inserted, and then, by process of elimination, know what data VM1 is operating on?

 



Greg Pfister

Nov 15, 2012, 6:07:32 PM
to cloud-c...@googlegroups.com
Istimsak,

Just making sure the basics are known here:

Just because two VMs use the same cache does not mean that one VM (A) can read data from the other VM (B). This is true even if B's data is in the cache while A is running. Part of the hardware-enforced isolation between VMs ensures that no VM can issue an address that will fetch another VM's data. That's just as true for VMs as it is for separate processes.

The issue in this paper is whether one can indirectly infer the data another VM is using by tracking, through the speed of cache refills, the patterns of addresses the other VM touches.

Greg Pfister

Sassa

Nov 15, 2012, 6:08:39 PM
to cloud-c...@googlegroups.com
You can't read the cache directly; you can only read memory addresses. And since memory addresses are translated into different physical addresses, your memory accesses are never going to hit the same cache line as the victim's.

However, what they do is a fairly well known trick. What makes the paper original is that the setting is in a VM: new unknowns, more noise, and it is harder to force the CPU to do what you want.

The process is as follows:

* Determine how the cryptographic library is _likely_ to be laid out in memory. This is important.

* Flood the instruction cache with specially laid out code and measure how long that took.

* If you can trick the VM manager into running your code on the same CPU as the victim, you can measure which cache lines had to be evicted and which hadn't.

This is where I was wrong: I thought I would need a reference to the actual physical address, which is not possible in a VM. But remember that, because of cache associativity, the cache line number can be predicted from the physical address, and _many_ physical addresses map to the same cache line. So, knowing which cache lines refill more slowly, you know which lines were evicted by the victim VM.

By knowing which lines of code the victim executes, you can predict which branches the code took in the cryptographic library. If the branching in the algorithm depends on key material, you can work out the secret key by monitoring which branches the algorithm took.

That's the sketch. There are lots of gotchas to look out for, which are outlined in the paper, and you can never know the key for sure, but this attack lets you narrow the search for the key considerably.
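
A minimal sketch of the prime-and-probe timing loop, in C. Everything concrete here is an assumption for illustration: 64 sets, 8 ways, 64-byte lines, a 200-cycle hit/miss threshold, and a data-cache probe buffer (the paper primes the *instruction* cache by executing code, and runs against a real hypervisor). This is not the paper's code.

    /* prime_probe.c -- toy prime+probe over an L1-sized buffer.
       Build: gcc -O1 prime_probe.c (x86-64, GCC/Clang). */
    #include <stdint.h>
    #include <stdio.h>
    #include <sched.h>
    #include <x86intrin.h>          /* __rdtscp */

    #define SETS   64               /* assumed number of cache sets  */
    #define WAYS   8                /* assumed associativity         */
    #define LINE   64               /* assumed line size in bytes    */
    #define STRIDE (SETS * LINE)

    static uint8_t probe_buf[WAYS * STRIDE];

    /* Touch every way of one cache set; return the elapsed cycles. */
    static uint64_t probe_set(int set)
    {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        for (int way = 0; way < WAYS; way++)
            (void)*(volatile uint8_t *)&probe_buf[way * STRIDE + set * LINE];
        return __rdtscp(&aux) - t0;
    }

    int main(void)
    {
        for (int set = 0; set < SETS; set++)
            probe_set(set);               /* PRIME: fill every set   */

        sched_yield();                    /* let the victim run      */

        for (int set = 0; set < SETS; set++) {
            uint64_t dt = probe_set(set); /* PROBE: time the refills */
            if (dt > 200)                 /* assumed miss threshold  */
                printf("set %2d likely evicted by the victim\n", set);
        }
        return 0;
    }

The slow sets are the ones the victim touched while the prober was descheduled; repeating this many thousands of times, and applying the noise reduction the paper describes, is what turns raw timings into code-path traces.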


I am not expert enough to judge how real the threat is outside the lab.


Sassa

Greg Pfister

Nov 15, 2012, 6:10:33 PM
to cloud-c...@googlegroups.com
No, it cannot. One VM cannot directly access any data owned by another VM. There is hardware protection that prohibits this: no VM can issue an address that resolves to an address in another VM's memory space.

Also see my reply to someone else in this thread who is effectively asking the same question.

Greg Pfister

Greg Pfister

Nov 15, 2012, 6:11:39 PM
to cloud-c...@googlegroups.com
I do believe I agree with that advisor.

Greg Pfister

Abhishek Pamecha

Nov 16, 2012, 9:27:44 AM
to cloud-c...@googlegroups.com
>>>* determine how the cryptographic library is _likely_ to be laid out in memory. This is important.


This means the attacker has to know a lot beforehand about the build of the executable: the compiler that was used, the target architecture of the other VM, the order in which the linker linked the crypto library.

And that is just to come close to being able to determine the memory range where the library could be found at runtime.

Or, even without knowing all that, is it possible to predict where this library is laid out?

To me it seems the attacker already has access to the victim executable and enough opportunity to analyze it before launching the attack. Is that a right assumption here?

Thanks
Abhishek


i Sent from my iPad with iMstakes

Gilad Parann-Nissany

Nov 16, 2012, 9:53:17 AM
to cloud-c...@googlegroups.com
Re the question:

> The attacker already has access to the victim executable and enough opportunity to analyze it before launching the attack. Is that a right assumption here?

Yes, it is. It is a standard assumption in security analysis (Kerckhoffs's principle): the attacker knows your code exactly, and you still need to be secure.

Regards
Gilad
__________________
Gilad Parann-Nissany
http://www.porticor.com/



Phil Abraham

Nov 16, 2012, 10:50:12 AM
to cloud-c...@googlegroups.com
Yes. Security analysis, or prepping the target, will give you the best results.
--
Phil Abraham
Cloud Face LLC
Innovative Cloud Solutions


Zahid Ahmed

Nov 16, 2012, 11:31:56 AM
to cloud-c...@googlegroups.com
What is the impact of this on public cloud service providers, if any?

Abhishek Pamecha

Nov 16, 2012, 12:23:27 PM
to cloud-c...@googlegroups.com
In that case too, as I understand it, cache line latency measurements will only reveal whether a specific path in the crypto library is in use or not. So it may reveal the code being run at that moment, but how does that reveal the key?

That was Greg's initial question too, IMO. But it seems somewhere down the discussion, it was assumed to be possible. I still can't figure out how.

thanks
abhishek

Greg Pfister

Nov 16, 2012, 6:12:12 PM
to cloud-c...@googlegroups.com
On Friday, November 16, 2012 10:32:42 AM UTC-7, heptagon wrote:

> In that case too, as I understand it, cache line latency measurements will only reveal whether a specific path in the crypto library is in use or not. So it may reveal the code being run at that moment, but how does that reveal the key?
>
> That was Greg's initial question too, IMO. But it seems somewhere down the discussion, it was assumed to be possible.

Yep!

> I still can't figure out how.

Me neither.

Todd Bezenek

Nov 16, 2012, 6:17:04 PM
to cloud-c...@googlegroups.com
Abhishek,

It is all about communication channels. Remember, you only need to be able to convey a single bit of information to communicate. Once you can do that, you can take a long time to get the second bit, and then the third.

If you can set the system up in a known state which you can get back to, you can move forward and get one bit. You can then go back to the original state and work on the second bit.

That's all you need.
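
A toy illustration of that one-bit channel, under strong simplifying assumptions: sender and receiver share one address space and one agreed-upon cache line, and a fixed 200-cycle threshold separates hits from misses. This is a flush-and-reload-style sketch, far simpler than the paper's cross-VM setting:

    #include <stdint.h>
    #include <x86intrin.h>           /* _mm_clflush, __rdtscp */

    static uint8_t channel[64] __attribute__((aligned(64)));

    /* Sender: encode one bit in the cache state of the shared line. */
    void send_bit(int bit)
    {
        if (bit)
            (void)*(volatile uint8_t *)channel; /* 1: load the line  */
        else
            _mm_clflush(channel);               /* 0: flush it out   */
    }

    /* Receiver: a fast reload means the line was cached, i.e. a 1. */
    int recv_bit(void)
    {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*(volatile uint8_t *)channel;
        return (__rdtscp(&aux) - t0) < 200;     /* assumed threshold */
    }

Reset the state, send a bit, read a bit, repeat: slow, but one reliable bit at a time is all a channel needs.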

The mechanism in the research paper is a "straw man" because it will only happen if people are not careful at all. However, from what I have seen, data breaches happen because of two things:

1. People are not careful.

2. The criminals are very smart.

This paper is talking about (2). You can get around (1) easily. That's why the equipment the military buys costs more: the people who design it make sure (1) does not happen.

If you want to know exactly how this breach can happen, send me email directly.

-Todd

p.s. There is so much noise here (on this channel) it is hard to find the real information.

Sassa

Nov 17, 2012, 1:44:08 PM
to cloud-c...@googlegroups.com
Oh, that's not as difficult as you might think.

The layout is likely to be the same for most linkers; e.g. the library, especially a dynamically linked one, may start on a page boundary. I'm not sure to what extent this can be countered by the address randomization the kernel does.

They specifically talk about one particular, very popular library (libgcrypt, as used by GnuPG) that a number of popular packages rely on.
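
A toy calculation of why page-aligned loading helps the attacker (assumed geometry: 64-byte lines, 64 sets, 4 KiB pages; the function offsets are made up, not real libgcrypt symbols). Pages are 4 KiB-aligned, so the low 12 bits of an instruction's address survive both loading and address randomization, and those bits contain the cache-set index:

    #include <stdio.h>

    #define LINE 64    /* assumed line size      */
    #define SETS 64    /* assumed number of sets */

    /* Bits [6..11] of the file offset select the cache set; 4 KiB-aligned
       mapping leaves them unchanged, so a fixed offset -> a fixed set. */
    static int cache_set(unsigned long offset_in_library)
    {
        return (int)((offset_in_library / LINE) % SETS);
    }

    int main(void)
    {
        unsigned long square_fn = 0x1a2c0;  /* hypothetical offset */
        unsigned long mul_fn    = 0x1b480;  /* hypothetical offset */
        printf("square path   -> set %d\n", cache_set(square_fn));
        printf("multiply path -> set %d\n", cache_set(mul_fn));
        return 0;
    }

So once the attacker knows which file offsets hold the interesting code, the corresponding cache sets are predictable on every victim, randomization or not.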


Sassa

Sassa

Nov 17, 2012, 1:52:28 PM
to cloud-c...@googlegroups.com


On Friday, 16 November 2012 23:12:12 UTC, Greg Pfister wrote, replying to Abhishek's question of how the observed code paths reveal the key:

> Me neither.

Well, long modular arithmetic means that branching in the exponentiation loop depends on the bits of the exponent, which is the value of the key. ElGamal (the paper's actual target) has exactly this revealing structure.
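
A textbook square-and-multiply sketch showing that key-dependent branch (libgcrypt's real implementation differs; this is just the shape of the leak):

    #include <stdint.h>

    /* Left-to-right square-and-multiply. The if() is the leak: which
       code path runs depends on the secret exponent bit. */
    uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod)
    {
        uint64_t result = 1;
        base %= mod;
        for (int bit = 63; bit >= 0; bit--) {
            /* SQUARE: runs for every exponent bit */
            result = (uint64_t)(((unsigned __int128)result * result) % mod);
            if ((exp >> bit) & 1)
                /* MULTIPLY: runs only for 1-bits */
                result = (uint64_t)(((unsigned __int128)result * base) % mod);
        }
        return result;
    }

Every iteration squares, but only the 1-bits multiply, so the square/multiply sequence seen through the instruction cache spells out the exponent bit by bit. In ElGamal decryption, that exponent is the private key.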


Sassa

Phil Abraham

Nov 17, 2012, 9:09:45 AM
to cloud-c...@googlegroups.com
I just found a vulnerability in Facebook when the user is also using multiple screens.

Todd Bezenek

Nov 20, 2012, 10:54:55 AM
to cloud-c...@googlegroups.com
Solutions to the Problem

Note: There was some private discussion about this, which I will keep private. I wrote up this reply to a very good question sent to me by someone who read my earlier post. I'm sharing it because it is a solution, not something which adds to understanding of the vulnerability.

>>Do you see a way to get below the hypervisor with this or another attack?

Any code running on the system is going to have this vulnerability unless one of several possible things is done:

1. The hypervisor has a separate I-cache or other mechanism to totally isolate the I-cache.

2. A separate compute engine is used for the hypervisor.

Keep in mind there might be other communication channels, but they become weaker and weaker. Here are a few to give you an idea, from stronger to weaker:

o The method in the paper.
o Using branch predictor state for something similar.
o Using information about power consumed by circuits.
o Using RF radiation information (you need an RF detector you can sample for this; there are more such detectors around than people realize).

Another (much more secure) method is to never decode anything in the main processor, but to farm the work out to special decoder logic.  This may be the number one reason for building special decoder logic!  (Although it will likely be faster also.)

Examples of decoder logic can be found in IBM's recent processor announcement, which had (I believe) 13 different decoders/encoders. Others include the hardware "trust zone" suggested by Ruby Lee (Princeton) and recent announcements from ARM and Intel.

-Todd