Opal Storage Specification

Alex Dubois

Feb 23, 2013, 3:24:57 AM
to qubes...@googlegroups.com
Hi,

I am daydreaming while looking at the Lenovo catalog for the T430.

In the storage section there is an option to select an OPAL drive. Wikipedia describes it as a spec from the Trusted Computing Group...
http://en.wikipedia.org/wiki/Opal_Storage_Specification
http://en.wikipedia.org/wiki/Hardware-based_full_disk_encryption
http://www.trustedcomputinggroup.org/solutions/data_protection

If I am not worried about my laptop being stolen but only about remote attacks, am I right to assume it does not bring any value to me?

Thanks,
Alx


Radoslaw Szkodzinski

Feb 23, 2013, 5:44:43 AM
to qubes...@googlegroups.com
It doesn't add much value even if you were worried about theft, as Qubes
by default enables full disk encryption.
So as long as you use AEM and the TPM for the disk keys, you should be
safe against the same set of attacks that hardware disk encryption
prevents.

Well, maybe (big maybe) you'd be a bit more vulnerable to cold boot
attacks, until someone adds support for fully in-CPU AES.

Best wishes with the new laptop,
--
Radosław Szkodziński

cprise

Feb 23, 2013, 11:52:53 AM
to qubes...@googlegroups.com
That is worrying. I wasn't aware there was a compromise with the
implementation of AES in recent CPUs.

Radoslaw Szkodzinski

Feb 23, 2013, 1:29:01 PM
to qubes...@googlegroups.com
No, there isn't - the issue is the handling of keys.
The in-cache implementation is better in this regard, as the keys
don't have to be kept in RAM, preventing cold boot attacks on them.

Actually AES-NI is stronger than a normal software implementation.
However, it's not fully in-CPU, as the key is still handled normally.
The best result would be keeping the key in the CPU cache at all times
after downloading it from the TPM.

--
Radosław Szkodziński

cprise

Feb 23, 2013, 3:21:51 PM
to qubes...@googlegroups.com
OK, I'm still worried. I think there should be no disk keys sitting in
RAM. I remember reading about a Linux disk encryption module that worked
in-cache to avoid key exposure, but thought that it relied on AES-NI.

Some of us have drives that support encryption, but won't use it because
we have multiboot with another OS (usually Windows) and the Linux
encryption offers some protection.

Does key escrow only work with hardware encryption? And what are the
disk encryption options available?

Radoslaw Szkodzinski

Feb 23, 2013, 4:14:10 PM
to qubes...@googlegroups.com
On Sat, Feb 23, 2013 at 9:21 PM, cprise <cpr...@gmail.com> wrote:
> OK, I'm still worried. I think there should be no disk keys sitting in
> RAM.

Sure, but if you're considering cold boot attacks and other RAM
attacks, Anti-Evil Maid is more for you. It will wipe the memory.
Then the problem remains the RAM swap attack, which would likely allow
the attacker to copy any live VM's address space anyway, including any
loaded or cached files.

Of course you could mark some files as "no-cache", so as to limit
their lifetime in RAM - but Qubes (or rather Linux) doesn't have that
feature.
Great idea for an improvement though, could be done e.g. using
extended attributes and a relatively simple kernel patch.
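
In the meantime, a rough userspace approximation of the idea (just a
sketch, not anything Qubes ships): read the data and immediately ask the
kernel to drop its page-cache copies. It is advisory only and scrubs
nothing, but it limits how long file contents linger in the cache.

/* Read a file, then advise the kernel to evict its pages from the page
 * cache.  Illustrative only; the "processing" step is left out. */
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int read_and_drop(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    char buf[4096];
    while (read(fd, buf, sizeof(buf)) > 0)
        ;                               /* ... process the data here ... */

    /* Ask the kernel to drop this file's (clean) pages from the page cache. */
    int err = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    if (err)
        fprintf(stderr, "posix_fadvise: %s\n", strerror(err));

    close(fd);
    return 0;
}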

> I remember reading about a Linux disk encryption module that worked
> in-cache to avoid key exposure, but thought that it relied on AES-NI.

It does. It can be done without AES-NI, but would be expensive in
terms of cache.

> Some of us have drives that support encryption, but won't use it because
> we have multiboot with another OS (usually Windows) and the Linux
> encryption offers some protection.
>
> Does key escrow only work with hardware encryption? And what are the
> disk encryption options available?

The problem is that TPMs are too slow for constant key upload and
there are hardware attacks to sniff their LPC bus or even completely
circumvent them.
Key escrow is an approach, not a technology - it means that only a party
authorized by some other means can access the key.

The options are RealCrypt (aka TrueCrypt) and dm-crypt. Qubes
currently supports dm-crypt.
The others, for file-based encryption, are eCryptfs and EncFS.

--
Radosław Szkodziński

Joanna Rutkowska

Feb 24, 2013, 6:04:35 AM
to qubes...@googlegroups.com, Alex Dubois
Yeah, I believe this is a correct view.

Moreover, even if you were concerned about physical attacks (which I
believe people should be, at least about Evil Maid attacks and perhaps
Cold Boot attacks), I wouldn't count on hardware disk encryption. For me
it's obvious to assume that OEMs such as Lenovo, Toshiba, etc., must have
built backdoors into their hardware encryption to allow law enforcement
to extract the keys. I would be really surprised, in fact, if they
didn't... (One interesting question, though, is: the law enforcement of
which countries? But that's a non-technical problem.)

One might argue that the LUKS disk encryption we use in Qubes (and which
is deployed by default by the installer, BTW) also uses hardware
instructions (AES-NI). It's conceivable that the keys passed to those AES
instructions get stored somewhere in flash memory on the chipset, ideally
in the MCH, which is part of the processor package these days (does the
MCH have flash memory yet? Or is it still only in the ICH?).

However, even ignoring some technical difficulties (what if I execute a
million fake AES instructions with fake keys, just to overflow the secret
key storage?), there is still a good argument for trusting Intel in this
respect: we really must trust Intel *anyway*, as they can always come up
with lots of various backdoors [1], and we're essentially defenseless
against them. So the only reasonable choice is to... trust Intel. But it's
nice if we can limit the trust to Intel only, and not extend it to any
other OEM, right? (E.g. everyone knows and assumes that BIOSes have
backdoor passwords, right? Let's not rely on them!)

[1]
http://theinvisiblethings.blogspot.com/2009/06/more-thoughts-on-cpu-backdoors.html

joanna.


Radoslaw Szkodzinski

Feb 24, 2013, 6:13:59 AM
to qubes...@googlegroups.com
On Sat, Feb 23, 2013 at 10:14 PM, Radoslaw Szkodzinski
<astra...@gmail.com> wrote:
> On Sat, Feb 23, 2013 at 9:21 PM, cprise <cpr...@gmail.com> wrote:
>> OK, I'm still worried. I think there should be no disk keys sitting in
>> RAM.
>
> Sure, but if you're considering cold boot attacks and other RAM
> attacks, Anti-Evil Maid is more for you. It will wipe the memory.
> Then the problem remains the RAM swap attack, which would likely allow
> the attacker to copy any live VM's address space anyway, including any
> loaded or cached files.
>
> Of course you could mark some files as "no-cache", so as to limit
> their lifetime in RAM - but Qubes (or rather Linux) doesn't have that
> feature.
> Great idea for an improvement though, could be done e.g. using
> extended attributes and a relatively simple kernel patch.

Hmm, actually it could be done even simpler: on suspend or lock, flush
the disk caches (the vm.drop_caches sysctl) and unload the disk
encryption keys. On resume, ask for the disk password again.
I'm not sure if unloading the disk encryption keys is possible on an
actually running system though - so perhaps it'd be best to split Qubes
VMs onto separate LVM volumes.
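
For what it's worth, dm-crypt already exposes exactly this operation:
"cryptsetup luksSuspend" freezes the device and wipes the volume key from
kernel memory until luksResume is called. A minimal sketch of such a
suspend hook (the mapping name "luks-root" is just an example; whether
this is workable for a live root filesystem is precisely the open
question above):

/* Suspend hook sketch: sync, drop the page/dentry/inode caches, then ask
 * dm-crypt to suspend the mapping and wipe its key from kernel memory. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int drop_caches(void)
{
    sync();                          /* flush dirty pages first */
    FILE *f = fopen("/proc/sys/vm/drop_caches", "w");
    if (!f)
        return -1;
    fputs("3\n", f);                 /* drop page cache + dentries/inodes */
    fclose(f);
    return 0;
}

int main(void)
{
    if (drop_caches() != 0)
        perror("drop_caches");

    /* luksResume (plus the passphrase prompt) would go in the resume hook. */
    return system("cryptsetup luksSuspend luks-root");
}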

--
Radosław Szkodziński

Joanna Rutkowska

Feb 24, 2013, 6:42:22 AM
to qubes...@googlegroups.com, Radoslaw Szkodzinski
On 02/23/13 22:14, Radoslaw Szkodzinski wrote:
> On Sat, Feb 23, 2013 at 9:21 PM, cprise <cpr...@gmail.com> wrote:
>> OK, I'm still worried. I think there should be no disk keys sitting in
>> RAM.
>
> Sure, but if you're considering cold boot attacks and other RAM
> attacks, Anti-Evil Maid is more for you. It will wipe the memory.

It won't. Read carefully what I wrote:

https://groups.google.com/group/qubes-devel/msg/6c5bcd67358e4247

> Then the problem remains the RAM swap attack, which would likely allow
> the attacker to copy any live VM's address space anyway, including any
> loaded or cached files.
>
> Of course you could mark some files as "no-cache", so as to limit
> their lifetime in RAM - but Qubes (or rather Linux) doesn't have that
> feature.
> Great idea for an improvement though, could be done e.g. using
> extended attributes and a relatively simple kernel patch.
>
>> I remember reading about a Linux disk encryption module that worked
>> in-cache to avoid key exposure, but thought that it relied on AES-NI.
>
> It does. It can be done without AES-NI, but would be expensive in
> terms of cache.
>

Known implementations usually keep the keys in CPU registers, not in the
cache (which would be quite difficult on x86, which lacks explicit cache
controls):

http://www1.informatik.uni-erlangen.de/tresor
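
The core of the trick, as a purely illustrative ring-0 sketch (hypothetical
helper names, not the actual TRESOR patch):

/* Stash a 256-bit key in the four x86-64 debug registers instead of RAM.
 * Ring-0 only; the real patch also blocks ptrace access to the registers
 * and re-derives the AES round keys in SSE registers on every request. */
static inline void set_dr(int n, unsigned long val)
{
    switch (n) {
    case 0: asm volatile("mov %0, %%db0" : : "r"(val)); break;
    case 1: asm volatile("mov %0, %%db1" : : "r"(val)); break;
    case 2: asm volatile("mov %0, %%db2" : : "r"(val)); break;
    case 3: asm volatile("mov %0, %%db3" : : "r"(val)); break;
    }
}

static void tresor_style_setkey(const unsigned long key[4])
{
    /* Each 64-bit debug register holds one quarter of a 256-bit AES key. */
    for (int i = 0; i < 4; i++)
        set_dr(i, key[i]);
}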

But it's only a partial solution against Cold Boot attacks, because,
while the attacker might not find your LUKS key, you never know whether
your recent HTTPS session keys, SSH key, or GPG private key were not left
in RAM when the attacker came and removed the battery (to force a quick
shutdown)...

The somewhat more complete solution against Cold Boot attacks I'm aware
of is Intel TXT's SCLEAN module, which I described in the post referenced
above. Yet this method doesn't protect against removing the physical DRAM
dies and putting them into some other machine for analysis, which is what
the Tresor trick prevents.

The Tresor kernel patch has not been merged upstream, most likely because
it (ab)uses the Debug Registers for key storage. This, however, is not a
big problem for Qubes dom0 -- one doesn't run debuggers there anyway,
right? So, perhaps we should consider merging the Tresor patch
to our dom0 kernel sometime. One annoyance, from the user's perspective, is
that Tresor prompts for a passphrase upon each system start/wakeup -- of
course it must do that in order to pre-load the Debug Registers with the
key again. But this somehow duplicates our "screensaver" passphrase prompt
on S3 wakeup, as well as the LUKS passphrase prompt on boot. So, ideally,
we could merge all those three passphrases, i.e. the LUKS, the
screensaver, and the Tresor one, into a single passphrase?

j.


Marek Marczykowski

Feb 24, 2013, 7:56:17 AM
to qubes...@googlegroups.com, Joanna Rutkowska, Radoslaw Szkodzinski
Unless debugging some Qubes code...

> So, perhaps we should consider merging the Tresor patch
> to our dom0 kernel sometime.

What about the interaction between Debug Registers and other domains?
Doesn't any VM (PV and/or HVM) have access to those DRs?

> One annoyance, from the user's perspective, is
> that Tresor prompts for a passphrase upon each system start/wakeup -- of
> course it must do that in order to pre-load the Debug Registers with the
> key again. But this somehow duplicates our "screensaver" passphrase prompt
> on S3 wakeup, as well as the LUKS passphrase prompt on boot. So, ideally,
> we could merge all those three passphrases, i.e. the LUKS, the
> screensaver, and the Tresor one, into a single passphrase?

Using the same passphrase for Tresor and the screensaver could leak it
into RAM. But perhaps the solution would be to just disable the "lock on
suspend" feature (and use a different passphrase there).

--
Best Regards / Pozdrawiam,
Marek Marczykowski
Invisible Things Lab


Joanna Rutkowska

Feb 24, 2013, 8:15:10 AM
to Marek Marczykowski, qubes...@googlegroups.com, Radoslaw Szkodzinski
I think they use only one of the DRs, so there is still space for setting
up 3 other h/w breakpoints, right?

>> So, perhaps we should consider merging the Tresor patch
>> to our dom0 kernel sometime.
>
> What about the interaction between Debug Registers and other domains?
> Doesn't any VM (PV and/or HVM) have access to those DRs?
>

That would, of course, constitute a bug in Xen, right? I think there even
was such a bug once, a long time ago.

>> One annoyance, from the user's perspective, is
>> that Tresor prompts for a passphrase upon each system start/wakeup -- of
>> course it must do that in order to pre-load the Debug Registers with the
>> key again. But this somehow duplicates our "screensaver" passphrase prompt
>> on S3 wakeup, as well as the LUKS passphrase prompt on boot. So, ideally,
>> we could merge all those three passphrases, i.e. the LUKS, the
>> screensaver, and the Tresor one, into a single passphrase?
>
> Using the same passphrase for Tresor and the screensaver could leak it
> into RAM. But perhaps the solution would be to just disable the "lock on
> suspend" feature (and use a different passphrase there).
>

Yeah, we definitely don't want to use the KDE screensaver for that :)
Rather something "pre-boot", like the LUKS passphrase prompter.

j.


Marek Marczykowski

Feb 25, 2013, 5:30:47 PM
to Joanna Rutkowska, qubes...@googlegroups.com, Radoslaw Szkodzinski
I'm still not certain whether this approach will work on Xen. I'm almost
sure that a VM can use debug registers (at least hardware breakpoints work
in gdb under a Xen PV VM -- see the test sketch below). So I see three
possibilities:
1. I'm wrong and Debug Registers aren't available to a VM (which could
also include dom0...).
2. Debug Registers are directly accessible to any VM (which would be a bug
in Xen).
3. Debug Registers are accessible to any VM, but their contents are kept
as part of the domain context. That means they are copied to RAM, which
makes Tresor no better than current dm-crypt with keys in RAM.
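
For reference, a quick test one could run inside a PV VM (a sketch only):
install a hardware watchpoint on a child process via ptrace -- the same
mechanism gdb uses -- and see whether it actually fires:

/* Fork a child, set a 4-byte write watchpoint on `watched` through the
 * child's DR0/DR7 via PTRACE_POKEUSER, then check whether writing the
 * variable raises SIGTRAP. */
#include <signal.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile int watched;

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        raise(SIGSTOP);          /* let the parent install the watchpoint */
        watched = 42;            /* should trigger SIGTRAP if DRs really work */
        _exit(0);
    }

    int status;
    waitpid(child, &status, 0);  /* child stopped itself */

    /* DR0 = address to watch; DR7 = slot 0 enabled, break on 4-byte write. */
    ptrace(PTRACE_POKEUSER, child,
           (void *)offsetof(struct user, u_debugreg[0]), (void *)&watched);
    ptrace(PTRACE_POKEUSER, child,
           (void *)offsetof(struct user, u_debugreg[7]), (void *)0x000d0001UL);

    ptrace(PTRACE_CONT, child, NULL, NULL);
    waitpid(child, &status, 0);

    if (WIFSTOPPED(status) && WSTOPSIG(status) == SIGTRAP)
        printf("watchpoint fired: debug registers are usable in this VM\n");
    else
        printf("no SIGTRAP (status %d): DR access may be filtered\n", status);

    kill(child, SIGKILL);
    waitpid(child, &status, 0);
    return 0;
}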

Joanna Rutkowska

Feb 25, 2013, 6:50:04 PM
to Marek Marczykowski, qubes...@googlegroups.com, Radoslaw Szkodzinski
Ah, yes, I think you might be quite right about #3. Anyway, this
discussion is already linked to the ticket, so hopefully we will get
back to it at some later stage...

BTW, I forgot to mention that the most effective protection against Cold
Boot Attacks is to... shut down your system when you leave your hotel
room. Normally this would not be very wise, because it makes Evil Maid
attacks trivial, but for those of us who use Anti Evil Maid, this is quite
a reasonable option, I think. For now, at least.

joanna.


Daniel Selifonov

Feb 28, 2013, 11:57:49 PM
to qubes...@googlegroups.com
Hello all,
The last time TRESOR came up in the mailing list discussion was shortly after 28C3, when one of the TRESOR authors presented TreVisor, a hypervisor variant that "transparently" brings debug-register AES-NI disk crypto. Though the presentation itself was in German, they've since published a paper on their implementation: http://www1.cs.fau.de/filepool/projects/trevisor/trevisor.pdf

I read the paper and in brief summary: TreVisor performs AES in registers in the same way as TRESOR (key storage in debugger breakpoint registers, AES-NI for encryption, with SSE registers/GPR used transiently in atomic crypto-operation sequences). TreVisor is built upon BitVisor, a thin hypervisor designed for one guest, which passes through the majority of hardware to that guest, but has the ability to "para-passthrough" hardware (like SATA disks) to intercept guest access. BitVisor protects the hypervisor memory region from external DMA by way of IOMMU/VT-d.

Regarding virtualization of debug register access: on x86, dr0-dr3 are only accessible by ring 0, so PV domains in ring 3 have no direct access. If GDB HW breakpoints work in Xen PV guests, it must be done by means of a hypercall. For HVM, TreVisor raises breakpoint register access to ring "-1" by (1, [optional]) intercepting cpuid instructions and unsetting the debugging extension capability feature, (2) intercepting/filtering cr4 (control register) writes to the bits that enable/disable debugging, (3) enable "MOV-DR exiting" virtualization control to generate a VMEXIT on any MOV to/from debug registers, and filtering them out. I imagine there are additional nuances with context switches between multiple virtual machines, with debug register values being part of the domain context RAM copy, as Marek mentioned.
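
To make the MOV-DR handling concrete, here is a purely illustrative sketch
of that intercept policy (hypothetical types and names; not actual Xen or
TreVisor code):

/* Policy sketch for a "MOV-DR exiting" VMEXIT handler: deny DR reads from
 * unprivileged guests and turn their DR writes into no-ops, so debug
 * register contents (i.e. the key) never reach an HVM. */
enum dr_access { DR_READ, DR_WRITE };

struct vcpu_ctx {
    unsigned long gpr[16];   /* guest general-purpose registers */
    int is_privileged;       /* true only for the admin domain (dom0) */
};

static void handle_mov_dr_exit(struct vcpu_ctx *v, int dr, int gpr,
                               enum dr_access dir)
{
    if (v->is_privileged) {
        /* dom0 keeps full access so a TRESOR-patched kernel can load keys. */
        /* ... forward to the real debug register ... */
        return;
    }

    if (dir == DR_READ)
        v->gpr[gpr] = 0;     /* never leak real DR contents to a guest */
    /* DR_WRITE: silently dropped (null-op); the key stays intact. */

    /* In either case, advance the guest RIP past the MOV instruction. */
}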

Apparently doing the above is fairly compatible, with the expected inoperability of hardware breakpoints in guest debugger applications, except that 64-bit Windows 7 fails to boot if the debug extension is masked out in CPUID (though it boots and operates fine if the bit is not masked).

I suspect it would be possible to modify TRESOR style in-register encryption, and preserve limited breakpoint access by using shorter AES keys -- a 128-bit AES key would fit into two breakpoint registers, potentially still leaving two fully usable HW breakpoints. Alternatively, it is likely possible to emulate (slow) hardware breakpoint behavior with page table manipulation, and hypervisor directed instruction stepping, a bit like how PaX implements W^X on pre-NX-bit hardware. However, both approaches are considerably more complex than disallowing HW breakpoints, and apparently disallowing breakpoints is of minimal impact to typical users.

Implementing this in Qubes would require making alterations to Xen to provide exclusive access to the debug registers to dom0 by filtering out access from HVMs, disabling any hypercalls that grant domU PV guests access, making sure they don't get swapped/stored as part of domain contexts, applying the TRESOR kernel patches to dom0, and implementing a usable set of userspace utilities to push keys into registers on startup/wakeup.

I've been interested in the topic of coldboot attack mitigation for some time, and exploring TRESOR on Qubes. So, I could take the initiative on actually implementing it, though I will need to familiarize myself more deeply with Xen code/design first.

Regards,
-Daniel Selifonov

Joanna Rutkowska

Mar 1, 2013, 4:53:14 AM
to qubes...@googlegroups.com, Daniel Selifonov
I don't quite like the idea of "homebrew" hypervisor patching in order to
support the Tresor hack^H^H^H^Hpatch. Patching the hypervisor, especially
its context-switching code, is super security-sensitive and I'm afraid
we might introduce inter-VM attack surface this way.

And then again -- Tresor, while nice to have (if for free), still only
protects *one* key against cold boot attacks -- i.e. the disk encryption
key. It doesn't protect any of the apps' keys, such as my SSH keys, my
KeePass passwords, my Bitcoin private keys, HTTPS session keys, etc.
Perhaps we could even say that in this respect it offers a bit of an
illusion of security -- a user thinks she is protected against cold boot
attacks and proudly leaves her laptop in a hotel in S3 sleep, while in
fact she's not.

joanna.


cprise

Mar 1, 2013, 6:56:24 AM
to qubes...@googlegroups.com
If I may wade back in (no doubt over my head) for a moment...

On 3/1/13 4:53 AM, Joanna Rutkowska wrote:
> On 03/01/13 05:57, Daniel Selifonov wrote:
>> ...
>>
>> Implementing this in Qubes would require making alterations to Xen to
>> provide exclusive access to the debug registers to dom0 by filtering out
>> access from HVMs, disabling any hypercalls that grant domU PV guests
>> access, making sure they don't get swapped/stored as part of domain
>> contexts, applying the TRESOR kernel patches to dom0, and implementing a
>> usable set of userspace utilities to push keys into registers on
>> startup/wakeup.
>>
>> I've been interested in the topic of coldboot attack mitigation for some
>> time, and exploring TRESOR on Qubes. So, I could take the initiative on
>> actually implementing it, though I will need to familiarize myself more
>> deeply with Xen code/design first.
>>
> I don't quite like the idea of "homebrew" hypervisor patching in order to
> support the Tresor hack^H^H^H^Hpatch. Patching the hypervisor, especially
> its context-switching code, is super security-sensitive and I'm afraid
> we might introduce inter-VM attack surface this way.

So if this is done at all, it would be as an optional, experimental feature.
> And then again -- Tresor, while nice to have (if for free), still only
> protects *one* key against cold boot attacks -- i.e. the disk encryption
> key. It doesn't protect any of the apps' keys, such as my SSH keys, my
> KeePass passwords, my Bitcoin private keys, HTTPS session keys, etc.
> Perhaps we could even say that in this respect it offers a bit of an
> illusion of security -- a user thinks she is protected against cold boot
> attacks and proudly leaves her laptop in a hotel in S3 sleep, while in
> fact she's not.
>
> joanna.
>
Excellent point. But it could have real value as a mitigation tactic.
Hard drives are very large these days, potentially holding vast
amounts of personal info that I think many/most of us who are
considering Qubes in the first place would be very reluctant to spread
online to any great extent. We Qubes adopters are probably more
'personal computing' oriented than the average person; we view PC
security as heavily compromised and want to regain a sense of control
over our own devices. So Tresor enables control over what is probably
the most sensitive internal system component, owing to its sheer volume.

How many of these key-using programs do you think flush their keys and
passphrases (perhaps after saving to disk) when they are notified the
system is going to sleep? None, maybe?

One technical strike against Tresor might be that the keys for secondary
hard drives do not fit (I don't know -- just guessing) in the processor,
and so they become vulnerable to cold boot attacks like the other key
types you mentioned. So there is a question of how many AES-128 keys can
fit, and I get the impression from the correspondence here that it is
one or two at most. (BTW, 128-bit keys should not be considered a
compromise -- right? There is an attack on AES-256 that greatly weakens
its key strength, and maybe this could be cited as another reason not to
trust hardware HD encryption too much, as it all seems to use AES-256.)

Ultimately we should ask whether CB attacks can be successfully stopped
with software, and (if so) how inconvenient those solutions are. So, there
is AEM, which handles the use case of leaving the laptop in a hotel
room for N hours, and once it is set up we make AEM work by shutting down
the laptop. With an eye towards efficiency and ease for the user, would
it not be possible to encrypt the contents of RAM whenever the system
goes to sleep? Perhaps even beyond that, hibernation could be
supported... Or maybe hibernate already handles this correctly for
anti-CB, and we just need to enable it? How about a variation on secure
hibernate that uses RAM instead of the HD, effectively giving us
encrypted RAM during sleep?


Daniel Selifonov

Mar 1, 2013, 12:56:36 PM
to Joanna Rutkowska, qubes...@googlegroups.com
While minimizing patching to the hypervisor is a laudable goal, the alterations required to support TRESOR really only change the access policy for the debug registers, so that only the administrative domain can manipulate them and so that they are not unintentionally swapped to RAM in domain context switches. Xen already performs non-trivial processing of debug register arguments on behalf of PV guests when they alter breakpoints by hypercall -- if anything, restricting dr# to dom0 will reduce the hypervisor surface exposed to unprivileged guests. More care is required in the HVM case, but, again, Xen already enables MOV-DR VM exits for HVMs, so modifying the intercept to deny reads and null-op writes is not a complex operation. Changes to the domain context switching would be solely to exclude dr# from read/write.

Conceptually, this sort of hardware isolation from guests seems precisely what Xen was designed to do, and is more akin to treating dr# as MSRs (e.g. for fan/ACPI control), which aren't exposed outside the admin domain anyway.

Your point about other application key exposure in RAM is a valid, but separate/larger, one. Coldboot style attacks changed the threat model for RAM in a way that still hasn't been fully reconciled with the lessons of disk encryption. Prior to widespread availability of FDE (esp. in Windows, where there were no free/open FDE systems until TrueCrypt 5.0 in early 2008), users would frequently store secret documents in encrypted containers, but work on an otherwise unencrypted drive. As a result, temporary, paged, or otherwise cached versions of their secret documents could be persisted unencrypted. With DRAM data remnants, we fall into the same trap with FDE: secrets are stored in unencrypted memory, which turns out to be less transient than anticipated in the original threat assessment.

Perhaps minimizing the amount of data stored unencrypted in RAM, and aggressively paging to (encrypted) disk, or an encrypted RAM, would be a useful exercise for particularly sensitive processes/VMs. This paper may be of interest: http://tastytronic.net/~pedro/docs/ieee-hst-2010.pdf where the RAM hierarchy is extended to support a small "clear" RAM and a larger encrypted RAM. The author was not able to resolve the issue of where to hold the RAM encryption key -- it ends up, rather circularly, in RAM -- but TRESOR-style register key storage could address that. By necessity, some data will still need to be cleartext in RAM, so it wouldn't be a foolproof way to protect your ssh/gpg/btc/etc. keys, but I think it's a mitigation strategy worth exploring. We could maybe even extend software which handles cryptographic keys to aggressively page keys out of cleartext storage, except when they are required for an operation. Undoubtedly, there would be significant performance impact, but the author's "real-world" tests show it would likely be tolerable to use.
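
As a small illustration of that last idea, a sketch of keeping a key in
cleartext only while it is actually in use (unseal_key() and do_crypto_op()
are hypothetical placeholders):

/* Lock the key buffer so it can never be paged out unencrypted, use the
 * key, and wipe it the moment the operation is done. */
#include <string.h>
#include <sys/mman.h>

#define KEY_LEN 32

/* A memset the compiler is not allowed to optimize away. */
static void secure_wipe(void *p, size_t n)
{
    volatile unsigned char *vp = p;
    while (n--)
        *vp++ = 0;
}

int with_key_do(int (*unseal_key)(unsigned char *key),
                int (*do_crypto_op)(const unsigned char *key))
{
    unsigned char key[KEY_LEN];

    if (mlock(key, sizeof(key)) != 0)   /* keep it off (unencrypted) swap */
        return -1;

    int ret = -1;
    if (unseal_key(key) == 0)           /* fetch or derive the key... */
        ret = do_crypto_op(key);        /* ...use it... */

    secure_wipe(key, sizeof(key));      /* ...and wipe it immediately */
    munlock(key, sizeof(key));
    return ret;
}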

I'm still interested in exploring TRESOR on Qubes, so I will likely still attempt to implement it. Whether or not you decide to incorporate such patches into your releases is, as always, your prerogative. :)

Regards,
-Daniel Selifonov

Joanna Rutkowska

Mar 1, 2013, 1:06:11 PM
to Daniel Selifonov, qubes...@googlegroups.com
Do you think your (future) Xen patches will have a chance of being
accepted upstream by Xen.org? I also remember some other project tried
to keep the key in an MSR instead of in one of the DRx regs --
perhaps such an approach would be more likely to be accepted upstream by
Xen, because it doesn't handicap debuggers.

I would definitely encourage you, if you're so determined, to try to
upstream your patches to Xen.

joanna.


Daniel Selifonov

Mar 1, 2013, 1:46:48 PM
to Joanna Rutkowska, qubes...@googlegroups.com
I don't know enough about Xen's contribution process to know the answer to the first question. It is an "abuse" of the debug registers, so, perhaps it is unlikely to be accepted upstream, and my experiment will just be something I make and use in my own Qubes build. Frankly, I'm fine with that.

There have been several different approaches to implementing register key storage. Loop-Amnesia used MSRs as key-storage, but did so with two important caveats: (1) no SMP support, (2) no hardware performance monitoring (which I believe includes temperature/fan control). AESSE used SSE(2+) registers as both key-storage and encryption processing space -- by necessity this required disabling general SSE support for the operating system, and processes attempting to use it anyway would get killed for executing invalid instructions. I can't imagine AESSE playing well with HVM domains, since SSE are not generally considered privileged instructions, and even if they're disabled, there would likely be insurmountable compatibility problems with Windows, and many applications. While Loop-Amnesia has fewer tradeoffs from the perspective of the system (HW breakpoints still work, SSE still works), the user tradeoffs are much more severe.

Software breakpoints (which insert INT 3's) still work, so, it doesn't cripple debuggers for ordinary (e.g. userspace) use either, so TRESOR's approach is the most acceptable tradeoff to me.

Regards,
-Daniel Selifonov

Marek Marczykowski

Mar 1, 2013, 2:15:57 PM
to qubes...@googlegroups.com, Daniel Selifonov, Joanna Rutkowska
Perhaps this will be accepted if it is an optional feature (switchable by
a cmdline option or so). IMHO you should ask xen-devel about this (perhaps
even forward/link this discussion).

> There have been several different approaches to implementing register key
> storage. Loop-Amnesia used MSRs as key-storage, but did so with two
> important caveats: (1) no SMP support, (2) no hardware performance
> monitoring (which I believe includes temperature/fan control). AESSE used
> SSE(2+) registers as both key-storage and encryption processing space -- by
> necessity this required disabling general SSE support for the operating
> system, and processes attempting to use it anyway would get killed for
> executing invalid instructions. I can't imagine AESSE playing well with HVM
> domains, since SSE are not generally considered privileged instructions,
> and even if they're disabled, there would likely be insurmountable
> compatibility problems with Windows, and many applications. While
> Loop-Amnesia has fewer tradeoffs from the perspective of the system (HW
> breakpoints still work, SSE still works), the user tradeoffs are much more
> severe.
>
> Software breakpoints (which insert INT 3's) still work, so, it doesn't
> cripple debuggers for ordinary (e.g. userspace) use either, so TRESOR's
> approach is the most acceptable tradeoff to me.

Yes, it looks like the best approach of the above. Anyway, as Joanna said,
it will be much better if it comes through the Xen project, as it will get
a much more thorough review (by people with deep knowledge of the Xen code).

Radoslaw Szkodzinski

Mar 2, 2013, 1:29:13 AM
to qubes...@googlegroups.com, Joanna Rutkowska
You don't really need encryption, just good scrubbing, and Xen has
that already - but it's only run when the VM releases the memory to
another VM.

GPG and SSH keys are reasonably well handled by ssh-agent/gpg-agent -
and unlike disk keys, you can almost always purge them from memory.
Add scrubbing there and these should be fine. I'd be more concerned
about SSL keys kept in your browser and mail program - check whether e.g.
NSPR (Firefox, Thunderbird) does the right thing and scrubs the memory
when the key is not in use.

Perhaps just turning the memory pool into a fully dynamic tmem pool with
fast release of pages could be enough, as that would make the scrubber
run much more often - with encrypted swap so it doesn't crash by
accident, yet stays secure. This probably won't work for HVMs though.
Maybe it will for some PV-on-HVM setups.

And of course marking some sensitive files as "no-cache", or even
completely disabling the Linux disk cache for very sensitive domains,
would help. An encrypted tmpfs shouldn't be too hard either - I'd start
with modifying zramfs to support encryption.
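
In fact, something close to an encrypted RAM-backed filesystem can be
assembled from existing pieces already, by layering plain dm-crypt (with a
throwaway random key) over a zram device. A rough sketch, with example
sizes and paths, to be run as root:

/* Build an encrypted, compressed RAM disk: zram0 -> plain dm-crypt keyed
 * with random one-shot data -> ext2 -> mount.  Error handling is minimal. */
#include <stdio.h>
#include <stdlib.h>

static int shell(const char *cmd)
{
    int rc = system(cmd);
    if (rc != 0)
        fprintf(stderr, "failed (%d): %s\n", rc, cmd);
    return rc;
}

int main(void)
{
    shell("modprobe zram");

    /* Give zram0 a 512 MB (uncompressed) size. */
    FILE *f = fopen("/sys/block/zram0/disksize", "w");
    if (!f) {
        perror("/sys/block/zram0/disksize");
        return 1;
    }
    fputs("536870912\n", f);
    fclose(f);

    /* Plain dm-crypt mapping keyed from 32 random bytes: the contents are
     * unrecoverable once the mapping is torn down or power is lost. */
    shell("head -c 32 /dev/urandom | "
          "cryptsetup --key-file=- create ezram /dev/zram0");
    shell("mkfs.ext2 -q /dev/mapper/ezram");
    shell("mkdir -p /mnt/ezram && mount /dev/mapper/ezram /mnt/ezram");
    return 0;
}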

--
Radosław Szkodziński

Eric Shelton

Aug 22, 2013, 10:49:41 AM
to qubes...@googlegroups.com, Daniel Selifonov, Joanna Rutkowska, marmarek
My apologies for reviving an old thread, but Daniel gave a talk on this earlier this month at DEFCON:


It looks like Daniel worked up the necessary patches for Xen and Linux, and possibly a couple of the registers will now be available for hardware debugging (maybe not for all code, because the slides say Xen uses DR0/DR1 and dom0 uses DR2/DR3). It would be good to seriously investigate campaigning for this to be incorporated into upstream Xen, perhaps as a selectable feature, as suggested by Marek.

The slides indicate this code is available at https://github.com/thyth/phalanx, but nothing is available there - the thyth github account exists, but has no accessible projects. My emails have not succeeded in reaching Daniel either.

Does anyone have any more details on this, or better yet have the code?