I don't have a real answer for you, but my first idea is that it would be an
attribute you could apply to a variable. How you *force* that to never
spill in the backend is an entirely different issue. You'd basically
force a split on a certain variable under certain conditions - would
it be possible? Probably (why not?) - but unless I'm missing something
it would be hell to schedule.
There are use cases in HPC and GPGPU code which may benefit from
something similar, but for different reasons (some really hot
variable that you never want spilled, for example).
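As a rough sketch of what marking such a variable might look like today: clang's existing annotate attribute lowers local-variable annotations to llvm.var.annotation intrinsics, which a custom backend pass could in principle inspect. The "no_spill" string and the dot() function are invented for illustration; nothing in LLVM acts on them currently.

```c
// Hypothetical sketch: "no_spill" is an invented annotation string.
// clang lowers annotate on locals to llvm.var.annotation, which a
// custom register-allocation pass could use to pin 'acc' to a register.
static long dot(const long *a, const long *b, int n) {
    long acc __attribute__((annotate("no_spill"))) = 0; /* hot accumulator */
    for (int i = 0; i < n; i++)
        acc += a[i] * b[i];
    return acc;
}
```

With gcc the unknown annotation is simply ignored with a warning, so the code stays portable across the two compilers.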
_______________________________________________
LLVM Developers mailing list
llvm...@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
That breaks the whole IR idea of using alloca to allocate/denote space for local variables, and then optimizing those
into SSA values when optimization proves that is OK.
Also, for a lot of things, that attribute is simply impossible to implement. Any value that is live across a call needs to be spilled to memory.
You cannot put an unspillable value in a callee-saved register, because you cannot know whether the callee will save it or not.
And if it is in a caller-save register, then the caller has to spill it if it is live across a call.
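The live-across-call constraint is easy to see in a tiny sketch (function names are invented; noinline stands in for a genuinely external call):

```c
// 'secret' must survive the call to opaque(): in a caller-saved
// register it gets spilled around the call; in a callee-saved register
// the callee may spill it in its own prologue. Either way it can reach
// memory, which is exactly the problem described above.
__attribute__((noinline)) static int opaque(int x) {
    return x * 2;
}

static int use_secret(int secret) {
    int t = opaque(1);   /* 'secret' is live across this call */
    return secret + t;   /* ...and used afterwards */
}
```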
Sounds like such a security-sensitive value would need to treat calls as barriers for any kind of reordering.
Also, at the end of the value's live range, it would not have to be merely dead, but dead-and-buried (i.e. overwritten) to avoid scavenging by subsequent callees. Same goes for merely copying the value from one register to another, the source would have to be erased.
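The "dead-and-buried" requirement can be sketched with volatile writes, which the compiler cannot delete as dead stores; glibc and the BSDs provide explicit_bzero for the same purpose, and this loop is the portable equivalent (the burn() name is invented):

```c
#include <stddef.h>

// Erase a secret after its last use. The volatile-qualified pointer
// prevents the stores from being optimized away as dead, so the value
// is actually overwritten rather than merely going out of scope.
static void burn(void *p, size_t n) {
    volatile unsigned char *vp = (volatile unsigned char *)p;
    while (n--)
        *vp++ = 0;
}
```

Note this only covers the memory copy; register copies and spill slots are exactly what the compiler would additionally have to erase.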
--paulr
From: llvm-dev [mailto:llvm-dev...@lists.llvm.org]
On Behalf Of Stephen Crane via llvm-dev
Sent: Monday, November 02, 2015 4:08 PM
To: Smith, Kevin B
Cc: llvm...@lists.llvm.org; Per Larsen; Andrei Homescu
Subject: Re: [llvm-dev] How to prevent registers from spilling?
Thanks, I hadn't thought about the HPC applications. I'm not sure that the requirements for the HPC and security use-cases are compatible. Pinning for performance can tolerate spills if it is still fast, while security uses can tolerate slow rematerialization but not spills. Maybe a shared infrastructure is possible but with variable constraints?
I implemented something like this for MIPS a couple of years ago. A few things:
- Marking variables doesn’t make sense. You don’t know what temporaries will exist that are derived from that variable and can allow an attacker to materialise it. You really want to mark functions as sensitive.
- Preventing spills does not actually buy you anything. A lot of recent attacks exploit signal handlers. If you can deliver a signal in the middle of the sensitive code then all of its registers are spilled in the ucontext. You need to also modify the kernel to zero this region of the stack after signal delivery, at the very least, and also ideally use separate signal stacks with different page table mappings. I seem to recall that this was a possible attack vector for your Oakland paper too, as it will allow unmasking the PC.
- Preventing spills won’t work if you’re not in a leaf function, as you don’t know whether the callee will spill the callee-saved registers that you’ve put your temporaries in.
The approach that I used made the compiler zero all spill slots on the return path, store zero in all temporary registers and unused argument registers, and emit a warning if you called a non-sensitive function from a sensitive function. This is something that you can do entirely in the back end, as long as you have a list of sensitive functions. Other projects got in the way and I never had time to do a proper security evaluation, but the approach seemed sane to the cryptographers that I discussed it with.
David