KASLR vs. KASAN on x86


Dave Hansen

Mar 3, 2023, 5:35:37 PM
to the arch/x86 maintainers, LKML, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasa...@googlegroups.com, Kees Cook, Thomas Garnier
Hi KASAN folks,

Currently, x86 disables (most) KASLR when KASAN is enabled:

> /*
>  * Apply no randomization if KASLR was disabled at boot or if KASAN
>  * is enabled. KASAN shadow mappings rely on regions being PGD aligned.
>  */
> static inline bool kaslr_memory_enabled(void)
> {
> 	return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
> }

I'm a bit confused by this, though. This code predates 5-level paging,
so a PGD should be assumed to cover 512 GB. The kernel_randomize_memory()
granularity seems to be 1 TB, which *is* PGD-aligned.

Are KASAN and kernel_randomize_memory()/KASLR (module and
cpu_entry_area randomization are separate) really incompatible? Does
anyone have a more thorough explanation than that comment?

This isn't a big deal since KASAN is a debugging option after all. But,
I'm trying to unravel why this:

> if (kaslr_enabled()) {
> 	pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
> 		 kaslr_offset(),
> 		 __START_KERNEL,
> 		 __START_KERNEL_map,
> 		 MODULES_VADDR-1);

for instance uses kaslr_enabled() which includes just randomizing
module_load_offset, but *not* __START_KERNEL. I think this case should
be using kaslr_memory_enabled() to match up with the check in
kernel_randomize_memory(). But this really boils down to what the
difference is between kaslr_memory_enabled() and kaslr_enabled().

Andrey Ryabinin

Mar 8, 2023, 12:24:11 PM
to Dave Hansen, the arch/x86 maintainers, LKML, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasa...@googlegroups.com, Kees Cook, Thomas Garnier
On Fri, Mar 3, 2023 at 11:35 PM Dave Hansen <dave....@intel.com> wrote:
>
> Hi KASAN folks,
>
> Currently, x86 disables (most) KASLR when KASAN is enabled:
>
> > /*
> >  * Apply no randomization if KASLR was disabled at boot or if KASAN
> >  * is enabled. KASAN shadow mappings rely on regions being PGD aligned.
> >  */
> > static inline bool kaslr_memory_enabled(void)
> > {
> > 	return kaslr_enabled() && !IS_ENABLED(CONFIG_KASAN);
> > }
>
> I'm a bit confused by this, though. This code predates 5-level paging,
> so a PGD should be assumed to cover 512 GB. The kernel_randomize_memory()
> granularity seems to be 1 TB, which *is* PGD-aligned.
>
> Are KASAN and kernel_randomize_memory()/KASLR (module and
> cpu_entry_area randomization are separate) really incompatible? Does
> anyone have a more thorough explanation than that comment?
>

Yeah, I agree with you here; the comment doesn't make sense to me either.
However, I see one problem with KASAN and kernel_randomize_memory()
compatibility: the vaddr_start - vaddr_end range includes the KASAN
shadow memory (Documentation/x86/x86_64/mm.rst):
 ffffea0000000000 | -22 TB | ffffeaffffffffff |  1 TB | virtual memory map (vmemmap_base)
 ffffeb0000000000 | -21 TB | ffffebffffffffff |  1 TB | ... unused hole
 ffffec0000000000 | -20 TB | fffffbffffffffff | 16 TB | KASAN shadow memory
 fffffc0000000000 |  -4 TB | fffffdffffffffff |  2 TB | ... unused hole
                  |        |                  |       |     vaddr_end for KASLR

So vmemmap_base, and probably some part of vmalloc, could easily end
up in the KASAN shadow.

> This isn't a big deal since KASAN is a debugging option after all. But,
> I'm trying to unravel why this:
>
> > if (kaslr_enabled()) {
> > 	pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
> > 		 kaslr_offset(),
> > 		 __START_KERNEL,
> > 		 __START_KERNEL_map,
> > 		 MODULES_VADDR-1);
>
> for instance uses kaslr_enabled() which includes just randomizing
> module_load_offset, but *not* __START_KERNEL. I think this case should
> be using kaslr_memory_enabled() to match up with the check in
> kernel_randomize_memory(). But this really boils down to what the
> difference is between kaslr_memory_enabled() and kaslr_enabled().

This code looks correct to me. __START_KERNEL is just a constant; it's
never randomized. The location of the kernel image (.text, .data, ...),
however, is randomized, and kaslr_offset() is the random offset here.
So:
- kaslr_enabled() - randomization of the kernel image and modules.
- kaslr_memory_enabled() - randomization of the linear mapping
  (__PAGE_OFFSET), vmalloc (VMALLOC_START) and vmemmap (VMEMMAP_START).

Michal Koutný

Mar 13, 2023, 5:41:32 AM
to Andrey Ryabinin, Dave Hansen, the arch/x86 maintainers, LKML, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasa...@googlegroups.com, Kees Cook, Thomas Garnier
On Wed, Mar 08, 2023 at 06:24:05PM +0100, Andrey Ryabinin <ryabin...@gmail.com> wrote:
> So vmemmap_base, and probably some part of vmalloc, could easily end
> up in the KASAN shadow.

Would it help to (conditionally) reduce vaddr_end to the beginning of
the KASAN shadow memory?
(I'm not that familiar with KASAN, so IOW: would KASAN handle the
randomized linear mapping (__PAGE_OFFSET), vmalloc (VMALLOC_START) and
vmemmap (VMEMMAP_START) in that smaller range?)

Thanks,
Michal

Andrey Ryabinin

Mar 13, 2023, 9:40:36 AM
to Michal Koutný, Dave Hansen, the arch/x86 maintainers, LKML, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasa...@googlegroups.com, Kees Cook, Thomas Garnier
Yes, with vaddr_end = KASAN_SHADOW_START it should work, and
kaslr_memory_enabled() could then be removed in favor of just kaslr_enabled().

> Thanks,
> Michal

Michal Koutný

May 31, 2023, 11:05:50 AM
to Andrey Ryabinin, Dave Hansen, the arch/x86 maintainers, LKML, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, kasa...@googlegroups.com, Kees Cook, Thomas Garnier
On Mon, Mar 13, 2023 at 02:40:33PM +0100, Andrey Ryabinin <ryabin...@gmail.com> wrote:
> Yes, with vaddr_end = KASAN_SHADOW_START it should work, and
> kaslr_memory_enabled() could then be removed in favor of just kaslr_enabled().

Thanks. FWIW, I've found the cautionary comment at vaddr_end from
commit 1dddd2512511 ("x86/kaslr: Fix the vaddr_end mess"), so I'm not
removing kaslr_memory_enabled() now.

Michal