[PATCH v2] kho: use checked arithmetic in deserialize_bitmap()


Marco Elver

Mar 19, 2026, 5:06:09 PM
to el...@google.com, Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav, ke...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, kasa...@googlegroups.com
The function deserialize_bitmap() calculates the reservation size using:

int sz = 1 << (order + PAGE_SHIFT);

If a corrupted KHO image provides an order >= 20 (on systems with 4KB
pages), the shift amount becomes >= 32, which overflows the 32-bit
integer. This results in a zero-size memory reservation.

Furthermore, the physical address calculation:

phys_addr_t phys = elm->phys_start + (bit << (order + PAGE_SHIFT));

can also overflow and wrap around if the order is large. This allows a
corrupt KHO image to cause out-of-bounds updates to page->private of
arbitrary physical pages during early boot.

Fix this by changing 'sz' to 'unsigned long' and using checked add and
shift to safely calculate the shift amount, size, and physical address,
skipping malformed chunks. This allows preserving memory with an order
larger than MAX_PAGE_ORDER.

Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
Signed-off-by: Marco Elver <el...@google.com>
---
v2:
* Switch to unsigned long and use checked shift and add (Mike).

v1: https://lore.kernel.org/all/20260214010013....@google.com/
---
kernel/liveupdate/kexec_handover.c | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index cc68a3692905..0d8417dcd3ff 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -19,6 +19,7 @@
#include <linux/libfdt.h>
#include <linux/list.h>
#include <linux/memblock.h>
+#include <linux/overflow.h>
#include <linux/page-isolation.h>
#include <linux/unaligned.h>
#include <linux/vmalloc.h>
@@ -461,15 +462,29 @@ static void __init deserialize_bitmap(unsigned int order,
struct khoser_mem_bitmap_ptr *elm)
{
struct kho_mem_phys_bits *bitmap = KHOSER_LOAD_PTR(elm->bitmap);
+ unsigned int shift;
unsigned long bit;
+ unsigned long sz;
+
+ if (check_add_overflow(order, PAGE_SHIFT, &shift) ||
+ check_shl_overflow(1UL, shift, &sz)) {
+ pr_warn("invalid order %u for preserved bitmap\n", order);
+ return;
+ }

for_each_set_bit(bit, bitmap->preserve, PRESERVE_BITS) {
- int sz = 1 << (order + PAGE_SHIFT);
- phys_addr_t phys =
- elm->phys_start + (bit << (order + PAGE_SHIFT));
- struct page *page = phys_to_page(phys);
+ phys_addr_t offset, phys;
+ struct page *page;
union kho_page_info info;

+ if (check_shl_overflow((phys_addr_t)bit, shift, &offset) ||
+ check_add_overflow(elm->phys_start, offset, &phys)) {
+ pr_warn("invalid phys layout for preserved bitmap\n");
+ return;
+ }
+
+ page = phys_to_page(phys);
+
memblock_reserve(phys, sz);
memblock_reserved_mark_noinit(phys, sz);
info.magic = KHO_PAGE_MAGIC;
--
2.53.0.1018.g2bb0e51243-goog

Andrew Morton

Mar 19, 2026, 10:37:42 PM
to Marco Elver, Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav, ke...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, kasa...@googlegroups.com
On Thu, 19 Mar 2026 22:03:53 +0100 Marco Elver <el...@google.com> wrote:

> The function deserialize_bitmap() calculates the reservation size using:
>
> int sz = 1 << (order + PAGE_SHIFT);
>
> If a corrupted KHO image provides an order >= 20 (on systems with 4KB
> pages), the shift amount becomes >= 32, which overflows the 32-bit
> integer. This results in a zero-size memory reservation.
>
> Furthermore, the physical address calculation:
>
> phys_addr_t phys = elm->phys_start + (bit << (order + PAGE_SHIFT));
>
> can also overflow and wrap around if the order is large. This allows a
> corrupt KHO image to cause out-of-bounds updates to page->private of
> arbitrary physical pages during early boot.
>
> Fix this by changing 'sz' to 'unsigned long' and using checked add and
> shift to safely calculate the shift amount, size, and physical address,
> skipping malformed chunks. This allows preserving memory with an order
> larger than MAX_PAGE_ORDER.

AI review asked questions:
https://sashiko.dev/#/patchset/20260319210528.1694513-2-elver%40google.com

Pratyush Yadav

Mar 20, 2026, 4:56:40 AM
to Marco Elver, Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav, ke...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, kasa...@googlegroups.com
Hi Marco,

On Thu, Mar 19 2026, Marco Elver wrote:

> The function deserialize_bitmap() calculates the reservation size using:
>
> int sz = 1 << (order + PAGE_SHIFT);
>
> If a corrupted KHO image provides an order >= 20 (on systems with 4KB
> pages), the shift amount becomes >= 32, which overflows the 32-bit
> integer. This results in a zero-size memory reservation.
>
> Furthermore, the physical address calculation:
>
> phys_addr_t phys = elm->phys_start + (bit << (order + PAGE_SHIFT));
>
> can also overflow and wrap around if the order is large. This allows a
> corrupt KHO image to cause out-of-bounds updates to page->private of
> arbitrary physical pages during early boot.
>
> Fix this by changing 'sz' to 'unsigned long' and using checked add and
> shift to safely calculate the shift amount, size, and physical address,
> skipping malformed chunks. This allows preserving memory with an order
> larger than MAX_PAGE_ORDER.
>
> Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
> Signed-off-by: Marco Elver <el...@google.com>

deserialize_bitmap() is replaced by a radix tree in this series [0].
Can you please redo these changes on top of that?

Also, a couple comments below.

[0] https://lore.kernel.org/linux-mm/20260206021428.3...@google.com/
Isn't it simpler to just check if (order + PAGE_SHIFT) > 63? KHO is only
designed to work on 64-bit platforms, so we already know the maximum
possible shift. Is there any reason to call the proper overflow
functions? The only reason I ask is that I find the open-coded check
easier to read.

>
> for_each_set_bit(bit, bitmap->preserve, PRESERVE_BITS) {
> - int sz = 1 << (order + PAGE_SHIFT);
> - phys_addr_t phys =
> - elm->phys_start + (bit << (order + PAGE_SHIFT));
> - struct page *page = phys_to_page(phys);
> + phys_addr_t offset, phys;
> + struct page *page;
> union kho_page_info info;
>
> + if (check_shl_overflow((phys_addr_t)bit, shift, &offset) ||
> + check_add_overflow(elm->phys_start, offset, &phys)) {
> + pr_warn("invalid phys layout for preserved bitmap\n");
> + return;
> + }
> +
> + page = phys_to_page(phys);
> +
> memblock_reserve(phys, sz);
> memblock_reserved_mark_noinit(phys, sz);
> info.magic = KHO_PAGE_MAGIC;

--
Regards,
Pratyush Yadav

Pratyush Yadav

Mar 20, 2026, 5:34:12 AM
to Andrew Morton, Marco Elver, Alexander Graf, Mike Rapoport, Pasha Tatashin, Pratyush Yadav, ke...@lists.infradead.org, linu...@kvack.org, linux-...@vger.kernel.org, kasa...@googlegroups.com
I have also been keeping an eye on sashiko for kho/live update patches.
I think it is missing some context for KHO/live update, like the fact
that only 64-bit platforms are supported, that FDT data doesn't need to
care about endianness, and so on. I think we need a set of subsystem
prompts for KHO and live update. I am experimenting with a local
deployment of sashiko and will see if I can get a basic set of prompts
working.

For the LLM review of this patch, I think the only relevant comment is
the one about checking whether elm->bitmap is NULL.

For the others:

1. The restore path does (should) support order larger than
MAX_PAGE_ORDER. I sent this series [0] to make that work properly.
2. KHO is not supported on 32-bit.
3. We just have to trust the previous kernel. There is no sane way of
preventing attacks if the previous kernel is malicious. For example,
it might as well give us valid memory addresses, but change the
contents there. So all of these checks only defend against buggy
kernels, not against malicious ones.

[0] https://lore.kernel.org/linux-mm/20260309123410.3...@kernel.org/T/#u

--
Regards,
Pratyush Yadav