The function deserialize_bitmap() calculates the reservation size using:
int sz = 1 << (order + PAGE_SHIFT);
If a corrupted KHO image provides an order >= 20 (on systems with 4KB
pages), the shift amount becomes >= 32, which exceeds the width of the
32-bit int; in practice this yields a zero-size memory reservation.
Furthermore, the physical address calculation:
phys_addr_t phys = elm->phys_start + (bit << (order + PAGE_SHIFT));
can also overflow and wrap around if the order is large. This allows a
corrupt KHO image to cause out-of-bounds updates to page->private of
arbitrary physical pages during early boot.
Fix this by changing 'sz' to 'unsigned long' and using checked add and
shift to safely calculate the shift amount, size, and physical address,
skipping malformed chunks. This allows preserving memory with an order
larger than MAX_PAGE_ORDER.
Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
Signed-off-by: Marco Elver <el...@google.com>
---
v2:
* Switch to unsigned long and use checked shift and add (Mike).
v1:
https://lore.kernel.org/all/20260214010013....@google.com/
---
kernel/liveupdate/kexec_handover.c | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)
diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index cc68a3692905..0d8417dcd3ff 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -19,6 +19,7 @@
 #include <linux/libfdt.h>
 #include <linux/list.h>
 #include <linux/memblock.h>
+#include <linux/overflow.h>
 #include <linux/page-isolation.h>
 #include <linux/unaligned.h>
 #include <linux/vmalloc.h>
@@ -461,15 +462,29 @@ static void __init deserialize_bitmap(unsigned int order,
 					      struct khoser_mem_bitmap_ptr *elm)
 {
 	struct kho_mem_phys_bits *bitmap = KHOSER_LOAD_PTR(elm->bitmap);
+	unsigned int shift;
 	unsigned long bit;
+	unsigned long sz;
+
+	if (check_add_overflow(order, PAGE_SHIFT, &shift) ||
+	    check_shl_overflow(1UL, shift, &sz)) {
+		pr_warn("invalid order %u for preserved bitmap\n", order);
+		return;
+	}
 
 	for_each_set_bit(bit, bitmap->preserve, PRESERVE_BITS) {
-		int sz = 1 << (order + PAGE_SHIFT);
-		phys_addr_t phys =
-			elm->phys_start + (bit << (order + PAGE_SHIFT));
-		struct page *page = phys_to_page(phys);
+		phys_addr_t offset, phys;
+		struct page *page;
 		union kho_page_info info;
 
+		if (check_shl_overflow((phys_addr_t)bit, shift, &offset) ||
+		    check_add_overflow(elm->phys_start, offset, &phys)) {
+			pr_warn("invalid phys layout for preserved bitmap\n");
+			return;
+		}
+
+		page = phys_to_page(phys);
+
 		memblock_reserve(phys, sz);
 		memblock_reserved_mark_noinit(phys, sz);
 		info.magic = KHO_PAGE_MAGIC;
--
2.53.0.1018.g2bb0e51243-goog