Andrew Morton
Jun 28, 2024, 10:30:59 PM
to mm-co...@vger.kernel.org, vba...@suse.cz, sv...@linux.ibm.com, ros...@goodmis.org, roman.g...@linux.dev, rien...@google.com, pen...@kernel.org, mhir...@kernel.org, mark.r...@arm.com, kasa...@googlegroups.com, iamjoon...@lge.com, h...@linux.ibm.com, g...@linux.ibm.com, gli...@google.com, el...@google.com, dvy...@google.com, c...@linux.com, bornt...@linux.ibm.com, agor...@linux.ibm.com, 42.h...@gmail.com, i...@linux.ibm.com, ak...@linux-foundation.org
The quilt patch titled
Subject: mm: slub: disable KMSAN when checking the padding bytes
has been removed from the -mm tree. Its filename was
mm-slub-disable-kmsan-when-checking-the-padding-bytes.patch
This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: Ilya Leoshkevich <i...@linux.ibm.com>
Subject: mm: slub: disable KMSAN when checking the padding bytes
Date: Fri, 21 Jun 2024 13:35:02 +0200
Even though the KMSAN warnings generated by memchr_inv() are suppressed by
metadata_access_enable(), its return value may still be poisoned.

The reason is that the last iteration of memchr_inv() returns
`*start != value ? start : NULL`, where *start is poisoned.  Because of
this, somewhat counterintuitively, the shadow value computed by
visitSelectInst() is equal to `(uintptr_t)start`.
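
For illustration, the pattern boils down to the following minimal sketch
(hypothetical; the real loop is the check_bytes8() helper in lib/string.c,
and the shadow propagation happens in KMSAN's compiler instrumentation):

/*
 * Minimal sketch of the last iteration of memchr_inv().
 */
static void *last_iteration(const u8 *start, u8 value)
{
        /*
         * *start reads poisoned slab padding.  metadata_access_enable()
         * only suppresses the KMSAN *report* for this load; the loaded
         * value stays poisoned.  visitSelectInst() then propagates
         * shadow through the ?: select, so the returned pointer is
         * itself treated as poisoned by later uses.
         */
        return *start != value ? (void *)start : NULL;
}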
Since the intention behind wrapping memchr_inv() in
metadata_access_enable() is to touch poisoned metadata without triggering
KMSAN, one possible fix is to unpoison memchr_inv()'s return value.
However, that approach is too fragile.  So simply disable the KMSAN checks
in the respective functions.
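
For comparison, the rejected unpoisoning approach would have looked roughly
like this at each call site (a hypothetical sketch, not part of the patch;
metadata_access_enable()/metadata_access_disable() are the existing
mm/slub.c helpers and kmsan_unpoison_memory() is the KMSAN API from
<linux/kmsan-checks.h>):

        /* Hypothetical: unpoison the local holding memchr_inv()'s result. */
        metadata_access_enable();
        fault = memchr_inv(start, value, bytes);
        kmsan_unpoison_memory(&fault, sizeof(fault));
        metadata_access_disable();

Every present and future caller inside the debug checks would need the same
annotation, which is what makes this fragile compared to annotating the two
checking functions once.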
Link: https://lkml.kernel.org/r/20240621113706...@linux.ibm.com
Signed-off-by: Ilya Leoshkevich <i...@linux.ibm.com>
Reviewed-by: Alexander Potapenko <gli...@google.com>
Cc: Alexander Gordeev <agor...@linux.ibm.com>
Cc: Christian Borntraeger <bornt...@linux.ibm.com>
Cc: Christoph Lameter <c...@linux.com>
Cc: David Rientjes <rien...@google.com>
Cc: Dmitry Vyukov <dvy...@google.com>
Cc: Heiko Carstens <h...@linux.ibm.com>
Cc: Hyeonggon Yoo <42.h...@gmail.com>
Cc: Joonsoo Kim <iamjoon...@lge.com>
Cc: <kasa...@googlegroups.com>
Cc: Marco Elver <el...@google.com>
Cc: Mark Rutland <mark.r...@arm.com>
Cc: Masami Hiramatsu (Google) <mhir...@kernel.org>
Cc: Pekka Enberg <pen...@kernel.org>
Cc: Roman Gushchin <roman.g...@linux.dev>
Cc: Steven Rostedt (Google) <ros...@goodmis.org>
Cc: Sven Schnelle <sv...@linux.ibm.com>
Cc: Vasily Gorbik <g...@linux.ibm.com>
Cc: Vlastimil Babka <vba...@suse.cz>
Signed-off-by: Andrew Morton <ak...@linux-foundation.org>
---
mm/slub.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
--- a/mm/slub.c~mm-slub-disable-kmsan-when-checking-the-padding-bytes
+++ a/mm/slub.c
@@ -1176,9 +1176,16 @@ static void restore_bytes(struct kmem_ca
 	memset(from, data, to - from);
 }
 
-static int check_bytes_and_report(struct kmem_cache *s, struct slab *slab,
-				  u8 *object, char *what,
-				  u8 *start, unsigned int value, unsigned int bytes)
+#ifdef CONFIG_KMSAN
+#define pad_check_attributes noinline __no_kmsan_checks
+#else
+#define pad_check_attributes
+#endif
+
+static pad_check_attributes int
+check_bytes_and_report(struct kmem_cache *s, struct slab *slab,
+		       u8 *object, char *what,
+		       u8 *start, unsigned int value, unsigned int bytes)
 {
 	u8 *fault;
 	u8 *end;
@@ -1270,7 +1277,8 @@ static int check_pad_bytes(struct kmem_c
 }
 
 /* Check the pad bytes at the end of a slab page */
-static void slab_pad_check(struct kmem_cache *s, struct slab *slab)
+static pad_check_attributes void
+slab_pad_check(struct kmem_cache *s, struct slab *slab)
 {
 	u8 *start;
 	u8 *fault;
_
Patches currently in -mm which might be from i...@linux.ibm.com are