[syzbot] [block?] general protection fault in bio_add_page

syzbot

Mar 20, 2026, 6:44:32 PM
to ax...@kernel.dk, linux...@vger.kernel.org, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 8e42d2514a7e Add linux-next specific files for 20260318
git tree: linux-next
console output: https://syzkaller.appspot.com/x/log.txt?x=139b34ba580000
kernel config: https://syzkaller.appspot.com/x/.config?x=1da705b17f2649a3
dashboard link: https://syzkaller.appspot.com/bug?extid=ed8bc247f231c1a48e21
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=108d7352580000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/64c940773401/disk-8e42d251.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/aa7a94376665/vmlinux-8e42d251.xz
kernel image: https://storage.googleapis.com/syzbot-assets/5a7ab603c859/bzImage-8e42d251.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+ed8bc2...@syzkaller.appspotmail.com

Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN PTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 0 UID: 0 PID: 35 Comm: kworker/u8:2 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Workqueue: writeback wb_workfn (flush-8:0)
RIP: 0010:bvec_set_page include/linux/bvec.h:44 [inline]
RIP: 0010:__bio_add_page block/bio.c:992 [inline]
RIP: 0010:bio_add_page+0x462/0x6e0 block/bio.c:1048
Code: fd 48 8b 1b 48 8b 44 24 30 42 0f b6 04 30 84 c0 0f 85 c3 01 00 00 48 8b 14 24 0f b7 02 c1 e0 04 48 01 c3 48 89 d8 48 c1 e8 03 <42> 80 3c 30 00 74 0c 48 89 df e8 5f da a3 fd 48 8b 14 24 48 8b 44
RSP: 0000:ffffc90000ab6b80 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff88801eac3d00
RDX: ffff88802cd6ea78 RSI: 00000000ffffefff RDI: 0000000000000000
RBP: 0000000000000000 R08: ffffea0001b06147 R09: 1ffffd4000360c28
R10: dffffc0000000000 R11: fffff94000360c29 R12: 1ffff110059add4f
R13: 0000000000001000 R14: dffffc0000000000 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff888124de1000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f9bc55ed6b8 CR3: 00000000769ea000 CR4: 00000000003526f0
Call Trace:
<TASK>
bio_add_folio+0x64/0x90 block/bio.c:1084
io_submit_add_bh fs/ext4/page-io.c:465 [inline]
ext4_bio_write_folio+0x1446/0x1ea0 fs/ext4/page-io.c:603
mpage_map_and_submit_buffers fs/ext4/inode.c:2326 [inline]
mpage_map_and_submit_extent fs/ext4/inode.c:2516 [inline]
ext4_do_writepages+0x207e/0x46e0 fs/ext4/inode.c:2928
ext4_writepages+0x241/0x3b0 fs/ext4/inode.c:3022
do_writepages+0x32e/0x550 mm/page-writeback.c:2554
__writeback_single_inode+0x133/0x11a0 fs/fs-writeback.c:1750
writeback_sb_inodes+0x992/0x1a20 fs/fs-writeback.c:2042
__writeback_inodes_wb+0x111/0x240 fs/fs-writeback.c:2118
wb_writeback+0x46a/0xb70 fs/fs-writeback.c:2229
wb_check_start_all fs/fs-writeback.c:2355 [inline]
wb_do_writeback fs/fs-writeback.c:2381 [inline]
wb_workfn+0x95b/0xf50 fs/fs-writeback.c:2414
process_one_work+0x9ab/0x1780 kernel/workqueue.c:3288
process_scheduled_works kernel/workqueue.c:3379 [inline]
worker_thread+0xba8/0x11e0 kernel/workqueue.c:3465
kthread+0x388/0x470 kernel/kthread.c:436
ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:bvec_set_page include/linux/bvec.h:44 [inline]
RIP: 0010:__bio_add_page block/bio.c:992 [inline]
RIP: 0010:bio_add_page+0x462/0x6e0 block/bio.c:1048
Code: fd 48 8b 1b 48 8b 44 24 30 42 0f b6 04 30 84 c0 0f 85 c3 01 00 00 48 8b 14 24 0f b7 02 c1 e0 04 48 01 c3 48 89 d8 48 c1 e8 03 <42> 80 3c 30 00 74 0c 48 89 df e8 5f da a3 fd 48 8b 14 24 48 8b 44
RSP: 0000:ffffc90000ab6b80 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff88801eac3d00
RDX: ffff88802cd6ea78 RSI: 00000000ffffefff RDI: 0000000000000000
RBP: 0000000000000000 R08: ffffea0001b06147 R09: 1ffffd4000360c28
R10: dffffc0000000000 R11: fffff94000360c29 R12: 1ffff110059add4f
R13: 0000000000001000 R14: dffffc0000000000 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff888124ee1000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005577be56a168 CR3: 000000002552e000 CR4: 00000000003526f0
----------------
Code disassembly (best guess):
0: fd std
1: 48 8b 1b mov (%rbx),%rbx
4: 48 8b 44 24 30 mov 0x30(%rsp),%rax
9: 42 0f b6 04 30 movzbl (%rax,%r14,1),%eax
e: 84 c0 test %al,%al
10: 0f 85 c3 01 00 00 jne 0x1d9
16: 48 8b 14 24 mov (%rsp),%rdx
1a: 0f b7 02 movzwl (%rdx),%eax
1d: c1 e0 04 shl $0x4,%eax
20: 48 01 c3 add %rax,%rbx
23: 48 89 d8 mov %rbx,%rax
26: 48 c1 e8 03 shr $0x3,%rax
* 2a: 42 80 3c 30 00 cmpb $0x0,(%rax,%r14,1) <-- trapping instruction
2f: 74 0c je 0x3d
31: 48 89 df mov %rbx,%rdi
34: e8 5f da a3 fd call 0xfda3da98
39: 48 8b 14 24 mov (%rsp),%rdx
3d: 48 rex.W
3e: 8b .byte 0x8b
3f: 44 rex.R


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

4:36 AM
to linux-...@vger.kernel.org, syzkall...@googlegroups.com
For archival purposes, forwarding an incoming command email to
linux-...@vger.kernel.org, syzkall...@googlegroups.com.

***

Subject: [PATCH] ext4: fix NULL page dereference in ext4_bio_write_folio() with large folios
Author: karti...@gmail.com

#syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master


When blocksize < PAGE_SIZE, a folio can span multiple pages, each
containing multiple buffer heads. ext4_bio_write_folio() encrypted the
entire folio once with offset=0 via fscrypt_encrypt_pagecache_blocks(),
which always returns a single bounce page covering only the first
page of the folio.

When the write loop iterated over buffer heads beyond the first page,
bio_add_folio() computed nr = bh_offset(bh) / PAGE_SIZE, which is
non-zero for buffer heads on subsequent pages. folio_page(io_folio, nr)
then indexed out of bounds on the single-page bounce folio, returning a
NULL or garbage page pointer and causing a NULL pointer dereference in
bvec_set_page().

Fix this by moving the encryption inside the write loop and encrypting
per buffer head using the correct offset within the folio via
offset_in_folio(folio, bh->b_data). Each buffer head now gets its own
bounce page at index 0, so folio_page(io_folio, 0) is always valid.

The existing retry logic for -ENOMEM is preserved.

Reported-by: syzbot+ed8bc2...@syzkaller.appspotmail.com
Signed-off-by: Deepanshu Kartikey <Karti...@gmail.com>
---
fs/ext4/page-io.c | 87 +++++++++++++++++++++++------------------------
1 file changed, 43 insertions(+), 44 deletions(-)

diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index a8c95eee91b7..d7114171cd52 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -537,56 +537,55 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio,
* (e.g. holes) to be unnecessarily encrypted, but this is rare and
* can't happen in the common case of blocksize == PAGE_SIZE.
*/
- if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
- gfp_t gfp_flags = GFP_NOFS;
- unsigned int enc_bytes = round_up(len, i_blocksize(inode));
- struct page *bounce_page;
-
- /*
- * Since bounce page allocation uses a mempool, we can only use
- * a waiting mask (i.e. request guaranteed allocation) on the
- * first page of the bio. Otherwise it can deadlock.
- */
- if (io->io_bio)
- gfp_flags = GFP_NOWAIT;
- retry_encrypt:
- bounce_page = fscrypt_encrypt_pagecache_blocks(folio,
- enc_bytes, 0, gfp_flags);
- if (IS_ERR(bounce_page)) {
- ret = PTR_ERR(bounce_page);
- if (ret == -ENOMEM &&
- (io->io_bio || wbc->sync_mode == WB_SYNC_ALL)) {
- gfp_t new_gfp_flags = GFP_NOFS;
- if (io->io_bio)
- ext4_io_submit(io);
- else
- new_gfp_flags |= __GFP_NOFAIL;
- memalloc_retry_wait(gfp_flags);
- gfp_flags = new_gfp_flags;
- goto retry_encrypt;
- }
-
- printk_ratelimited(KERN_ERR "%s: ret = %d\n", __func__, ret);
- folio_redirty_for_writepage(wbc, folio);
- do {
- if (buffer_async_write(bh)) {
- clear_buffer_async_write(bh);
- set_buffer_dirty(bh);
- }
- bh = bh->b_this_page;
- } while (bh != head);
-
- return ret;
- }
- io_folio = page_folio(bounce_page);
- }
-
__folio_start_writeback(folio, keep_towrite);

/* Now submit buffers to write */
do {
if (!buffer_async_write(bh))
continue;
+ if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
+ gfp_t gfp_flags = GFP_NOFS;
+ struct page *bounce_page;
+ /*
+ * Since bounce page allocation uses a mempool, we can
+ * only use a waiting mask (i.e. request guaranteed
+ * allocation) on the first page of the bio.
+ * Otherwise it can deadlock.
+ */
+ if (io->io_bio)
+ gfp_flags = GFP_NOWAIT;
+ retry_encrypt:
+ bounce_page = fscrypt_encrypt_pagecache_blocks(folio,
+ bh->b_size,
+ offset_in_folio(folio, bh->b_data),
+ gfp_flags);
+ if (IS_ERR(bounce_page)) {
+ ret = PTR_ERR(bounce_page);
+ if (ret == -ENOMEM &&
+ (io->io_bio || wbc->sync_mode == WB_SYNC_ALL)) {
+ gfp_t new_gfp_flags = GFP_NOFS;
+ if (io->io_bio)
+ ext4_io_submit(io);
+ else
+ new_gfp_flags |= __GFP_NOFAIL;
+ memalloc_retry_wait(gfp_flags);
+ gfp_flags = new_gfp_flags;
+ goto retry_encrypt;
+ }
+ printk_ratelimited(KERN_ERR "%s: ret = %d\n",
+ __func__, ret);
+ folio_redirty_for_writepage(wbc, folio);
+ do {
+ if (buffer_async_write(bh)) {
+ clear_buffer_async_write(bh);
+ set_buffer_dirty(bh);
+ }
+ bh = bh->b_this_page;
+ } while (bh != head);
+ return ret;
+ }
+ io_folio = page_folio(bounce_page);
+ }
io_submit_add_bh(io, inode, folio, io_folio, bh);
} while ((bh = bh->b_this_page) != head);

--
2.43.0

syzbot

6:42 AM
to karti...@gmail.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
SYZFAIL: failed to recv rpc

SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=-1 (errno 104: Connection reset by peer)


Tested on:

commit: a0c83177 Merge tag 'drm-fixes-2026-03-21' of https://g..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=16ae01d6580000
kernel config: https://syzkaller.appspot.com/x/.config?x=a583012a914ba435
dashboard link: https://syzkaller.appspot.com/bug?extid=ed8bc247f231c1a48e21
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=151d4cba580000

syzbot

8:15 AM
to linux-...@vger.kernel.org, syzkall...@googlegroups.com
For archival purposes, forwarding an incoming command email to
linux-...@vger.kernel.org, syzkall...@googlegroups.com.

***

Subject: [PATCH] ext4: fix general protection fault in bio_add_page for encrypted large folios

When writing back an encrypted file, ext4_bio_write_folio() encrypts the
folio into a single-page bounce buffer and passes it as io_folio to
io_submit_add_bh(). The offset passed to bio_add_folio() was always
bh_offset(bh), which is relative to the original folio.

For a large folio this offset can exceed PAGE_SIZE. bio_add_folio() calls
folio_page(io_folio, off >> PAGE_SHIFT) which computes &folio->page + N.
For a single-page bounce folio with N >= 1 this is out-of-bounds, causing
a general protection fault caught by KASAN:

KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
RIP: 0010:bvec_set_page include/linux/bvec.h:44 [inline]
RIP: 0010:bio_add_page+0x462/0x6e0 block/bio.c:1048

Fix this by computing io_off at the call site. For the non-encrypted path
io_folio == folio so bh_offset(bh) is used unchanged. For the encrypted
path the bounce page is always a single PAGE_SIZE page, so the offset is
taken modulo PAGE_SIZE to map it correctly into the bounce page.

Using a hardcoded 0 would be wrong for sub-page block sizes (e.g.
1024-byte blocks), where multiple buffer heads exist within one page at
offsets 0, 1024, 2048, and 3072. bh_offset(bh) % PAGE_SIZE handles all
block sizes correctly.

Reported-by: syzbot+ed8bc2...@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ed8bc247f231c1a48e21
Signed-off-by: Deepanshu Kartikey <Karti...@gmail.com>
---
fs/ext4/page-io.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index a8c95eee91b7..006b2f5173de 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -438,7 +438,8 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
struct inode *inode,
struct folio *folio,
struct folio *io_folio,
- struct buffer_head *bh)
+ struct buffer_head *bh,
+ size_t io_off)
{
if (io->io_bio && (bh->b_blocknr != io->io_next_block ||
!fscrypt_mergeable_bio_bh(io->io_bio, bh))) {
@@ -449,7 +450,7 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
io_submit_init_bio(io, bh);
io->io_bio->bi_write_hint = inode->i_write_hint;
}
- if (!bio_add_folio(io->io_bio, io_folio, bh->b_size, bh_offset(bh)))
+ if (!bio_add_folio(io->io_bio, io_folio, bh->b_size, io_off))
goto submit_and_retry;
wbc_account_cgroup_owner(io->io_wbc, folio, bh->b_size);
io->io_next_block++;
@@ -585,9 +586,20 @@ int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio,

/* Now submit buffers to write */
do {
+ size_t io_off;
+
if (!buffer_async_write(bh))
continue;
- io_submit_add_bh(io, inode, folio, io_folio, bh);
+ /*
+ * When io_folio is a single-page bounce buffer (fscrypt),
+ * reduce the offset modulo PAGE_SIZE so it maps into that page.
+ * Using 0 would break sub-page block sizes (e.g. 1024-byte
+ * blocks) where multiple bh offsets exist within one page.
+ */
+ io_off = (io_folio == folio)
+ ? bh_offset(bh)
+ : bh_offset(bh) % PAGE_SIZE;
+ io_submit_add_bh(io, inode, folio, io_folio, bh, io_off);
} while ((bh = bh->b_this_page) != head);

return 0;
--
2.43.0

syzbot

8:42 AM
to karti...@gmail.com, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot has tested the proposed patch but the reproducer is still triggering an issue:
SYZFAIL: failed to recv rpc

SYZFAIL: failed to recv rpc
fd=3 want=4 recv=0 n=-1 (errno 104: Connection reset by peer)


Tested on:

commit: a0c83177 Merge tag 'drm-fixes-2026-03-21' of https://g..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13e40b52580000
kernel config: https://syzkaller.appspot.com/x/.config?x=a583012a914ba435
dashboard link: https://syzkaller.appspot.com/bug?extid=ed8bc247f231c1a48e21
compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
patch: https://syzkaller.appspot.com/x/patch.diff?x=11019e02580000
