[syzbot] [hfs?] kernel BUG in hfsplus_bnode_unhash


syzbot

Mar 6, 2023, 12:44:39 PM
to linux-...@vger.kernel.org, linux-...@vger.kernel.org, syzkall...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 0988a0ea7919 Merge tag 'for-v6.3-part2' of git://git.kerne..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=17ee96e4c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=ff98a3b3c1aed3ab
dashboard link: https://syzkaller.appspot.com/bug?extid=65f654e7ff6234bf771f
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+65f654...@syzkaller.appspotmail.com

------------[ cut here ]------------
kernel BUG at fs/hfsplus/bnode.c:461!
invalid opcode: 0000 [#1] PREEMPT SMP KASAN
CPU: 0 PID: 100 Comm: kswapd0 Not tainted 6.2.0-syzkaller-13467-g0988a0ea7919 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
RIP: 0010:hfsplus_bnode_unhash+0xf7/0x1e0 fs/hfsplus/bnode.c:461
Code: 2b e8 fd e7 34 ff 48 8d 6b 20 48 89 e8 48 c1 e8 03 42 80 3c 28 00 0f 85 b3 00 00 00 48 8b 5b 20 48 85 db 75 d2 e8 d9 e7 34 ff <0f> 0b e8 d2 e7 34 ff e8 cd e7 34 ff 49 8d 7c 24 20 48 b8 00 00 00
RSP: 0018:ffffc90001587348 EFLAGS: 00010293
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: ffff8880160f0100 RSI: ffffffff824f32d7 RDI: ffff88802a310120
RBP: ffff88802a310000 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffff888029372a00
R13: 0000000000000000 R14: ffffea00009f81c0 R15: 0000000000001000
FS: 0000000000000000(0000) GS:ffff88802ca00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f7fd8638528 CR3: 0000000071e75000 CR4: 0000000000150ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
hfsplus_release_folio+0x285/0x5f0 fs/hfsplus/inode.c:102
filemap_release_folio+0x13f/0x1b0 mm/filemap.c:4121
shrink_folio_list+0x1fe3/0x3c80 mm/vmscan.c:2010
evict_folios+0x794/0x1940 mm/vmscan.c:5121
try_to_shrink_lruvec+0x82c/0xb90 mm/vmscan.c:5297
shrink_one+0x46b/0x810 mm/vmscan.c:5341
shrink_many mm/vmscan.c:5394 [inline]
lru_gen_shrink_node mm/vmscan.c:5511 [inline]
shrink_node+0x2064/0x35f0 mm/vmscan.c:6459
kswapd_shrink_node mm/vmscan.c:7262 [inline]
balance_pgdat+0xa02/0x1ac0 mm/vmscan.c:7452
kswapd+0x70b/0x1000 mm/vmscan.c:7712
kthread+0x2e8/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:hfsplus_bnode_unhash+0xf7/0x1e0 fs/hfsplus/bnode.c:461
Code: 2b e8 fd e7 34 ff 48 8d 6b 20 48 89 e8 48 c1 e8 03 42 80 3c 28 00 0f 85 b3 00 00 00 48 8b 5b 20 48 85 db 75 d2 e8 d9 e7 34 ff <0f> 0b e8 d2 e7 34 ff e8 cd e7 34 ff 49 8d 7c 24 20 48 b8 00 00 00
RSP: 0018:ffffc90001587348 EFLAGS: 00010293
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: ffff8880160f0100 RSI: ffffffff824f32d7 RDI: ffff88802a310120
RBP: ffff88802a310000 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: ffff888029372a00
R13: 0000000000000000 R14: ffffea00009f81c0 R15: 0000000000001000
FS: 0000000000000000(0000) GS:ffff88802ca00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f7fd8638528 CR3: 0000000071e75000 CR4: 0000000000150ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
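
For context, fs/hfsplus/bnode.c:461 in this kernel falls inside the bnode unhash path, which walks the tree's node hash chain looking for the node being removed and BUG()s if the walk falls off the end without finding it. Below is a minimal userspace sketch of that invariant, not the kernel source: the struct fields, hash size, and helper names are illustrative assumptions, and the assert() merely stands in for the kernel's BUG_ON.

/*
 * Minimal userspace sketch (NOT the kernel code) of the invariant the
 * hfsplus bnode unhash path appears to enforce: the node being removed
 * must still be present on its hash-bucket chain.
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct bnode {
	int this;                 /* node number; hypothetical field name */
	struct bnode *next_hash;  /* next node in the same hash bucket */
};

#define NODE_HASH_SIZE 8
static struct bnode *node_hash[NODE_HASH_SIZE];

static unsigned bnode_hash(int num)
{
	return (unsigned)num % NODE_HASH_SIZE;
}

static void bnode_unhash(struct bnode *node)
{
	struct bnode **p;

	/* Walk the bucket chain until we find the node being removed. */
	for (p = &node_hash[bnode_hash(node->this)];
	     *p && *p != node; p = &(*p)->next_hash)
		;
	/* Stand-in for the kernel's BUG_ON: the node must be on the chain. */
	assert(*p != NULL);
	*p = node->next_hash;
}

int main(void)
{
	struct bnode a = { .this = 3, .next_hash = NULL };

	node_hash[bnode_hash(a.this)] = &a;
	bnode_unhash(&a);       /* fine: node is on its hash chain */
	/* bnode_unhash(&a); */ /* would trip the assert, like the BUG */
	printf("unhash ok\n");
	return 0;
}

If the commented-out second bnode_unhash() call in main() is enabled, the assert fires in the same way the kernel BUG triggers when the chain walk fails to find the node.
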


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Sep 5, 2023, 5:25:55 AM
to syzkall...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while; there is no reproducer and no activity.