[hfs?] WARNING: ODEBUG bug in hfsplus_fill_super (2)

syzbot

Mar 22, 2023, 11:47:41 PM
to syzkaller-upst...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: fe15c26ee26e Linux 6.3-rc1
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
console output: https://syzkaller.appspot.com/x/log.txt?x=14778d9ec80000
kernel config: https://syzkaller.appspot.com/x/.config?x=7573cbcd881a88c9
dashboard link: https://syzkaller.appspot.com/bug?extid=25a6bded055faacf0dc6
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: arm64
CC: [linux-...@vger.kernel.org linux-...@vger.kernel.org]

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/89d41abd07bd/disk-fe15c26e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/fa75f5030ade/vmlinux-fe15c26e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/590d0f5903ee/Image-fe15c26e.gz.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+25a6bd...@syzkaller.appspotmail.com

------------[ cut here ]------------
ODEBUG: free active (active state 0) object: 00000000bc20810d object type: timer_list hint: arch_atomic64_fetch_andnot arch/arm64/include/asm/atomic.h:85 [inline]
ODEBUG: free active (active state 0) object: 00000000bc20810d object type: timer_list hint: arch_atomic_long_fetch_andnot include/linux/atomic/atomic-long.h:305 [inline]
ODEBUG: free active (active state 0) object: 00000000bc20810d object type: timer_list hint: arch_test_and_clear_bit include/asm-generic/bitops/atomic.h:53 [inline]
ODEBUG: free active (active state 0) object: 00000000bc20810d object type: timer_list hint: test_and_clear_bit include/asm-generic/bitops/instrumented-atomic.h:86 [inline]
ODEBUG: free active (active state 0) object: 00000000bc20810d object type: timer_list hint: delayed_sync_fs+0x0/0xe8 fs/hfsplus/super.c:459
WARNING: CPU: 1 PID: 10890 at lib/debugobjects.c:512 debug_print_object lib/debugobjects.c:509 [inline]
WARNING: CPU: 1 PID: 10890 at lib/debugobjects.c:512 __debug_check_no_obj_freed lib/debugobjects.c:996 [inline]
WARNING: CPU: 1 PID: 10890 at lib/debugobjects.c:512 debug_check_no_obj_freed+0x410/0x528 lib/debugobjects.c:1027
Modules linked in:
CPU: 1 PID: 10890 Comm: syz-executor.2 Not tainted 6.3.0-rc1-syzkaller-gfe15c26ee26e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
pstate: 60400005 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : debug_print_object lib/debugobjects.c:509 [inline]
pc : __debug_check_no_obj_freed lib/debugobjects.c:996 [inline]
pc : debug_check_no_obj_freed+0x410/0x528 lib/debugobjects.c:1027
lr : debug_print_object lib/debugobjects.c:509 [inline]
lr : __debug_check_no_obj_freed lib/debugobjects.c:996 [inline]
lr : debug_check_no_obj_freed+0x410/0x528 lib/debugobjects.c:1027
sp : ffff80002c9f7450
x29: ffff80002c9f74a0 x28: ffff8000125dee40 x27: dfff800000000000
x26: ffff0000d778aa38 x25: 0000000000000000 x24: ffff80001a117b20
x23: ffff8000125dee40 x22: ffff0000d778aa38 x21: ffff80001a117b18
x20: ffff800012a9a9d8 x19: ffff0000d778a800 x18: 1fffe000368995b6
x17: ffff800015cdd000 x16: ffff8000083154ec x15: 0000000000000000
x14: 1ffff00002b9c0b2 x13: dfff800000000000 x12: 0000000000000003
x11: ff8080000ab89838 x10: 0000000000000003 x9 : e536ed068812a800
x8 : e536ed068812a800 x7 : ffff80000828dc14 x6 : 0000000000000000
x5 : 0000000000000001 x4 : 0000000000000001 x3 : 0000000000000000
x2 : 0000000000000006 x1 : ffff8000125c08e0 x0 : ffff80019e89e000
Call trace:
debug_print_object lib/debugobjects.c:509 [inline]
__debug_check_no_obj_freed lib/debugobjects.c:996 [inline]
debug_check_no_obj_freed+0x410/0x528 lib/debugobjects.c:1027
slab_free_hook mm/slub.c:1756 [inline]
slab_free_freelist_hook mm/slub.c:1807 [inline]
slab_free mm/slub.c:3787 [inline]
__kmem_cache_free+0x258/0x4b4 mm/slub.c:3800
kfree+0x104/0x228 mm/slab_common.c:1019
hfsplus_fill_super+0xbc0/0x166c fs/hfsplus/super.c:612
mount_bdev+0x26c/0x368 fs/super.c:1371
hfsplus_mount+0x44/0x58 fs/hfsplus/super.c:641
legacy_get_tree+0xd4/0x16c fs/fs_context.c:610
vfs_get_tree+0x90/0x274 fs/super.c:1501
do_new_mount+0x25c/0x8c8 fs/namespace.c:3042
path_mount+0x590/0xe20 fs/namespace.c:3372
do_mount fs/namespace.c:3385 [inline]
__do_sys_mount fs/namespace.c:3594 [inline]
__se_sys_mount fs/namespace.c:3571 [inline]
__arm64_sys_mount+0x45c/0x594 fs/namespace.c:3571
__invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
invoke_syscall+0x98/0x2c0 arch/arm64/kernel/syscall.c:52
el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
do_el0_svc+0x64/0x198 arch/arm64/kernel/syscall.c:193
el0_svc+0x58/0x168 arch/arm64/kernel/entry-common.c:637
el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:591
irq event stamp: 5664
hardirqs last enabled at (5663): [<ffff80000828dcb4>] raw_spin_rq_unlock_irq kernel/sched/sched.h:1378 [inline]
hardirqs last enabled at (5663): [<ffff80000828dcb4>] finish_lock_switch+0xbc/0x1e4 kernel/sched/core.c:5062
hardirqs last disabled at (5664): [<ffff80001245e098>] el1_dbg+0x24/0x80 arch/arm64/kernel/entry-common.c:405
softirqs last enabled at (4134): [<ffff800008020ec0>] softirq_handle_end kernel/softirq.c:414 [inline]
softirqs last enabled at (4134): [<ffff800008020ec0>] __do_softirq+0xd64/0xfbc kernel/softirq.c:600
softirqs last disabled at (4125): [<ffff80000802b524>] ____do_softirq+0x14/0x20 arch/arm64/kernel/irq.c:80
---[ end trace 0000000000000000 ]---


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Jun 16, 2023, 11:45:48 PM
to syzkaller-upst...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have been observed for a while, there is no reproducer, and there has been no activity.