[v5.15] INFO: task hung in do_unlinkat


syzbot

May 12, 2024, 11:06:29
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 284087d4f7d5 Linux 5.15.158
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1089895c980000
kernel config: https://syzkaller.appspot.com/x/.config?x=b0dd54e4b5171ebc
dashboard link: https://syzkaller.appspot.com/bug?extid=940726887a4cd3e4b9fc
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/c2e33c1db6bf/disk-284087d4.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/d9f77284af1d/vmlinux-284087d4.xz
kernel image: https://storage.googleapis.com/syzbot-assets/a600323dd149/bzImage-284087d4.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+940726...@syzkaller.appspotmail.com

INFO: task syz-executor.4:5049 blocked for more than 143 seconds.
Not tainted 5.15.158-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:25752 pid: 5049 ppid: 4787 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
schedule+0x11b/0x1f0 kernel/sched/core.c:6459
rwsem_down_write_slowpath+0xf0c/0x16a0 kernel/locking/rwsem.c:1165
inode_lock_nested include/linux/fs.h:824 [inline]
do_unlinkat+0x266/0x950 fs/namei.c:4331
__do_sys_unlink fs/namei.c:4396 [inline]
__se_sys_unlink fs/namei.c:4394 [inline]
__x64_sys_unlink+0x45/0x50 fs/namei.c:4394
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f03860eed69
RSP: 002b:00007f03846400c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 00007f038621d050 RCX: 00007f03860eed69
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000740
RBP: 00007f038613b49e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f038621d050 R15: 00007fff46463088
</TASK>
INFO: task syz-executor.4:5051 blocked for more than 143 seconds.
Not tainted 5.15.158-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:27232 pid: 5051 ppid: 4787 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
schedule+0x11b/0x1f0 kernel/sched/core.c:6459
rwsem_down_write_slowpath+0xf0c/0x16a0 kernel/locking/rwsem.c:1165
inode_lock include/linux/fs.h:789 [inline]
open_last_lookups fs/namei.c:3529 [inline]
path_openat+0x824/0x2f20 fs/namei.c:3739
do_filp_open+0x21c/0x460 fs/namei.c:3769
do_sys_openat2+0x13b/0x500 fs/open.c:1253
do_sys_open fs/open.c:1269 [inline]
__do_sys_creat fs/open.c:1343 [inline]
__se_sys_creat fs/open.c:1337 [inline]
__x64_sys_creat+0x11f/0x160 fs/open.c:1337
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f03860eed69
RSP: 002b:00007f038461f0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000055
RAX: ffffffffffffffda RBX: 00007f038621d120 RCX: 00007f03860eed69
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000040
RBP: 00007f038613b49e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f038621d120 R15: 00007fff46463088
</TASK>
INFO: task syz-executor.4:5056 blocked for more than 144 seconds.
Not tainted 5.15.158-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:25272 pid: 5056 ppid: 4787 flags:0x00004206
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
schedule+0x11b/0x1f0 kernel/sched/core.c:6459
rwsem_down_write_slowpath+0xf0c/0x16a0 kernel/locking/rwsem.c:1165
inode_lock_nested include/linux/fs.h:824 [inline]
do_unlinkat+0x266/0x950 fs/namei.c:4331
__do_sys_unlink fs/namei.c:4396 [inline]
__se_sys_unlink fs/namei.c:4394 [inline]
__x64_sys_unlink+0x45/0x50 fs/namei.c:4394
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f03860eed69
RSP: 002b:00007f03845fe0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 00007f038621d1f0 RCX: 00007f03860eed69
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000200
RBP: 00007f038613b49e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f038621d1f0 R15: 00007fff46463088
</TASK>
INFO: task syz-executor.4:5059 blocked for more than 145 seconds.
Not tainted 5.15.158-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:28216 pid: 5059 ppid: 4787 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
schedule+0x11b/0x1f0 kernel/sched/core.c:6459
rwsem_down_write_slowpath+0xf0c/0x16a0 kernel/locking/rwsem.c:1165
inode_lock include/linux/fs.h:789 [inline]
open_last_lookups fs/namei.c:3529 [inline]
path_openat+0x824/0x2f20 fs/namei.c:3739
do_filp_open+0x21c/0x460 fs/namei.c:3769
do_sys_openat2+0x13b/0x500 fs/open.c:1253
do_sys_open fs/open.c:1269 [inline]
__do_sys_openat fs/open.c:1285 [inline]
__se_sys_openat fs/open.c:1280 [inline]
__x64_sys_openat+0x243/0x290 fs/open.c:1280
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f03860eed69
RSP: 002b:00007f03845dd0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f038621d2c0 RCX: 00007f03860eed69
RDX: 00000000000026e1 RSI: 0000000020000280 RDI: ffffffffffffff9c
RBP: 00007f038613b49e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f038621d2c0 R15: 00007fff46463088
</TASK>
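
For context (not part of the bot's report): all four blocked tasks above are serialized on the same parent directory i_rwsem, which appears as lock #1 (&type->i_mutex_dir_key#20) in the per-task lock dumps below. Two tasks are in unlink() via inode_lock_nested() in do_unlinkat(), and two are in creat()/openat() via inode_lock() in open_last_lookups(). There is no reproducer for this report yet; the sketch below is only a hypothetical userspace pattern that exercises the same directory lock from several threads, with a made-up mount point and file name.

/*
 * Illustration only -- NOT the missing reproducer.
 * The path /mnt/testdir/file is hypothetical.
 * Build with: cc -O2 -pthread churn.c -o churn
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *churn(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000; i++) {
		/* open(O_CREAT)/creat() take the parent directory's i_rwsem
		 * exclusively for the final component (open_last_lookups). */
		int fd = creat("/mnt/testdir/file", 0600);
		if (fd >= 0)
			close(fd);
		/* unlink() takes the same directory lock exclusively
		 * (inode_lock_nested in do_unlinkat). */
		unlink("/mnt/testdir/file");
	}
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, churn, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	puts("done");
	return 0;
}

On a healthy filesystem this pattern only causes brief contention; the >143 s stalls reported above mean the directory lock's current owner never released it, which a sketch like this does not by itself reproduce.
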

Showing all locks held in the system:
4 locks held by kworker/0:0/7:
#0: ffff888023c53d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90000cc7d20 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8d9e7b08 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xcc/0x1720 net/ipv6/addrconf.c:4112
#3: ffff8880b9a3a358 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
2 locks held by kworker/0:1/13:
#0: ffff888011c72138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90000d27d20 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
1 lock held by khungtaskd/27:
#0: ffffffff8c91fae0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
8 locks held by kworker/u4:4/2561:
#0: ffff888011dcd138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc9000b247d20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8d9dbf50 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:558
#3: ffffffff8d9e7b08 (rtnl_mutex){+.+.}-{3:3}, at: ieee80211_unregister_hw+0x4d/0x220 net/mac80211/main.c:1383
#4: ffff888054068628 (&rdev->wiphy.mtx){+.+.}-{3:3}, at: wiphy_lock include/net/cfg80211.h:5314 [inline]
#4: ffff888054068628 (&rdev->wiphy.mtx){+.+.}-{3:3}, at: cfg80211_netdev_notifier_call+0x4d4/0x1210 net/wireless/core.c:1413
#5: ffff888022050d40 (&wdev->mtx){+.+.}-{3:3}, at: wdev_lock net/wireless/core.h:220 [inline]
#5: ffff888022050d40 (&wdev->mtx){+.+.}-{3:3}, at: cfg80211_leave net/wireless/core.c:1253 [inline]
#5: ffff888022050d40 (&wdev->mtx){+.+.}-{3:3}, at: cfg80211_netdev_notifier_call+0x4e2/0x1210 net/wireless/core.c:1414
#6: ffff8880540697b0 (&local->sta_mtx){+.+.}-{3:3}, at: __sta_info_flush+0x183/0x4d0 net/mac80211/sta_info.c:1209
#7: ffffffff8c9240a8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#7: ffffffff8c9240a8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x280/0x740 kernel/rcu/tree_exp.h:845
1 lock held by dhcpcd/3175:
#0: ffffffff8d9e7b08 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9e7b08 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5626
2 locks held by getty/3264:
#0: ffff88802413f098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc90002bab2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6af/0x1db0 drivers/tty/n_tty.c:2158
2 locks held by kworker/1:3/3506:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90002c97d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
3 locks held by kworker/0:4/3562:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90003e17d20 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8c923fb0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x9c/0x4e0 kernel/rcu/tree.c:4039
2 locks held by kworker/1:4/3563:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900030c7d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/1:5/3565:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90003e37d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/1:6/3566:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900030f7d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
3 locks held by kworker/1:7/3596:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90003ed7d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8c9240a8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
#2: ffffffff8c9240a8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x350/0x740 kernel/rcu/tree_exp.h:845
3 locks held by kworker/u4:7/4342:
1 lock held by syz-executor.2/4530:
#0: ffff8880745b80e0 (&type->s_umount_key#60/1){+.+.}-{3:3}, at: alloc_super+0x210/0x940 fs/super.c:229
1 lock held by syz-executor.4/4659:
#0: ffff888079b8c0e0 (&type->s_umount_key#60/1){+.+.}-{3:3}, at: alloc_super+0x210/0x940 fs/super.c:229
1 lock held by syz-executor.0/4814:
#0: ffff88805148c0e0 (&type->s_umount_key#60/1){+.+.}-{3:3}, at: alloc_super+0x210/0x940 fs/super.c:229
3 locks held by syz-executor.4/5038:
2 locks held by syz-executor.4/5049:
#0: ffff888074e88460 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:377
#1: ffff88805b728188 (&type->i_mutex_dir_key#20/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:824 [inline]
#1: ffff88805b728188 (&type->i_mutex_dir_key#20/1){+.+.}-{3:3}, at: do_unlinkat+0x266/0x950 fs/namei.c:4331
2 locks held by syz-executor.4/5051:
#0: ffff888074e88460 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:377
#1: ffff88805b728188 (&type->i_mutex_dir_key#20){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:789 [inline]
#1: ffff88805b728188 (&type->i_mutex_dir_key#20){+.+.}-{3:3}, at: open_last_lookups fs/namei.c:3529 [inline]
#1: ffff88805b728188 (&type->i_mutex_dir_key#20){+.+.}-{3:3}, at: path_openat+0x824/0x2f20 fs/namei.c:3739
2 locks held by syz-executor.4/5056:
#0: ffff888074e88460 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:377
#1: ffff88805b728188 (&type->i_mutex_dir_key#20/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:824 [inline]
#1: ffff88805b728188 (&type->i_mutex_dir_key#20/1){+.+.}-{3:3}, at: do_unlinkat+0x266/0x950 fs/namei.c:4331
2 locks held by syz-executor.4/5059:
#0: ffff888074e88460 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:377
#1: ffff88805b728188 (&type->i_mutex_dir_key#20){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:789 [inline]
#1: ffff88805b728188 (&type->i_mutex_dir_key#20){+.+.}-{3:3}, at: open_last_lookups fs/namei.c:3529 [inline]
#1: ffff88805b728188 (&type->i_mutex_dir_key#20){+.+.}-{3:3}, at: path_openat+0x824/0x2f20 fs/namei.c:3739
2 locks held by kworker/1:8/5826:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90003cafd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/1:9/5853:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900039e7d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
4 locks held by syz-executor.1/6173:
#0: ffff88801af3a0e0 (&type->s_umount_key#95){+.+.}-{3:3}, at: deactivate_super+0xa9/0xe0 fs/super.c:365
#1: ffffffff8ce01808 (uuid_mutex){+.+.}-{3:3}, at: btrfs_close_devices+0xbc/0x5c0 fs/btrfs/volumes.c:1231
#2: ffff88801b2a8918 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0xfb/0x790 block/bdev.c:912
#3: ffff8881475a3468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0xa9/0xbe0 drivers/block/loop.c:1365
2 locks held by syz-executor.0/6339:
1 lock held by syz-executor.2/6698:
#0: ffffffff8d9e7b08 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9e7b08 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5626
2 locks held by syz-executor.4/6773:
#0: ffff888071a40210 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:789 [inline]
#0: ffff888071a40210 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:648 [inline]
#0: ffff888071a40210 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x98/0x230 net/socket.c:1336
#1: ffff888079b42120 (sk_lock-AF_CAN){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1668 [inline]
#1: ffff888079b42120 (sk_lock-AF_CAN){+.+.}-{0:0}, at: bcm_release+0x1e0/0x860 net/can/bcm.c:1522
2 locks held by kworker/1:10/6775:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900047e7d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/1:11/6776:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc9000485fd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/1:12/6777:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900049cfd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/1:13/6778:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900049dfd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/1:14/6779:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900049efd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/1:15/6780:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900049ffd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/1:16/6781:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90004a0fd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/1:17/6782:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90004a1fd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted 5.15.158-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2d0 lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
watchdog+0xe72/0xeb0 kernel/hung_task.c:295
kthread+0x3f6/0x4f0 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:300
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 160 Comm: kworker/u4:3 Not tainted 5.15.158-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Workqueue: bat_events batadv_nc_worker
RIP: 0010:check_kcov_mode kernel/kcov.c:174 [inline]
RIP: 0010:write_comp_data kernel/kcov.c:218 [inline]
RIP: 0010:__sanitizer_cov_trace_const_cmp4+0x30/0x80 kernel/kcov.c:284
Code: 8b 15 94 0f 82 7e 65 8b 05 95 0f 82 7e a9 00 01 ff 00 74 10 a9 00 01 00 00 74 5b 83 ba 34 16 00 00 00 74 52 8b 82 10 16 00 00 <83> f8 03 75 47 48 8b 8a 18 16 00 00 44 8b 92 14 16 00 00 49 c1 e2
RSP: 0018:ffffc90001bc7a88 EFLAGS: 00000046
RAX: 0000000000000000 RBX: 0000000080000000 RCX: ffff8880188e8000
RDX: ffff8880188e8000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90001bc7b50 R08: ffffffff8186dcf0 R09: ffffed100e72f8a1
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 1ffff92000378f5c R14: ffffc90001bc7ae0 R15: 0000000000000201
FS: 0000000000000000(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000200015c0 CR3: 0000000050fca000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
trace_hardirqs_on+0x30/0x80 kernel/trace/trace_preemptirq.c:43
__local_bh_enable_ip+0x164/0x1f0 kernel/softirq.c:388
spin_unlock_bh include/linux/spinlock.h:408 [inline]
batadv_nc_purge_paths+0x30e/0x3b0 net/batman-adv/network-coding.c:475
batadv_nc_worker+0x30b/0x5b0 net/batman-adv/network-coding.c:726
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:334
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:300
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup