[v5.15] INFO: task hung in new_device_store (2)


syzbot

Jan 21, 2024, 10:38:23 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: ddcaf4999061 Linux 5.15.147
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1550306be80000
kernel config: https://syzkaller.appspot.com/x/.config?x=8c65db3d25098c3c
dashboard link: https://syzkaller.appspot.com/bug?extid=82a4cdf2dc261bfac8dc
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/33608a58aade/disk-ddcaf499.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/f475952349f9/vmlinux-ddcaf499.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0875b18fba29/bzImage-ddcaf499.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+82a4cd...@syzkaller.appspotmail.com

INFO: task syz-executor.1:11240 blocked for more than 143 seconds.
Not tainted 5.15.147-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1 state:D stack:21624 pid:11240 ppid: 1 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5030 [inline]
__schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
schedule+0x11b/0x1f0 kernel/sched/core.c:6459
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6518
__mutex_lock_common+0xe34/0x25a0 kernel/locking/mutex.c:669
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
new_device_store+0x1b0/0x910 drivers/net/netdevsim/bus.c:295
kernfs_fop_write_iter+0x3a2/0x4f0 fs/kernfs/file.c:296
call_write_iter include/linux/fs.h:2146 [inline]
new_sync_write fs/read_write.c:507 [inline]
vfs_write+0xacf/0xe50 fs/read_write.c:594
ksys_write+0x1a2/0x2c0 fs/read_write.c:647
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f14eaeeeaef
RSP: 002b:00007ffe5407a820 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f14eaeeeaef
RDX: 0000000000000003 RSI: 00007ffe5407a870 RDI: 0000000000000005
RBP: 00007f14eaf3c045 R08: 0000000000000000 R09: 00007ffe5407a677
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 00007ffe5407a870 R14: 00007f14ebb47620 R15: 0000000000000003
</TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8c91f220 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
5 locks held by kworker/u4:1/144:
#0: ffff888011dcd138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900010bfd20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8d9ce810 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:558
#3: ffff88805bacd3e8 (&wg->device_update_lock){+.+.}-{3:3}, at: wg_destruct+0x10c/0x2f0 drivers/net/wireguard/device.c:233
#4: ffffffff8c9237e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
#4: ffffffff8c9237e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x350/0x740 kernel/rcu/tree_exp.h:845
3 locks held by kworker/1:1H/263:
#0: ffff8880b9b39718 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
#1: ffff8880b9b27848 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x53d/0x810 kernel/sched/psi.c:891
#2: ffffffff8c91f220 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:269
2 locks held by kworker/0:2/1065:
#0: ffff888011c72538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90005017d20 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
5 locks held by kworker/u4:3/1125:
#0: ffff8880143ac938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc900052afd20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffff88814b5ee0e0 (&type->s_umount_key#32){++++}-{3:3}, at: trylock_super+0x1b/0xf0 fs/super.c:418
#3: ffff88814b5f0bd8 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: ext4_writepages+0x1f6/0x3d10 fs/ext4/inode.c:2677
#4: ffff88814b5f2990 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x12b9/0x1570 fs/jbd2/transaction.c:462
1 lock held by dhcpcd/3176:
#0: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5627
2 locks held by getty/3264:
#0: ffff88802498d098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc9000250b2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6af/0x1db0 drivers/tty/n_tty.c:2158
2 locks held by kworker/u4:6/3681:
3 locks held by kworker/1:13/6353:
#0: ffff888011c70d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc90005057d20 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xa/0x50 net/core/link_watch.c:251
6 locks held by syz-executor.3/11231:
#0: ffff88807e42e460 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x29a/0xe50 fs/read_write.c:590
#1: ffff88805b7c0888 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1e7/0x4f0 fs/kernfs/file.c:287
#2: ffff888147d45748 (kn->active#233){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20b/0x4f0 fs/kernfs/file.c:288
#3: ffffffff8d356c68 (nsim_bus_dev_list_lock){+.+.}-{3:3}, at: del_device_store+0xf1/0x470 drivers/net/netdevsim/bus.c:344
#4: ffff888051b66178 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:760 [inline]
#4: ffff888051b66178 (&dev->mutex){....}-{3:3}, at: __device_driver_lock drivers/base/dd.c:1044 [inline]
#4: ffff888051b66178 (&dev->mutex){....}-{3:3}, at: device_release_driver_internal+0xc2/0x7f0 drivers/base/dd.c:1259
#5: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: unregister_nexthop_notifier+0x78/0x210 net/ipv4/nexthop.c:3622
4 locks held by syz-executor.1/11240:
#0: ffff88807e42e460 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x29a/0xe50 fs/read_write.c:590
#1: ffff88801d509088 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1e7/0x4f0 fs/kernfs/file.c:287
#2: ffff888147d45660 (kn->active#234){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20b/0x4f0 fs/kernfs/file.c:288
#3: ffffffff8d356c68 (nsim_bus_dev_list_lock){+.+.}-{3:3}, at: new_device_store+0x1b0/0x910 drivers/net/netdevsim/bus.c:295
1 lock held by dhcpcd/11566:
#0: ffff88805d0c2610 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#0: ffff88805d0c2610 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:648 [inline]
#0: ffff88805d0c2610 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x98/0x230 net/socket.c:1336
2 locks held by syz-executor.4/11969:
1 lock held by syz-executor.2/11972:
2 locks held by syz-executor.4/11978:
1 lock held by syz-executor.2/11981:
1 lock held by syz-executor.4/11984:
1 lock held by syz-executor.2/11992:
1 lock held by syz-executor.4/11994:
1 lock held by syz-executor.4/12004:
1 lock held by syz-executor.2/12007:
1 lock held by syz-executor.4/12009:
1 lock held by syz-executor.2/12012:
3 locks held by syz-executor.4/12016:
1 lock held by syz-executor.2/12019:
1 lock held by syz-executor.4/12024:
1 lock held by syz-executor.2/12025:
3 locks held by syz-executor.4/12031:
1 lock held by syz-executor.2/12032:
2 locks held by syz-executor.2/12037:
2 locks held by syz-executor.4/12039:
1 lock held by syz-executor.2/12042:
1 lock held by syz-executor.4/12046:
3 locks held by syz-executor.4/12053:
1 lock held by syz-executor.2/12055:
2 locks held by syz-executor.4/12059:
3 locks held by syz-executor.2/12062:
2 locks held by syz-executor.4/12067:
1 lock held by syz-executor.2/12070:
1 lock held by syz-executor.4/12072:
1 lock held by syz-executor.2/12148:
4 locks held by syz-executor.4/12150:
1 lock held by syz-executor.2/12154:
1 lock held by syz-executor.4/12159:
1 lock held by syz-executor.3/12160:
#0: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5627
1 lock held by syz-executor.4/12166:
1 lock held by syz-executor.2/12167:
1 lock held by syz-executor.1/12174:
#0: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5627
1 lock held by syz-executor.2/12177:
1 lock held by syz-executor.5/12178:
#0: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5627
1 lock held by syz-executor.0/12181:
#0: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff8d9da3c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5627
1 lock held by syz-executor.4/12185:
4 locks held by syz-executor.2/12190:
#0: ffff88807c3d8ff0 (&hdev->req_lock){+.+.}-{3:3}, at: hci_dev_do_close+0x63/0x1070 net/bluetooth/hci_core.c:1737
#1: ffff88807c3d8078 (&hdev->lock){+.+.}-{3:3}, at: hci_dev_do_close+0x431/0x1070 net/bluetooth/hci_core.c:1782
#2: ffffffff8db25028 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:1523 [inline]
#2: ffffffff8db25028 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_hash_flush+0xb8/0x220 net/bluetooth/hci_conn.c:1624
#3: ffffffff8c9237e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
#3: ffffffff8c9237e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x350/0x740 kernel/rcu/tree_exp.h:845
1 lock held by syz-executor.4/12192:
1 lock held by syz-executor.2/12196:
1 lock held by syz-executor.4/12199:
1 lock held by syz-executor.4/12205:
1 lock held by syz-executor.2/12206:
1 lock held by syz-executor.2/12210:
1 lock held by syz-executor.4/12214:
1 lock held by syz-executor.2/12218:
1 lock held by syz-executor.4/12223:
1 lock held by syz-executor.2/12224:
1 lock held by syz-executor.4/12229:
1 lock held by syz-executor.2/12231:
#0: ffff88814b5f0bd8 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: ext4_writepages+0x1f6/0x3d10 fs/ext4/inode.c:2677
1 lock held by syz-executor.4/12241:
1 lock held by syz-executor.4/12249:

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted 5.15.147-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
watchdog+0xe72/0xeb0 kernel/hung_task.c:295
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 12086 Comm: kworker/u4:9 Not tainted 5.15.147-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
Workqueue: bat_events batadv_nc_worker
RIP: 0010:mark_lock+0x4d/0x340 kernel/locking/lockdep.c:4563
Code: 27 49 8d 5f 20 48 89 d8 48 c1 e8 03 42 0f b6 04 28 84 c0 0f 85 81 02 00 00 31 ed f6 43 02 03 40 0f 94 c5 83 f5 09 eb 0b 89 d5 <83> fa 20 0f 83 d0 02 00 00 41 be 01 00 00 00 89 e9 41 d3 e6 49 8d
RSP: 0018:ffffc90004737978 EFLAGS: 00000093
RAX: 0000000000040734 RBX: ffff888024a1a8d8 RCX: ffffffff8162ee48
RDX: 0000000000000006 RSI: ffff888024a1a8d8 RDI: ffff888024a19dc0
RBP: 0000000000000006 R08: dffffc0000000000 R09: fffffbfff1f79e35
R10: 0000000000000000 R11: dffffc0000000001 R12: ffff888024a1a8f8
R13: dffffc0000000000 R14: ffff888024a1a8a8 R15: ffff888024a1a8d8
FS: 0000000000000000(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000556153ec9680 CR3: 000000000c68e000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
mark_held_locks kernel/locking/lockdep.c:4193 [inline]
__trace_hardirqs_on_caller kernel/locking/lockdep.c:4219 [inline]
lockdep_hardirqs_on_prepare+0x3a0/0x7a0 kernel/locking/lockdep.c:4278
trace_hardirqs_on+0x67/0x80 kernel/trace/trace_preemptirq.c:49
__local_bh_enable_ip+0x164/0x1f0 kernel/softirq.c:388
spin_unlock_bh include/linux/spinlock.h:408 [inline]
batadv_nc_purge_paths+0x30e/0x3b0 net/batman-adv/network-coding.c:475
batadv_nc_worker+0x30b/0x5b0 net/batman-adv/network-coding.c:726
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Apr 30, 2024, 11:38:16 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes did not happen for a while, no reproducer and no activity.