[v5.15] INFO: task hung in genl_rcv_msg

syzbot

Aug 1, 2023, 4:26:57 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 09996673e313 Linux 5.15.123
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=17e4d7ada80000
kernel config: https://syzkaller.appspot.com/x/.config?x=f9d4431ce682ac7e
dashboard link: https://syzkaller.appspot.com/bug?extid=0af3f48b401c065069cb
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/e665cef1f596/disk-09996673.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/c21eeab4b038/vmlinux-09996673.xz
kernel image: https://storage.googleapis.com/syzbot-assets/15e2b5a7aa32/bzImage-09996673.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0af3f4...@syzkaller.appspotmail.com

INFO: task syz-executor.4:7600 blocked for more than 143 seconds.
Not tainted 5.15.123-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:21656 pid: 7600 ppid: 1 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5026 [inline]
__schedule+0x12c4/0x4590 kernel/sched/core.c:6372
schedule+0x11b/0x1f0 kernel/sched/core.c:6455
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6514
__mutex_lock_common+0xe34/0x25a0 kernel/locking/mutex.c:669
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
genl_lock net/netlink/genetlink.c:33 [inline]
genl_rcv_msg+0x124/0x14a0 net/netlink/genetlink.c:790
netlink_rcv_skb+0x1cf/0x410 net/netlink/af_netlink.c:2505
genl_rcv+0x24/0x40 net/netlink/genetlink.c:803
netlink_unicast_kernel net/netlink/af_netlink.c:1330 [inline]
netlink_unicast+0x7b6/0x980 net/netlink/af_netlink.c:1356
netlink_sendmsg+0xa30/0xd60 net/netlink/af_netlink.c:1924
sock_sendmsg_nosec net/socket.c:704 [inline]
sock_sendmsg net/socket.c:724 [inline]
__sys_sendto+0x564/0x720 net/socket.c:2039
__do_sys_sendto net/socket.c:2051 [inline]
__se_sys_sendto net/socket.c:2047 [inline]
__x64_sys_sendto+0xda/0xf0 net/socket.c:2047
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fd907bf37dc
RSP: 002b:00007ffd5e827f60 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007fd908839620 RCX: 00007fd907bf37dc
RDX: 0000000000000020 RSI: 00007fd908839670 RDI: 0000000000000005
RBP: 0000000000000000 R08: 00007ffd5e827fb4 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000001
R13: 00007ffd5e828008 R14: 00007fd908839670 R15: 0000000000000000
</TASK>
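
For context on the blocked path: in v5.15, genl_rcv() takes the global rw_semaphore cb_lock and then dispatches each message through genl_rcv_msg(), which takes the single global genl_mutex for any family that does not set parallel_ops. The sketch below paraphrases net/netlink/genetlink.c; it is simplified for illustration, not the verbatim kernel code.

/* Simplified sketch of the v5.15 generic-netlink receive path
 * (net/netlink/genetlink.c); paraphrased, not verbatim. */
static DEFINE_MUTEX(genl_mutex);	/* one global mutex for all genl families */
static DECLARE_RWSEM(cb_lock);

static void genl_lock(void)
{
	mutex_lock(&genl_mutex);	/* task 7600 is blocked here */
}

static int genl_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh,
			struct netlink_ext_ack *extack)
{
	const struct genl_family *family;
	int err;

	family = genl_family_find_byid(nlh->nlmsg_type);
	if (!family)
		return -ENOENT;

	if (!family->parallel_ops)
		genl_lock();		/* lock #1 in the lockdep dump below */
	err = genl_family_rcv_msg(family, skb, nlh, extack);
	if (!family->parallel_ops)
		genl_unlock();
	return err;
}

static void genl_rcv(struct sk_buff *skb)
{
	down_read(&cb_lock);			/* lock #0 in the lockdep dump */
	netlink_rcv_skb(skb, &genl_rcv_msg);	/* calls genl_rcv_msg per message */
	up_read(&cb_lock);
}

Because genl_mutex is a single system-wide mutex, any operation that sleeps while holding it stalls every other non-parallel generic-netlink request, which is why several syz-executor tasks are queued behind it in the lock dump below.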

Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff8c91e920 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by getty/3265:
#0: ffff88814bc52098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
#1: ffffc900022a32e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6af/0x1da0 drivers/tty/n_tty.c:2147
3 locks held by kworker/u4:7/3726:
3 locks held by syz-executor.4/7529:
2 locks held by syz-executor.4/7600:
#0: ffffffff8da40070 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
#1: ffffffff8da3ff28 (genl_mutex){+.+.}-{3:3}, at: genl_lock net/netlink/genetlink.c:33 [inline]
#1: ffffffff8da3ff28 (genl_mutex){+.+.}-{3:3}, at: genl_rcv_msg+0x124/0x14a0 net/netlink/genetlink.c:790
2 locks held by syz-executor.4/7618:
#0: ffffffff8da40070 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
#1: ffffffff8da3ff28 (genl_mutex){+.+.}-{3:3}, at: genl_lock net/netlink/genetlink.c:33 [inline]
#1: ffffffff8da3ff28 (genl_mutex){+.+.}-{3:3}, at: genl_rcv_msg+0x124/0x14a0 net/netlink/genetlink.c:790
2 locks held by syz-executor.4/7627:
#0: ffffffff8da40070 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
#1: ffffffff8da3ff28 (genl_mutex){+.+.}-{3:3}, at: genl_lock net/netlink/genetlink.c:33 [inline]
#1: ffffffff8da3ff28 (genl_mutex){+.+.}-{3:3}, at: genl_rcv_msg+0x124/0x14a0 net/netlink/genetlink.c:790
2 locks held by dhcpcd/7639:
#0: ffff88802b4ec120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1661 [inline]
#0: ffff88802b4ec120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x33/0xd50 net/packet/af_packet.c:3160
#1: ffffffff8c922e68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#1: ffffffff8c922e68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x280/0x740 kernel/rcu/tree_exp.h:842
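
A note on reading the list above: lockdep records a mutex in a task's held-locks list as soon as the acquisition attempt starts, before it succeeds, so the waiters at PIDs 7600, 7618, and 7627 all appear to "hold" genl_mutex even though only one task can actually own it at a time. The real owner is presumably among the tasks whose individual locks were not printed, such as syz-executor.4/7529.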

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted 5.15.123-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
watchdog+0xe72/0xeb0 kernel/hung_task.c:295
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 3726 Comm: kworker/u4:7 Not tainted 5.15.123-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2023
Workqueue: bat_events batadv_nc_worker
RIP: 0010:lockdep_enabled kernel/locking/lockdep.c:91 [inline]
RIP: 0010:lock_acquire+0x134/0x4f0 kernel/locking/lockdep.c:5598
Code: 65 8b 05 8f 0a a0 7e 85 c0 0f 85 88 01 00 00 65 48 8b 1d 3f 01 a0 7e 48 81 c3 ec 0a 00 00 48 89 d8 48 c1 e8 03 42 0f b6 04 28 <84> c0 0f 85 c4 02 00 00 83 3b 00 0f 85 5c 01 00 00 4c 8d bc 24 80
RSP: 0018:ffffc90004f37a80 EFLAGS: 00000a03
RAX: 0000000000000000 RBX: ffff88801ad1c66c RCX: ffffffff81626e2c
RDX: 0000000000000000 RSI: ffffffff8ad86b60 RDI: ffffffff8ad86b20
RBP: ffffc90004f37bd8 R08: dffffc0000000000 R09: fffffbfff1bc7f0e
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff920009e6f58
R13: dffffc0000000000 R14: 0000000000000000 R15: ffff888014fa6100
FS: 0000000000000000(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000564ce2f05600 CR3: 000000007c484000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:269
rcu_read_lock include/linux/rcupdate.h:696 [inline]
batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:412 [inline]
batadv_nc_worker+0xc1/0x5b0 net/batman-adv/network-coding.c:723
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>
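
The CPU 0 backtrace above is not the hung task itself; it shows what the other CPU was doing when the watchdog fired: the batman-adv network-coding worker walking its originator hash under rcu_read_lock(). A simplified sketch of that loop, paraphrased from net/batman-adv/network-coding.c in v5.15:

/* Paraphrased sketch of batadv_nc_purge_orig_hash() (v5.15); not verbatim. */
static void batadv_nc_purge_orig_hash(struct batadv_priv *bat_priv)
{
	struct batadv_hashtable *hash = bat_priv->orig_hash;
	struct batadv_orig_node *orig_node;
	struct hlist_head *head;
	u32 i;

	if (!hash)
		return;

	/* Walk every bucket; each bucket is traversed inside an RCU read
	 * section, which is the rcu_read_lock() frame in the backtrace. */
	for (i = 0; i < hash->size; i++) {
		head = &hash->table[i];

		rcu_read_lock();
		hlist_for_each_entry_rcu(orig_node, head, hash_entry)
			batadv_nc_purge_orig(bat_priv, orig_node,
					     batadv_nc_to_purge_nc_node);
		rcu_read_unlock();
	}
}

The worker may simply be the task that happened to be running when the NMI arrived; the trace alone does not show it waiting on genl_mutex.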


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Nov 9, 2023, 3:26:15 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have been observed for a while, there is no reproducer, and there has been no recent activity.