[v6.1] INFO: task hung in dev_ethtool

syzbot

Feb 11, 2024, 4:03:24 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: f1bb70486c9c Linux 6.1.77
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=166e0648180000
kernel config: https://syzkaller.appspot.com/x/.config?x=39447811cb133e7e
dashboard link: https://syzkaller.appspot.com/bug?extid=74ee0aafe8a25598a978
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/f93cb7e9dad2/disk-f1bb7048.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/22703d1d86ee/vmlinux-f1bb7048.xz
kernel image: https://storage.googleapis.com/syzbot-assets/4129725af309/bzImage-f1bb7048.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+74ee0a...@syzkaller.appspotmail.com

INFO: task syz-executor.4:17938 blocked for more than 143 seconds.
Not tainted 6.1.77-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:28920 pid:17938 ppid:13513 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6693
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b9/0xd80 kernel/locking/mutex.c:747
dev_ethtool+0x1f4/0x1540 net/ethtool/ioctl.c:3041
dev_ioctl+0x273/0xf70 net/core/dev_ioctl.c:524
sock_do_ioctl+0x26b/0x450 net/socket.c:1218
sock_ioctl+0x47f/0x770 net/socket.c:1321
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl+0xf1/0x160 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f0f2967dda9
RSP: 002b:00007f0f2a46c0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f0f297abf80 RCX: 00007f0f2967dda9
RDX: 0000000020000040 RSI: 0000000000008946 RDI: 0000000000000003
RBP: 00007f0f296ca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f0f297abf80 R15: 00007ffd4b5348d8
</TASK>
INFO: task syz-executor.1:17939 blocked for more than 143 seconds.
Not tainted 6.1.77-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1 state:D stack:28920 pid:17939 ppid:3578 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6693
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b9/0xd80 kernel/locking/mutex.c:747
dev_ethtool+0x1f4/0x1540 net/ethtool/ioctl.c:3041
dev_ioctl+0x273/0xf70 net/core/dev_ioctl.c:524
sock_do_ioctl+0x26b/0x450 net/socket.c:1218
sock_ioctl+0x47f/0x770 net/socket.c:1321
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:870 [inline]
__se_sys_ioctl+0xf1/0x160 fs/ioctl.c:856
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f00f287dda9
RSP: 002b:00007f00f36520c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f00f29abf80 RCX: 00007f00f287dda9
RDX: 0000000020000040 RSI: 0000000000008946 RDI: 0000000000000003
RBP: 00007f00f28ca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f00f29abf80 R15: 00007ffdcdbf5b68
</TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8d12a910 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:516
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8d12b110 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:516
1 lock held by khungtaskd/28:
#0: ffffffff8d12a740 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:318 [inline]
#0: ffffffff8d12a740 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:759 [inline]
#0: ffffffff8d12a740 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6494
1 lock held by dhcpcd/3218:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: devinet_ioctl+0x2a5/0x1b20 net/ipv4/devinet.c:1070
2 locks held by getty/3301:
#0: ffff88814b8aa098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2188
3 locks held by kworker/1:11/5266:
#0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc9000aaffd20 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:75
3 locks held by kworker/1:19/15088:
#0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc900179d7d20 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xa/0x50 net/core/link_watch.c:263
3 locks held by kworker/0:21/13753:
#0: ffff888027bce938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc9000323fd20 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4639
3 locks held by kworker/1:21/15462:
#0: ffff888027bce938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc9000afefd20 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4639
5 locks held by kworker/u4:13/19563:
#0: ffff888012616938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc9000afb7d20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e288890 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:563
#3: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: default_device_exit_batch+0xe5/0x9d0 net/core/dev.c:11377
#4: ffffffff8d12fc00 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x48/0x5f0 kernel/rcu/tree.c:4018
1 lock held by syz-executor.4/17938:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: dev_ethtool+0x1f4/0x1540 net/ethtool/ioctl.c:3041
1 lock held by syz-executor.1/17939:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: dev_ethtool+0x1f4/0x1540 net/ethtool/ioctl.c:3041
1 lock held by syz-executor.0/17949:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.2/17958:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.1/18343:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.4/18354:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.0/18378:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.3/18714:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: __tun_chr_ioctl+0x465/0x2430 drivers/net/tun.c:3098
1 lock held by syz-executor.3/18716:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: __tun_chr_ioctl+0x465/0x2430 drivers/net/tun.c:3098
1 lock held by syz-executor.3/18717:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: __tun_chr_ioctl+0x465/0x2430 drivers/net/tun.c:3098
1 lock held by syz-executor.3/18718:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: tun_set_queue drivers/net/tun.c:2969 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: __tun_chr_ioctl+0x3f2/0x2430 drivers/net/tun.c:3091
1 lock held by syz-executor.3/18719:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: __tun_chr_ioctl+0x465/0x2430 drivers/net/tun.c:3098
1 lock held by syz-executor.2/18722:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.3/18726:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.1/18731:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.4/18735:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.0/18739:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.2/18742:
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e294ae8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.1.77-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xf88/0xfd0 kernel/hung_task.c:377
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 5506 Comm: kworker/u4:16 Not tainted 6.1.77-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: krdsd rds_connect_worker
RIP: 0010:unwind_next_frame+0x551/0x2220 arch/x86/kernel/unwind_orc.c:461
Code: 0f b6 04 38 84 c0 0f 85 1d 17 00 00 c6 03 01 48 c7 c5 a0 6a e9 8a 4c 8d 6d 04 48 8d 5d 05 4d 89 ef 49 c1 ef 03 41 0f b6 04 3f <84> c0 0f 85 92 16 00 00 48 89 d8 48 c1 e8 03 0f b6 04 38 84 c0 0f
RSP: 0018:ffffc900150d73c0 EFLAGS: 00000a06
RAX: 0000000000000000 RBX: ffffffff8ef0ee19 RCX: ffffffff8e84d74c
RDX: ffffffff8ef0ee14 RSI: ffffffff81783773 RDI: dffffc0000000000
RBP: ffffffff8ef0ee14 R08: 0000000000000003 R09: ffffc900150d7590
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000000
R13: ffffffff8ef0ee18 R14: ffffffff8e84d748 R15: 1ffffffff1de1dc3
FS: 0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fb3cb6d5000 CR3: 000000000ce8e000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
arch_stack_walk+0x10d/0x140 arch/x86/kernel/stacktrace.c:25
stack_trace_save+0x113/0x1c0 kernel/stacktrace.c:122
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4b/0x70 mm/kasan/common.c:52
__kasan_slab_alloc+0x65/0x70 mm/kasan/common.c:328
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook+0x52/0x3a0 mm/slab.h:737
slab_alloc_node mm/slub.c:3398 [inline]
slab_alloc mm/slub.c:3406 [inline]
__kmem_cache_alloc_lru mm/slub.c:3413 [inline]
kmem_cache_alloc+0x10c/0x2d0 mm/slub.c:3422
sk_prot_alloc+0x58/0x200 net/core/sock.c:2040
sk_alloc+0x36/0x350 net/core/sock.c:2099
inet_create+0x651/0xeb0 net/ipv4/af_inet.c:319
__sock_create+0x488/0x910 net/socket.c:1550
rds_tcp_conn_path_connect+0x2b2/0xbb0
rds_connect_worker+0x1d5/0x290 net/rds/threads.c:176
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup