[v6.6] INFO: task hung in vfs_unlink

syzbot

Dec 30, 2025, 2:37:26 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 5fa4793a2d2d Linux 6.6.119
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=1205c422580000
kernel config: https://syzkaller.appspot.com/x/.config?x=691a6769a86ac817
dashboard link: https://syzkaller.appspot.com/bug?extid=3f6f958bcfcdd83ddc3e
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/63699875f1dd/disk-5fa4793a.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/8506652fcb6f/vmlinux-5fa4793a.xz
kernel image: https://storage.googleapis.com/syzbot-assets/1b30ceed1710/bzImage-5fa4793a.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+3f6f95...@syzkaller.appspotmail.com

INFO: task syz-executor:5763 blocked for more than 146 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor state:D stack:20488 pid:5763 ppid:1 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6832
rwsem_down_write_slowpath+0xbd2/0xfa0 kernel/locking/rwsem.c:1178
__down_write_common kernel/locking/rwsem.c:1306 [inline]
__down_write kernel/locking/rwsem.c:1315 [inline]
down_write+0x1a7/0x1f0 kernel/locking/rwsem.c:1574
inode_lock include/linux/fs.h:804 [inline]
vfs_unlink+0xf2/0x600 fs/namei.c:4322
do_unlinkat+0x328/0x570 fs/namei.c:4399
__do_sys_unlink fs/namei.c:4447 [inline]
__se_sys_unlink fs/namei.c:4445 [inline]
__x64_sys_unlink+0x49/0x50 fs/namei.c:4445
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f036c78ecf7
RSP: 002b:00007ffe3478c348 EFLAGS: 00000206 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f036c78ecf7
RDX: 00007ffe3478c370 RSI: 00007ffe3478c400 RDI: 00007ffe3478c400
RBP: 00007ffe3478c400 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000100 R11: 0000000000000206 R12: 00007ffe3478d4f0
R13: 00007f036c813d7d R14: 000000000001c886 R15: 00007ffe3478e5c0
</TASK>
INFO: task kworker/0:5:5861 blocked for more than 147 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:5 state:D stack:24808 pid:5861 ppid:2 flags:0x00004000
Workqueue: xfs-sync/loop3 xfs_flush_inodes_worker
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
wb_wait_for_completion+0x166/0x290 fs/fs-writeback.c:192
sync_inodes_sb+0x1bc/0x9e0 fs/fs-writeback.c:2772
xfs_flush_inodes_worker+0x61/0x80 fs/xfs/xfs_super.c:617
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
INFO: task syz.3.62:6194 blocked for more than 148 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.62 state:D stack:24208 pid:6194 ppid:5763 flags:0x00004002
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_timeout+0x9b/0x280 kernel/time/timer.c:2144
do_wait_for_common kernel/sched/completion.c:95 [inline]
__wait_for_common kernel/sched/completion.c:116 [inline]
wait_for_common kernel/sched/completion.c:127 [inline]
wait_for_completion+0x2bd/0x590 kernel/sched/completion.c:148
__flush_work+0x895/0x9f0 kernel/workqueue.c:3430
xfs_file_buffered_write+0x33c/0x940 fs/xfs/xfs_file.c:801
__kernel_write_iter+0x274/0x670 fs/read_write.c:517
dump_emit_page fs/coredump.c:957 [inline]
dump_user_range+0x3f6/0x800 fs/coredump.c:984
elf_core_dump+0x3114/0x36e0 fs/binfmt_elf.c:2184
do_coredump+0x1755/0x2480 fs/coredump.c:833
get_signal+0x1133/0x1400 kernel/signal.c:2888
arch_do_signal_or_restart+0x9c/0x7b0 arch/x86/kernel/signal.c:310
exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
exit_to_user_mode_prepare+0xf6/0x180 kernel/entry/common.c:210
irqentry_exit_to_user_mode+0x9/0x40 kernel/entry/common.c:315
exc_page_fault+0x8f/0x110 arch/x86/mm/fault.c:1524
asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:608
RIP: 0033:0x7f036c78f751
RSP: 002b:fffffffffffffe70 EFLAGS: 00010217
RAX: 0000000000000000 RBX: 00007f036c9e6090 RCX: 00007f036c78f749
RDX: 0000000000000000 RSI: fffffffffffffe70 RDI: 0000000000008000
RBP: 00007f036c813f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000000000
R13: 00007f036c9e6128 R14: 00007f036c9e6090 R15: 00007ffe3478e1a8
</TASK>

Showing all locks held in the system:
1 lock held by pool_workqueue_/3:
#0: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#0: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
3 locks held by kworker/0:1/9:
2 locks held by kworker/1:1/27:
1 lock held by khungtaskd/29:
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
2 locks held by kworker/u4:3/49:
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90000ba7d00 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90000ba7d00 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
4 locks held by kworker/u4:7/1106:
#0: ffff888019e63938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888019e63938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc9000484fd00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc9000484fd00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88807af5a608 (sb_internal#2){.+.+}-{0:0}, at: xfs_bmapi_convert_one_delalloc fs/xfs/libxfs/xfs_bmap.c:4534 [inline]
#2: ffff88807af5a608 (sb_internal#2){.+.+}-{0:0}, at: xfs_bmapi_convert_delalloc+0x2f4/0x1490 fs/xfs/libxfs/xfs_bmap.c:4661
#3: ffff88805dfb0118 (&xfs_nondir_ilock_class#3){++++}-{3:3}, at: xfs_bmapi_convert_one_delalloc fs/xfs/libxfs/xfs_bmap.c:4539 [inline]
#3: ffff88805dfb0118 (&xfs_nondir_ilock_class#3){++++}-{3:3}, at: xfs_bmapi_convert_delalloc+0x320/0x1490 fs/xfs/libxfs/xfs_bmap.c:4661
2 locks held by kworker/1:2/5095:
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc900105ffd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc900105ffd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
2 locks held by getty/5533:
#0: ffff88802c95b0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000326e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
3 locks held by syz-executor/5763:
#0: ffff88807af5a418 (sb_writers#16){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
#1: ffff88805dcd5888 (&inode->i_sb->s_type->i_mutex_dir_key/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
#1: ffff88805dcd5888 (&inode->i_sb->s_type->i_mutex_dir_key/1){+.+.}-{3:3}, at: do_unlinkat+0x17c/0x570 fs/namei.c:4384
#2: ffff88805dfb0348 (&sb->s_type->i_mutex_key#21){++++}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#2: ffff88805dfb0348 (&sb->s_type->i_mutex_key#21){++++}-{3:3}, at: vfs_unlink+0xf2/0x600 fs/namei.c:4322
4 locks held by kworker/0:5/5861:
#0: ffff88801e3e0538 ((wq_completion)xfs-sync/loop3){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88801e3e0538 ((wq_completion)xfs-sync/loop3){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90004a9fd00 ((work_completion)(&mp->m_flush_inodes_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90004a9fd00 ((work_completion)(&mp->m_flush_inodes_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88807af5a0e0 (&type->s_umount_key#86){++++}-{3:3}, at: xfs_flush_inodes_worker+0x45/0x80 fs/xfs/xfs_super.c:616
#3: ffff888141f387d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:364 [inline]
#3: ffff888141f387d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x1a0/0x9e0 fs/fs-writeback.c:2770
2 locks held by syz.3.62/6194:
#0: ffff88807af5a418 (sb_writers#16){.+.+}-{0:0}, at: do_coredump+0x1734/0x2480 fs/coredump.c:832
#1: ffff88805dfb0348 (&sb->s_type->i_mutex_key#21){++++}-{3:3}, at: xfs_ilock+0xee/0x360 fs/xfs/xfs_inode.c:195
2 locks held by kworker/0:7/6363:
1 lock held by syz.5.313/7870:
#0: ffff88805dcd9420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
#0: ffff88805dcd9420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
#0: ffff88805dcd9420 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1421
2 locks held by syz.1.314/7875:
#0: ffff88807af5a0e0 (&type->s_umount_key#86){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
#0: ffff88807af5a0e0 (&type->s_umount_key#86){++++}-{3:3}, at: super_lock+0x167/0x360 fs/super.c:117
#1: ffff888141f387d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:364 [inline]
#1: ffff888141f387d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x1a0/0x9e0 fs/fs-writeback.c:2770
2 locks held by syz.1.314/7877:
#0: ffff88807af5a0e0 (&type->s_umount_key#86){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
#0: ffff88807af5a0e0 (&type->s_umount_key#86){++++}-{3:3}, at: super_lock+0x167/0x360 fs/super.c:117
#1: ffff888141f387d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:364 [inline]
#1: ffff888141f387d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x1a0/0x9e0 fs/fs-writeback.c:2770
2 locks held by dhcpcd/7953:
#0: ffff888050f0c130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1767 [inline]
#0: ffff888050f0c130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3258
#1: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
#1: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
1 lock held by dhcpcd/7954:
#0: ffff88807b06e130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1767 [inline]
#0: ffff88807b06e130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3258
1 lock held by dhcpcd/7955:
#0: ffff88802e598130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1767 [inline]
#0: ffff88802e598130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3258
1 lock held by dhcpcd/7956:
#0: ffff88802caf8130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1767 [inline]
#0: ffff88802caf8130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3258
1 lock held by dhcpcd/7957:
#0: ffff88804c986130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1767 [inline]
#0: ffff88804c986130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3258
1 lock held by dhcpcd/7958:
#0: ffff88804c984130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1767 [inline]
#0: ffff88804c984130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3258
1 lock held by dhcpcd/7959:
#0: ffff88804c982130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1767 [inline]
#0: ffff88804c982130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3258
1 lock held by dhcpcd/7960:
#0: ffff88804c980130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1767 [inline]
#0: ffff88804c980130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3258
1 lock held by dhcpcd/7961:
#0: ffff888020ef4130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1767 [inline]
#0: ffff888020ef4130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3258
1 lock held by dhcpcd/7962:
#0: ffff888020ef0130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1767 [inline]
#0: ffff888020ef0130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3258
1 lock held by sed/7969:

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 5838 Comm: kworker/1:6 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Workqueue: xfs-buf/loop3 xfs_buf_ioend_work
RIP: 0010:io_serial_out+0x7c/0xc0 drivers/tty/serial/8250/8250_port.c:424
Code: fa e8 fc 44 89 f9 d3 e5 49 83 c6 40 4c 89 f0 48 c1 e8 03 42 80 3c 20 00 74 08 4c 89 f7 e8 ac 68 40 fd 41 03 2e 89 d8 89 ea ee <5b> 41 5c 41 5e 41 5f 5d c3 44 89 f9 80 e1 07 38 c1 7c aa 4c 89 ff
RSP: 0018:ffffc90004a4f250 EFLAGS: 00000006
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 00000000000003f9 RSI: 0000000000000000 RDI: 0000000000000020
RBP: 00000000000003f9 R08: 0000000000000003 R09: 0000000000000004
R10: dffffc0000000000 R11: fffff52000949e2c R12: dffffc0000000000
R13: 0000000000000000 R14: ffffffff971c4240 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000555566015608 CR3: 0000000051de2000 CR4: 00000000003506e0
Call Trace:
<TASK>
serial_out drivers/tty/serial/8250/8250.h:122 [inline]
serial8250_clear_IER drivers/tty/serial/8250/8250_port.c:-1 [inline]
serial8250_console_write+0x2b3/0x17a0 drivers/tty/serial/8250/8250_port.c:3439
console_emit_next_record kernel/printk/printk.c:2944 [inline]
console_flush_all+0x6cd/0xd00 kernel/printk/printk.c:3000
console_unlock+0xae/0x340 kernel/printk/printk.c:3069
vprintk_emit+0x477/0x600 kernel/printk/printk.c:2341
_printk+0xd0/0x110 kernel/printk/printk.c:2366
print_hex_dump+0x1a9/0x260 lib/hexdump.c:285
xfs_hex_dump+0x3d/0x50 fs/xfs/xfs_message.c:110
xfs_buf_verifier_error+0x1cc/0x2a0 fs/xfs/xfs_error.c:460
xfs_agf_read_verify+0x1d5/0x250 fs/xfs/libxfs/xfs_alloc.c:-1
xfs_buf_ioend+0x260/0x6f0 fs/xfs/xfs_buf.c:1309
process_one_work kernel/workqueue.c:2634 [inline]
process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
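When triaging hung-task reports like the one above, it can help to pull the blocked-task banners out of a console log programmatically, e.g. to see how many distinct tasks are stuck and for how long. A minimal sketch; the helper name and regex are my own, only the banner format printed by kernel/hung_task.c comes from the report:

```python
import re

# Matches the hung-task banner, e.g.
#   INFO: task syz-executor:5763 blocked for more than 146 seconds.
# comm may itself contain ':' (kworker/0:5), so the greedy '.+' is
# intentional: the last ':<digits>' before ' blocked' is the pid.
BLOCKED_RE = re.compile(
    r"INFO: task (?P<comm>.+):(?P<pid>\d+) blocked for more than (?P<secs>\d+) seconds\."
)

def blocked_tasks(log: str):
    """Yield (comm, pid, seconds) for each blocked-task record in a console log."""
    for m in BLOCKED_RE.finditer(log):
        yield m.group("comm"), int(m.group("pid")), int(m.group("secs"))

log = """\
INFO: task syz-executor:5763 blocked for more than 146 seconds.
INFO: task kworker/0:5:5861 blocked for more than 147 seconds.
INFO: task syz.3.62:6194 blocked for more than 148 seconds.
"""
print(list(blocked_tasks(log)))
# → [('syz-executor', 5763, 146), ('kworker/0:5', 5861, 147), ('syz.3.62', 6194, 148)]
```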


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup