[v6.1] INFO: task hung in switchdev_deferred_process_work


syzbot

Jan 28, 2024, 11:44:19 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 883d1a956208 Linux 6.1.75
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=11d37d5fe80000
kernel config: https://syzkaller.appspot.com/x/.config?x=e191632f30d1d52a
dashboard link: https://syzkaller.appspot.com/bug?extid=e0e881452cc0b4a68469
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/0213a1024e59/disk-883d1a95.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/4196bd2137f2/vmlinux-883d1a95.xz
kernel image: https://storage.googleapis.com/syzbot-assets/945849d03d1a/bzImage-883d1a95.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+e0e881...@syzkaller.appspotmail.com

INFO: task kworker/1:5:10446 blocked for more than 143 seconds.
Not tainted 6.1.75-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:5 state:D stack:24216 pid:10446 ppid:2 flags:0x00004000
Workqueue: events switchdev_deferred_process_work
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6693
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b9/0xd80 kernel/locking/mutex.c:747
switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:75
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
INFO: task syz-executor.0:15331 blocked for more than 143 seconds.
Not tainted 6.1.75-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.0 state:D stack:25832 pid:15331 ppid:1 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6693
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b9/0xd80 kernel/locking/mutex.c:747
rtnl_lock net/core/rtnetlink.c:74 [inline]
rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
netlink_rcv_skb+0x1cd/0x410 net/netlink/af_netlink.c:2508
netlink_unicast_kernel net/netlink/af_netlink.c:1326 [inline]
netlink_unicast+0x7d8/0x970 net/netlink/af_netlink.c:1352
netlink_sendmsg+0xa26/0xd60 net/netlink/af_netlink.c:1874
sock_sendmsg_nosec net/socket.c:718 [inline]
__sock_sendmsg net/socket.c:730 [inline]
__sys_sendto+0x480/0x600 net/socket.c:2148
__do_sys_sendto net/socket.c:2160 [inline]
__se_sys_sendto net/socket.c:2156 [inline]
__x64_sys_sendto+0xda/0xf0 net/socket.c:2156
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f4ee2a7ea9c
RSP: 002b:00007ffc71e30d50 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007f4ee36d4620 RCX: 00007f4ee2a7ea9c
RDX: 0000000000000028 RSI: 00007f4ee36d4670 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007ffc71e30da4 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 0000000000000000 R14: 00007f4ee36d4670 R15: 0000000000000000
</TASK>
INFO: task syz-executor.4:15340 blocked for more than 144 seconds.
Not tainted 6.1.75-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:25832 pid:15340 ppid:1 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6693
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b9/0xd80 kernel/locking/mutex.c:747
rtnl_lock net/core/rtnetlink.c:74 [inline]
rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
netlink_rcv_skb+0x1cd/0x410 net/netlink/af_netlink.c:2508
netlink_unicast_kernel net/netlink/af_netlink.c:1326 [inline]
netlink_unicast+0x7d8/0x970 net/netlink/af_netlink.c:1352
netlink_sendmsg+0xa26/0xd60 net/netlink/af_netlink.c:1874
sock_sendmsg_nosec net/socket.c:718 [inline]
__sock_sendmsg net/socket.c:730 [inline]
__sys_sendto+0x480/0x600 net/socket.c:2148
__do_sys_sendto net/socket.c:2160 [inline]
__se_sys_sendto net/socket.c:2156 [inline]
__x64_sys_sendto+0xda/0xf0 net/socket.c:2156
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f1ca7e7ea9c
RSP: 002b:00007ffe067d09e0 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007f1ca8ad4620 RCX: 00007f1ca7e7ea9c
RDX: 0000000000000028 RSI: 00007f1ca8ad4670 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007ffe067d0a34 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 0000000000000000 R14: 00007f1ca8ad4670 R15: 0000000000000000
</TASK>
INFO: task syz-executor.3:15362 blocked for more than 145 seconds.
Not tainted 6.1.75-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.3 state:D stack:25832 pid:15362 ppid:1 flags:0x00004006
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5245 [inline]
__schedule+0x142d/0x4550 kernel/sched/core.c:6558
schedule+0xbf/0x180 kernel/sched/core.c:6634
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6693
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0x6b9/0xd80 kernel/locking/mutex.c:747
rtnl_lock net/core/rtnetlink.c:74 [inline]
rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
netlink_rcv_skb+0x1cd/0x410 net/netlink/af_netlink.c:2508
netlink_unicast_kernel net/netlink/af_netlink.c:1326 [inline]
netlink_unicast+0x7d8/0x970 net/netlink/af_netlink.c:1352
netlink_sendmsg+0xa26/0xd60 net/netlink/af_netlink.c:1874
sock_sendmsg_nosec net/socket.c:718 [inline]
__sock_sendmsg net/socket.c:730 [inline]
__sys_sendto+0x480/0x600 net/socket.c:2148
__do_sys_sendto net/socket.c:2160 [inline]
__se_sys_sendto net/socket.c:2156 [inline]
__x64_sys_sendto+0xda/0xf0 net/socket.c:2156
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f16f8a7ea9c
RSP: 002b:00007fff6ba67820 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007f16f96d4620 RCX: 00007f16f8a7ea9c
RDX: 0000000000000028 RSI: 00007f16f96d4670 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007fff6ba67874 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 0000000000000000 R14: 00007f16f96d4670 R15: 0000000000000000
</TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8d12a490 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:516
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8d12ac90 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:516
1 lock held by khungtaskd/28:
#0: ffffffff8d12a2c0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:318 [inline]
#0: ffffffff8d12a2c0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:759 [inline]
#0: ffffffff8d12a2c0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6494
2 locks held by getty/3306:
#0: ffff88814b1af098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
#1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2188
5 locks held by kworker/u4:5/3627:
#0: ffff888012616938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc900055bfd20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e289710 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:563
#3: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: default_device_exit_batch+0xd7/0x630 net/core/dev.c:11368
#4: ffffffff8d12f780 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x48/0x5f0 kernel/rcu/tree.c:3986
3 locks held by kworker/1:5/10446:
#0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc90006167d20 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:75
3 locks held by kworker/0:8/11820:
#0: ffff888012471d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc90005c87d20 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: reg_check_chans_work+0x8d/0xdb0 net/wireless/reg.c:2498
3 locks held by kworker/0:11/11824:
#0: ffff888028798938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc900030ffd20 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4639
3 locks held by kworker/1:7/12478:
#0: ffff888028798938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc90003b4fd20 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4639
3 locks held by kworker/0:14/13351:
#0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#1: ffffc900032dfd20 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
#2: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xa/0x50 net/core/link_watch.c:263
3 locks held by syz-executor.1/14123:
1 lock held by syz-executor.0/15331:
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.4/15340:
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.3/15362:
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.0/15920:
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.4/15925:
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.3/15930:
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.1/16429:
1 lock held by syz-executor.1/16474:
#0: ffffffff8d12f8b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#0: ffffffff8d12f8b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3b0/0x8a0 kernel/rcu/tree_exp.h:950
1 lock held by syz-executor.0/16476:
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.4/16480:
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
1 lock held by syz-executor.3/16488:
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e295968 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x7c1/0xff0 net/core/rtnetlink.c:6119
2 locks held by syz-executor.1/16491:
1 lock held by syz-executor.1/16500:
2 locks held by syz-executor.2/16507:
2 locks held by syz-executor.1/16511:
2 locks held by syz-executor.1/16515:
3 locks held by syz-executor.1/16521:
3 locks held by syz-executor.1/16527:
2 locks held by syz-executor.1/16532:
3 locks held by syz-executor.1/16538:
2 locks held by syz-executor.2/16542:
2 locks held by syz-executor.2/16550:
2 locks held by syz-executor.1/16551:
3 locks held by syz-executor.2/16557:
2 locks held by syz-executor.1/16562:
2 locks held by syz-executor.2/16565:
3 locks held by syz-executor.1/16573:
2 locks held by syz-executor.2/16574:
2 locks held by syz-executor.2/16583:
2 locks held by syz-executor.1/16584:
3 locks held by syz-executor.2/16594:
2 locks held by syz-executor.1/16595:
3 locks held by syz-executor.2/16602:
3 locks held by syz-executor.1/16621:
2 locks held by syz-executor.2/16625:
2 locks held by syz-executor.2/16635:
3 locks held by syz-executor.1/16636:
2 locks held by syz-executor.1/16646:
3 locks held by syz-executor.2/16647:
2 locks held by syz-executor.2/16654:

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.1.75-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xf88/0xfd0 kernel/hung_task.c:377
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 16450 Comm: syz-executor.1 Not tainted 6.1.75-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
RIP: 0010:debug_spin_unlock kernel/locking/spinlock_debug.c:102 [inline]
RIP: 0010:do_raw_spin_unlock+0xb6/0x8a0 kernel/locking/spinlock_debug.c:140
Code: 48 8b 2d 2d c7 96 7e 49 39 6d 00 0f 85 30 02 00 00 4d 8d 77 08 4c 89 f3 48 c1 eb 03 48 b8 00 00 00 00 00 fc ff df 0f b6 04 03 <84> c0 0f 85 ea 03 00 00 41 8b 06 65 8b 0d 58 71 96 7e 39 c8 0f 85
RSP: 0018:ffffc90006fc70b0 EFLAGS: 00000a06
RAX: 0000000000000000 RBX: 1ffff1100a203bb1 RCX: ffffffff816ba7d4
RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffff88805101dd80
RBP: ffff8880242c5940 R08: dffffc0000000000 R09: ffffed100a203bb1
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff1100a203bb2
R13: ffff88805101dd90 R14: ffff88805101dd88 R15: ffff88805101dd80
FS: 0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000555555deada8 CR3: 00000000908e7000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
<TASK>
__raw_spin_unlock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_unlock+0x1a/0x40 kernel/locking/spinlock.c:186
spin_unlock include/linux/spinlock.h:390 [inline]
delete_from_page_cache_batch+0xaa2/0xc90 mm/filemap.c:338
truncate_inode_pages_range+0x370/0x1340 mm/truncate.c:369
ext4_evict_inode+0x39c/0x1150 fs/ext4/inode.c:221
evict+0x2a4/0x620 fs/inode.c:666
__dentry_kill+0x436/0x650 fs/dcache.c:607
dentry_kill+0xbb/0x290
dput+0x21a/0x470 fs/dcache.c:913
__fput+0x5e4/0x890 fs/file_table.c:328
task_work_run+0x246/0x300 kernel/task_work.c:179
exit_task_work include/linux/task_work.h:38 [inline]
do_exit+0xa73/0x26a0 kernel/exit.c:869
do_group_exit+0x202/0x2b0 kernel/exit.c:1019
get_signal+0x16f7/0x17d0 kernel/signal.c:2862
arch_do_signal_or_restart+0xb0/0x1a10 arch/x86/kernel/signal.c:871
exit_to_user_mode_loop+0x6a/0x100 kernel/entry/common.c:168
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
irqentry_exit_to_user_mode+0x5/0x40 kernel/entry/common.c:310
exc_general_protection+0x3e0/0x590 arch/x86/kernel/traps.c:729
asm_exc_general_protection+0x22/0x30 arch/x86/include/asm/idtentry.h:564
RIP: 0033:0x7fde4547cdb1
Code: Unable to access opcode bytes at 0x7fde4547cd87.
RSP: 002b:00000000200002e0 EFLAGS: 00010217
RAX: 0000000000000000 RBX: 00007fde455abf80 RCX: 00007fde4547cda9
RDX: 0000000020000300 RSI: 00000000200002e0 RDI: 0000000000220080
RBP: 00007fde454c947a R08: 0000000020000380 R09: 0000000020000380
R10: 0000000020000340 R11: 0000000000000202 R12: 0000000000000000
R13: 000000000000000b R14: 00007fde455abf80 R15: 00007ffecacc7b58
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup