[v6.1] possible deadlock in pie_timer


syzbot

Aug 1, 2023, 10:26:58 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d2a6dc4eaf6d Linux 6.1.42
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=169ab96ea80000
kernel config: https://syzkaller.appspot.com/x/.config?x=54d239cfb343e1e3
dashboard link: https://syzkaller.appspot.com/bug?extid=0129f8b98c208e6606fc
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/48a245e1f181/disk-d2a6dc4e.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/36fe96f5b416/vmlinux-d2a6dc4e.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d53f0286f35a/bzImage-d2a6dc4e.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+0129f8...@syzkaller.appspotmail.com

netlink: 12 bytes leftover after parsing attributes in process `syz-executor.1'.
=====================================================
WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
6.1.42-syzkaller #0 Not tainted
-----------------------------------------------------
syz-executor.1/2582 [HC0[0]:SC0[2]:HE1:SE0] is trying to acquire:
ffffffff8d202ba0 (fs_reclaim){+.+.}-{0:0}, at: might_alloc include/linux/sched/mm.h:271 [inline]
ffffffff8d202ba0 (fs_reclaim){+.+.}-{0:0}, at: slab_pre_alloc_hook+0x2a/0x2a0 mm/slab.h:710

and this task is already holding:
ffff88807a6b2108 (&sch->q.lock){+.-.}-{2:2}, at: netem_change+0x17e/0x1ea0 net/sched/sch_netem.c:969
which would create a new lock dependency:
(&sch->q.lock){+.-.}-{2:2} -> (fs_reclaim){+.+.}-{0:0}

but this new dependency connects a SOFTIRQ-irq-safe lock:
(&sch->q.lock){+.-.}-{2:2}

... which became SOFTIRQ-irq-safe at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
spin_lock include/linux/spinlock.h:350 [inline]
pie_timer+0x150/0x310 net/sched/sch_pie.c:428
call_timer_fn+0x19e/0x6b0 kernel/time/timer.c:1474
expire_timers kernel/time/timer.c:1519 [inline]
__run_timers+0x67c/0x890 kernel/time/timer.c:1790
run_timer_softirq+0x63/0xf0 kernel/time/timer.c:1803
__do_softirq+0x2e9/0xa4c kernel/softirq.c:571
invoke_softirq kernel/softirq.c:445 [inline]
__irq_exit_rcu+0x155/0x240 kernel/softirq.c:650
irq_exit_rcu+0x5/0x20 kernel/softirq.c:662
sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1106
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:649
lock_acquire+0x26f/0x5a0 kernel/locking/lockdep.c:5673
rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:306
rcu_read_lock include/linux/rcupdate.h:747 [inline]
batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:408 [inline]
batadv_nc_worker+0xc1/0x5b0 net/batman-adv/network-coding.c:719
process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2292
worker_thread+0xa5f/0x1210 kernel/workqueue.c:2439
kthread+0x26e/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

to a SOFTIRQ-irq-unsafe lock:
(fs_reclaim){+.+.}-{0:0}

... which became SOFTIRQ-irq-unsafe at:
...
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__fs_reclaim_acquire mm/page_alloc.c:4683 [inline]
fs_reclaim_acquire+0x83/0x120 mm/page_alloc.c:4697
might_alloc include/linux/sched/mm.h:271 [inline]
slab_pre_alloc_hook+0x2a/0x2a0 mm/slab.h:710
slab_alloc_node mm/slub.c:3318 [inline]
__kmem_cache_alloc_node+0x47/0x260 mm/slub.c:3437
kmalloc_trace+0x26/0xe0 mm/slab_common.c:1045
kmalloc include/linux/slab.h:553 [inline]
kzalloc include/linux/slab.h:689 [inline]
alloc_workqueue_attrs+0x46/0xc0 kernel/workqueue.c:3397
wq_numa_init+0x122/0x4b0 kernel/workqueue.c:5962
workqueue_init+0x22/0x59d kernel/workqueue.c:6089
kernel_init_freeable+0x40a/0x61f init/main.c:1614
kernel_init+0x19/0x290 init/main.c:1519
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

other info that might help us debug this:

Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               local_irq_disable();
                               lock(&sch->q.lock);
                               lock(fs_reclaim);
  <Interrupt>
    lock(&sch->q.lock);

*** DEADLOCK ***

2 locks held by syz-executor.1/2582:
#0: ffffffff8e297a68 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
#0: ffffffff8e297a68 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x720/0xf00 net/core/rtnetlink.c:6100
#1: ffff88807a6b2108 (&sch->q.lock){+.-.}-{2:2}, at: netem_change+0x17e/0x1ea0 net/sched/sch_netem.c:969

the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
-> (&sch->q.lock){+.-.}-{2:2} {
HARDIRQ-ON-W at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:355 [inline]
dev_reset_queue+0x131/0x1a0 net/sched/sch_generic.c:1291
netdev_for_each_tx_queue include/linux/netdevice.h:2453 [inline]
dev_deactivate_many+0x525/0xaf0 net/sched/sch_generic.c:1359
dev_deactivate+0x177/0x270 net/sched/sch_generic.c:1382
linkwatch_do_dev+0x104/0x160 net/core/link_watch.c:166
__linkwatch_run_queue+0x448/0x6b0 net/core/link_watch.c:221
linkwatch_event+0x48/0x50 net/core/link_watch.c:264
process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2292
worker_thread+0xa5f/0x1210 kernel/workqueue.c:2439
kthread+0x26e/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
IN-SOFTIRQ-W at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
spin_lock include/linux/spinlock.h:350 [inline]
pie_timer+0x150/0x310 net/sched/sch_pie.c:428
call_timer_fn+0x19e/0x6b0 kernel/time/timer.c:1474
expire_timers kernel/time/timer.c:1519 [inline]
__run_timers+0x67c/0x890 kernel/time/timer.c:1790
run_timer_softirq+0x63/0xf0 kernel/time/timer.c:1803
__do_softirq+0x2e9/0xa4c kernel/softirq.c:571
invoke_softirq kernel/softirq.c:445 [inline]
__irq_exit_rcu+0x155/0x240 kernel/softirq.c:650
irq_exit_rcu+0x5/0x20 kernel/softirq.c:662
sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1106
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:649
lock_acquire+0x26f/0x5a0 kernel/locking/lockdep.c:5673
rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:306
rcu_read_lock include/linux/rcupdate.h:747 [inline]
batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:408 [inline]
batadv_nc_worker+0xc1/0x5b0 net/batman-adv/network-coding.c:719
process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2292
worker_thread+0xa5f/0x1210 kernel/workqueue.c:2439
kthread+0x26e/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
INITIAL USE at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:355 [inline]
dev_reset_queue+0x131/0x1a0 net/sched/sch_generic.c:1291
netdev_for_each_tx_queue include/linux/netdevice.h:2453 [inline]
dev_deactivate_many+0x525/0xaf0 net/sched/sch_generic.c:1359
dev_deactivate+0x177/0x270 net/sched/sch_generic.c:1382
linkwatch_do_dev+0x104/0x160 net/core/link_watch.c:166
__linkwatch_run_queue+0x448/0x6b0 net/core/link_watch.c:221
linkwatch_event+0x48/0x50 net/core/link_watch.c:264
process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2292
worker_thread+0xa5f/0x1210 kernel/workqueue.c:2439
kthread+0x26e/0x300 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
}
... key at: [<ffffffff920b6f60>] qdisc_alloc.__key+0x0/0x20

the dependencies between the lock to be acquired
and SOFTIRQ-irq-unsafe lock:
-> (fs_reclaim){+.+.}-{0:0} {
HARDIRQ-ON-W at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__fs_reclaim_acquire mm/page_alloc.c:4683 [inline]
fs_reclaim_acquire+0x83/0x120 mm/page_alloc.c:4697
might_alloc include/linux/sched/mm.h:271 [inline]
slab_pre_alloc_hook+0x2a/0x2a0 mm/slab.h:710
slab_alloc_node mm/slub.c:3318 [inline]
__kmem_cache_alloc_node+0x47/0x260 mm/slub.c:3437
kmalloc_trace+0x26/0xe0 mm/slab_common.c:1045
kmalloc include/linux/slab.h:553 [inline]
kzalloc include/linux/slab.h:689 [inline]
alloc_workqueue_attrs+0x46/0xc0 kernel/workqueue.c:3397
wq_numa_init+0x122/0x4b0 kernel/workqueue.c:5962
workqueue_init+0x22/0x59d kernel/workqueue.c:6089
kernel_init_freeable+0x40a/0x61f init/main.c:1614
kernel_init+0x19/0x290 init/main.c:1519
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
SOFTIRQ-ON-W at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__fs_reclaim_acquire mm/page_alloc.c:4683 [inline]
fs_reclaim_acquire+0x83/0x120 mm/page_alloc.c:4697
might_alloc include/linux/sched/mm.h:271 [inline]
slab_pre_alloc_hook+0x2a/0x2a0 mm/slab.h:710
slab_alloc_node mm/slub.c:3318 [inline]
__kmem_cache_alloc_node+0x47/0x260 mm/slub.c:3437
kmalloc_trace+0x26/0xe0 mm/slab_common.c:1045
kmalloc include/linux/slab.h:553 [inline]
kzalloc include/linux/slab.h:689 [inline]
alloc_workqueue_attrs+0x46/0xc0 kernel/workqueue.c:3397
wq_numa_init+0x122/0x4b0 kernel/workqueue.c:5962
workqueue_init+0x22/0x59d kernel/workqueue.c:6089
kernel_init_freeable+0x40a/0x61f init/main.c:1614
kernel_init+0x19/0x290 init/main.c:1519
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
INITIAL USE at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__fs_reclaim_acquire mm/page_alloc.c:4683 [inline]
fs_reclaim_acquire+0x83/0x120 mm/page_alloc.c:4697
might_alloc include/linux/sched/mm.h:271 [inline]
slab_pre_alloc_hook+0x2a/0x2a0 mm/slab.h:710
slab_alloc_node mm/slub.c:3318 [inline]
__kmem_cache_alloc_node+0x47/0x260 mm/slub.c:3437
kmalloc_trace+0x26/0xe0 mm/slab_common.c:1045
kmalloc include/linux/slab.h:553 [inline]
kzalloc include/linux/slab.h:689 [inline]
alloc_workqueue_attrs+0x46/0xc0 kernel/workqueue.c:3397
wq_numa_init+0x122/0x4b0 kernel/workqueue.c:5962
workqueue_init+0x22/0x59d kernel/workqueue.c:6089
kernel_init_freeable+0x40a/0x61f init/main.c:1614
kernel_init+0x19/0x290 init/main.c:1519
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
}
... key at: [<ffffffff8d202ba0>] __fs_reclaim_map+0x0/0xe0
... acquired at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__fs_reclaim_acquire mm/page_alloc.c:4683 [inline]
fs_reclaim_acquire+0x83/0x120 mm/page_alloc.c:4697
might_alloc include/linux/sched/mm.h:271 [inline]
slab_pre_alloc_hook+0x2a/0x2a0 mm/slab.h:710
slab_alloc_node mm/slub.c:3318 [inline]
__kmem_cache_alloc_node+0x47/0x260 mm/slub.c:3437
__do_kmalloc_node mm/slab_common.c:954 [inline]
__kmalloc_node+0xa2/0x230 mm/slab_common.c:962
kmalloc_node include/linux/slab.h:579 [inline]
kvmalloc_node+0x6e/0x180 mm/util.c:581
kvmalloc include/linux/slab.h:706 [inline]
get_dist_table+0x91/0x380 net/sched/sch_netem.c:788
netem_change+0x947/0x1ea0 net/sched/sch_netem.c:985
netem_init+0x58/0xb0 net/sched/sch_netem.c:1072
qdisc_create+0x8a1/0x1220 net/sched/sch_api.c:1314
tc_modify_qdisc+0x9e0/0x1da0 net/sched/sch_api.c:1723
rtnetlink_rcv_msg+0x776/0xf00 net/core/rtnetlink.c:6103
netlink_rcv_skb+0x1cd/0x410 net/netlink/af_netlink.c:2525
netlink_unicast_kernel net/netlink/af_netlink.c:1328 [inline]
netlink_unicast+0x7bf/0x990 net/netlink/af_netlink.c:1354
netlink_sendmsg+0xa26/0xd60 net/netlink/af_netlink.c:1903
sock_sendmsg_nosec net/socket.c:716 [inline]
sock_sendmsg net/socket.c:736 [inline]
____sys_sendmsg+0x59e/0x8f0 net/socket.c:2482
___sys_sendmsg net/socket.c:2536 [inline]
__sys_sendmsg+0x2a9/0x390 net/socket.c:2565
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd


stack backtrace:
CPU: 0 PID: 2582 Comm: syz-executor.1 Not tainted 6.1.42-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_bad_irq_dependency kernel/locking/lockdep.c:2612 [inline]
check_irq_usage kernel/locking/lockdep.c:2851 [inline]
check_prev_add kernel/locking/lockdep.c:3102 [inline]
check_prevs_add kernel/locking/lockdep.c:3217 [inline]
validate_chain+0x4d2e/0x58e0 kernel/locking/lockdep.c:3832
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
__fs_reclaim_acquire mm/page_alloc.c:4683 [inline]
fs_reclaim_acquire+0x83/0x120 mm/page_alloc.c:4697
might_alloc include/linux/sched/mm.h:271 [inline]
slab_pre_alloc_hook+0x2a/0x2a0 mm/slab.h:710
slab_alloc_node mm/slub.c:3318 [inline]
__kmem_cache_alloc_node+0x47/0x260 mm/slub.c:3437
__do_kmalloc_node mm/slab_common.c:954 [inline]
__kmalloc_node+0xa2/0x230 mm/slab_common.c:962
kmalloc_node include/linux/slab.h:579 [inline]
kvmalloc_node+0x6e/0x180 mm/util.c:581
kvmalloc include/linux/slab.h:706 [inline]
get_dist_table+0x91/0x380 net/sched/sch_netem.c:788
netem_change+0x947/0x1ea0 net/sched/sch_netem.c:985
netem_init+0x58/0xb0 net/sched/sch_netem.c:1072
qdisc_create+0x8a1/0x1220 net/sched/sch_api.c:1314
tc_modify_qdisc+0x9e0/0x1da0 net/sched/sch_api.c:1723
rtnetlink_rcv_msg+0x776/0xf00 net/core/rtnetlink.c:6103
netlink_rcv_skb+0x1cd/0x410 net/netlink/af_netlink.c:2525
netlink_unicast_kernel net/netlink/af_netlink.c:1328 [inline]
netlink_unicast+0x7bf/0x990 net/netlink/af_netlink.c:1354
netlink_sendmsg+0xa26/0xd60 net/netlink/af_netlink.c:1903
sock_sendmsg_nosec net/socket.c:716 [inline]
sock_sendmsg net/socket.c:736 [inline]
____sys_sendmsg+0x59e/0x8f0 net/socket.c:2482
___sys_sendmsg net/socket.c:2536 [inline]
__sys_sendmsg+0x2a9/0x390 net/socket.c:2565
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fa34ce7cae9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fa34db150c8 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007fa34cf9bf80 RCX: 00007fa34ce7cae9
RDX: 0000000000000000 RSI: 00000000200007c0 RDI: 0000000000000004
RBP: 00007fa34cec847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fa34cf9bf80 R15: 00007ffc77a266f8
</TASK>
BUG: sleeping function called from invalid context at include/linux/sched/mm.h:274
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2582, name: syz-executor.1
preempt_count: 201, expected: 0
RCU nest depth: 0, expected: 0
INFO: lockdep is turned off.
Preemption disabled at:
[<0000000000000000>] 0x0
CPU: 0 PID: 2582 Comm: syz-executor.1 Not tainted 6.1.42-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2023
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
__might_resched+0x5cb/0x780 kernel/sched/core.c:9941
might_alloc include/linux/sched/mm.h:274 [inline]
slab_pre_alloc_hook+0x4a/0x2a0 mm/slab.h:710
slab_alloc_node mm/slub.c:3318 [inline]
__kmem_cache_alloc_node+0x47/0x260 mm/slub.c:3437
__do_kmalloc_node mm/slab_common.c:954 [inline]
__kmalloc_node+0xa2/0x230 mm/slab_common.c:962
kmalloc_node include/linux/slab.h:579 [inline]
kvmalloc_node+0x6e/0x180 mm/util.c:581
kvmalloc include/linux/slab.h:706 [inline]
get_dist_table+0x91/0x380 net/sched/sch_netem.c:788
netem_change+0x947/0x1ea0 net/sched/sch_netem.c:985
netem_init+0x58/0xb0 net/sched/sch_netem.c:1072
qdisc_create+0x8a1/0x1220 net/sched/sch_api.c:1314
tc_modify_qdisc+0x9e0/0x1da0 net/sched/sch_api.c:1723
rtnetlink_rcv_msg+0x776/0xf00 net/core/rtnetlink.c:6103
netlink_rcv_skb+0x1cd/0x410 net/netlink/af_netlink.c:2525
netlink_unicast_kernel net/netlink/af_netlink.c:1328 [inline]
netlink_unicast+0x7bf/0x990 net/netlink/af_netlink.c:1354
netlink_sendmsg+0xa26/0xd60 net/netlink/af_netlink.c:1903
sock_sendmsg_nosec net/socket.c:716 [inline]
sock_sendmsg net/socket.c:736 [inline]
____sys_sendmsg+0x59e/0x8f0 net/socket.c:2482
___sys_sendmsg net/socket.c:2536 [inline]
__sys_sendmsg+0x2a9/0x390 net/socket.c:2565
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fa34ce7cae9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fa34db150c8 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007fa34cf9bf80 RCX: 00007fa34ce7cae9
RDX: 0000000000000000 RSI: 00000000200007c0 RDI: 0000000000000004
RBP: 00007fa34cec847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fa34cf9bf80 R15: 00007ffc77a266f8
</TASK>
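Reading the two traces together: netem_change() calls get_dist_table(), which does a GFP_KERNEL kvmalloc() while holding the qdisc tree lock (&sch->q.lock) — the same lock pie_timer() takes from softirq context, hence the SOFTIRQ-safe -> SOFTIRQ-unsafe inversion and the "sleeping function called from invalid context" warning. A minimal sketch of the conventional remedy (this is illustrative only, not the actual upstream patch; build_dist_table() and the delay_dist field usage here are assumptions):

```c
/*
 * Sketch: perform any allocation that may sleep or enter reclaim
 * BEFORE taking the BH-safe qdisc tree lock, and do only pointer
 * swaps while it is held.  Helper name build_dist_table() is
 * hypothetical; the real code lives in net/sched/sch_netem.c.
 */
static int netem_change_sketch(struct Qdisc *sch, struct nlattr *dist_attr)
{
	struct netem_sched_data *q = qdisc_priv(sch);
	struct disttable *new_dist = NULL, *old_dist;

	if (dist_attr) {
		/* GFP_KERNEL is fine here: no spinlock held yet. */
		new_dist = build_dist_table(dist_attr);	/* hypothetical helper */
		if (IS_ERR(new_dist))
			return PTR_ERR(new_dist);
	}

	sch_tree_lock(sch);		/* takes &sch->q.lock (softirq-safe) */
	old_dist = q->delay_dist;
	q->delay_dist = new_dist;	/* pointer swap only: cannot sleep */
	sch_tree_unlock(sch);

	kvfree(old_dist);		/* free the old table outside the lock */
	return 0;
}
```

The alternative of switching the allocation to GFP_ATOMIC would silence the warning but can fail under memory pressure for large distribution tables, so preallocating outside the lock is usually preferred.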


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to change the bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Nov 9, 2023, 9:26:15 PM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
Crashes have not happened for a while; there is no reproducer and no activity.