[v6.1] possible deadlock in pcpu_alloc

From: syzbot
Date: Apr 5, 2024, 7:38:25 PM
To: syzkaller...@googlegroups.com

Hello,

syzbot found the following issue on:

HEAD commit: 347385861c50 Linux 6.1.84
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=11a38503180000
kernel config: https://syzkaller.appspot.com/x/.config?x=40dfd13b04bfc094
dashboard link: https://syzkaller.appspot.com/bug?extid=29ce28af963e0eb23843
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/584c64d6360b/disk-34738586.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/ebe1b8334610/vmlinux-34738586.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e3d485b54a02/bzImage-34738586.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+29ce28...@syzkaller.appspotmail.com

=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
6.1.84-syzkaller #0 Not tainted
-----------------------------------------------------
syz-executor.1/3561 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff88807ca76820 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932

and this task is already holding:
ffffffff8d1e9458 (pcpu_lock){-.-.}-{2:2}, at: free_percpu+0xab/0xea0 mm/percpu.c:2277
which would create a new lock dependency:
(pcpu_lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+...}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
(pcpu_lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
pcpu_alloc+0x320/0x18f0 mm/percpu.c:1780
__alloc kernel/bpf/memalloc.c:135 [inline]
alloc_bulk+0x614/0x8d0 kernel/bpf/memalloc.c:174
irq_work_single+0xd5/0x230 kernel/irq_work.c:211
irq_work_run_list kernel/irq_work.c:242 [inline]
irq_work_run+0x187/0x350 kernel/irq_work.c:251
__sysvec_irq_work+0xbb/0x360 arch/x86/kernel/irq_work.c:22
sysvec_irq_work+0x89/0xb0 arch/x86/kernel/irq_work.c:17
asm_sysvec_irq_work+0x16/0x20 arch/x86/include/asm/idtentry.h:679
htab_unlock_bucket kernel/bpf/hashtab.c:180 [inline]
__htab_percpu_map_update_elem+0x6d2/0x7e0 kernel/bpf/hashtab.c:1294
bpf_percpu_hash_update+0x134/0x1f0 kernel/bpf/hashtab.c:2336
bpf_map_update_value+0x282/0x6f0 kernel/bpf/syscall.c:200
generic_map_update_batch+0x579/0x920 kernel/bpf/syscall.c:1684
bpf_map_do_batch+0x4d0/0x620
__sys_bpf+0x658/0x6c0
__do_sys_bpf kernel/bpf/syscall.c:5109 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5107 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5107
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd

to a HARDIRQ-irq-unsafe lock:
(&htab->buckets[i].lock){+...}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_update_common+0x20c/0xa30 net/core/sock_map.c:1000
sock_map_update_elem_sys+0x5a0/0x910 net/core/sock_map.c:583
map_update_elem+0x503/0x680 kernel/bpf/syscall.c:1448
__sys_bpf+0x337/0x6c0 kernel/bpf/syscall.c:4993
__do_sys_bpf kernel/bpf/syscall.c:5109 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5107 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5107
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(pcpu_lock);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(pcpu_lock);

*** DEADLOCK ***
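
In short: &htab->buckets[i].lock is only ever taken with spin_lock_bh(), so lockdep classes it HARDIRQ-unsafe, while pcpu_lock is taken with spin_lock_irqsave() and, via irq_work, from hard-IRQ context (pcpu_alloc()), making it HARDIRQ-safe. Nesting the former under the latter, as the free_percpu() path does here, creates the forbidden safe -> unsafe dependency. Below is a minimal sketch of the two acquisition patterns, with hypothetical demo_* locks standing in for the real ones; it compresses the report's separate code paths into one module and would not by itself reproduce the full splat, since the demo lock is never actually taken from hard-IRQ context.

#include <linux/module.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_irqsafe_lock);	/* stands in for pcpu_lock */
static DEFINE_SPINLOCK(demo_bh_lock);		/* stands in for htab->buckets[i].lock */

/* Like sock_hash_update_common(): BH-only locking, so the lock class is
 * HARDIRQ-unsafe -- a hard IRQ may interrupt this critical section. */
static void demo_syscall_path(void)
{
	spin_lock_bh(&demo_bh_lock);
	/* ... update element ... */
	spin_unlock_bh(&demo_bh_lock);
}

/* Like free_percpu() with a BPF program on its tracepoint: the
 * HARDIRQ-unsafe lock is acquired inside an IRQs-off section, which
 * records the pcpu_lock -> bucket lock dependency lockdep flags.
 * Calling spin_unlock_bh() with IRQs off is itself invalid and, with
 * CONFIG_TRACE_IRQFLAGS, re-enables IRQs -- consistent with the bogus
 * irq_restore warning later in this report. */
static void demo_free_path(void)
{
	unsigned long flags;

	spin_lock_irqsave(&demo_irqsafe_lock, flags);
	spin_lock_bh(&demo_bh_lock);	/* new dependency created here */
	spin_unlock_bh(&demo_bh_lock);
	spin_unlock_irqrestore(&demo_irqsafe_lock, flags);
}

static int __init demo_init(void)
{
	demo_syscall_path();
	demo_free_path();
	return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");

In the report itself the nesting is indirect: the tracepoint in free_percpu() runs a BPF program that ends up in sock_hash_delete_elem() (see the sketch after the log below).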

2 locks held by syz-executor.1/3561:
#0: ffffffff8d1e9458 (pcpu_lock){-.-.}-{2:2}, at: free_percpu+0xab/0xea0 mm/percpu.c:2277
#1: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
#1: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
#1: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#1: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run3+0x146/0x440 kernel/trace/bpf_trace.c:2313

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (pcpu_lock){-.-.}-{2:2} {
IN-HARDIRQ-W at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
pcpu_alloc+0x320/0x18f0 mm/percpu.c:1780
__alloc kernel/bpf/memalloc.c:135 [inline]
alloc_bulk+0x614/0x8d0 kernel/bpf/memalloc.c:174
irq_work_single+0xd5/0x230 kernel/irq_work.c:211
irq_work_run_list kernel/irq_work.c:242 [inline]
irq_work_run+0x187/0x350 kernel/irq_work.c:251
__sysvec_irq_work+0xbb/0x360 arch/x86/kernel/irq_work.c:22
sysvec_irq_work+0x89/0xb0 arch/x86/kernel/irq_work.c:17
asm_sysvec_irq_work+0x16/0x20 arch/x86/include/asm/idtentry.h:679
htab_unlock_bucket kernel/bpf/hashtab.c:180 [inline]
__htab_percpu_map_update_elem+0x6d2/0x7e0 kernel/bpf/hashtab.c:1294
bpf_percpu_hash_update+0x134/0x1f0 kernel/bpf/hashtab.c:2336
bpf_map_update_value+0x282/0x6f0 kernel/bpf/syscall.c:200
generic_map_update_batch+0x579/0x920 kernel/bpf/syscall.c:1684
bpf_map_do_batch+0x4d0/0x620
__sys_bpf+0x658/0x6c0
__do_sys_bpf kernel/bpf/syscall.c:5109 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5107 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5107
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
IN-SOFTIRQ-W at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
free_percpu+0xab/0xea0 mm/percpu.c:2277
blk_stat_free_callback_rcu+0x3e/0x70 block/blk-stat.c:176
rcu_do_batch kernel/rcu/tree.c:2296 [inline]
rcu_core+0xad4/0x17e0 kernel/rcu/tree.c:2556
__do_softirq+0x2e9/0xa4c kernel/softirq.c:571
invoke_softirq kernel/softirq.c:445 [inline]
__irq_exit_rcu+0x155/0x240 kernel/softirq.c:650
irq_exit_rcu+0x5/0x20 kernel/softirq.c:662
sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1106
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
kasan_check_range+0x1/0x290 mm/kasan/generic.c:188
memset+0x1f/0x40 mm/kasan/shadow.c:44
lockdep_init_map_type+0x9d/0x900 kernel/locking/lockdep.c:4804
lockdep_init_map_waits include/linux/lockdep.h:191 [inline]
lockdep_init_map_wait include/linux/lockdep.h:198 [inline]
__raw_spin_lock_init+0x41/0x100 kernel/locking/spinlock_debug.c:24
__mutex_init+0x5f/0xf0 kernel/locking/mutex.c:49
blk_alloc_queue+0x342/0x570 block/blk-core.c:411
blk_mq_init_queue_data block/blk-mq.c:4094 [inline]
blk_mq_init_queue+0x66/0x120 block/blk-mq.c:4108
scsi_alloc_sdev+0x74b/0xb30 drivers/scsi/scsi_scan.c:335
scsi_probe_and_add_lun+0x1bf/0x4ac0 drivers/scsi/scsi_scan.c:1186
__scsi_scan_target+0x20d/0x11d0 drivers/scsi/scsi_scan.c:1718
scsi_scan_channel drivers/scsi/scsi_scan.c:1806 [inline]
scsi_scan_host_selected+0x37a/0x690 drivers/scsi/scsi_scan.c:1835
do_scsi_scan_host drivers/scsi/scsi_scan.c:1974 [inline]
do_scan_async+0x12e/0x780 drivers/scsi/scsi_scan.c:1984
async_run_entry_fn+0xa2/0x410 kernel/async.c:127
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
INITIAL USE at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
pcpu_stats_chunk_alloc mm/percpu-internal.h:211 [inline]
pcpu_setup_first_chunk+0xdd2/0x172c mm/percpu.c:2775
pcpu_embed_first_chunk+0xb24/0xbd2 mm/percpu.c:3158
setup_per_cpu_areas+0xd4/0xbcb arch/x86/kernel/setup_percpu.c:156
start_kernel+0xc3/0x53f init/main.c:965
secondary_startup_64_no_verify+0xcf/0xdb
}
... key at: [<ffffffff8d1e9458>] pcpu_lock+0x18/0x160

the dependencies between the lock to be acquired
and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+...}-{2:2} {
HARDIRQ-ON-W at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_update_common+0x20c/0xa30 net/core/sock_map.c:1000
sock_map_update_elem_sys+0x5a0/0x910 net/core/sock_map.c:583
map_update_elem+0x503/0x680 kernel/bpf/syscall.c:1448
__sys_bpf+0x337/0x6c0 kernel/bpf/syscall.c:4993
__do_sys_bpf kernel/bpf/syscall.c:5109 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5107 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5107
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
INITIAL USE at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_update_common+0x20c/0xa30 net/core/sock_map.c:1000
sock_map_update_elem_sys+0x5a0/0x910 net/core/sock_map.c:583
map_update_elem+0x503/0x680 kernel/bpf/syscall.c:1448
__sys_bpf+0x337/0x6c0 kernel/bpf/syscall.c:4993
__do_sys_bpf kernel/bpf/syscall.c:5109 [inline]
__se_sys_bpf kernel/bpf/syscall.c:5107 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5107
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
}
... key at: [<ffffffff920b1340>] sock_hash_alloc.__key+0x0/0x20
... acquired at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:603 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run3+0x231/0x440 kernel/trace/bpf_trace.c:2313
trace_percpu_free_percpu+0x1d6/0x260 include/trace/events/percpu.h:54
free_percpu+0x91b/0xea0 mm/percpu.c:2304
cleanup_entry net/ipv4/netfilter/arp_tables.c:513 [inline]
__do_replace+0x8b4/0xc30 net/ipv4/netfilter/arp_tables.c:931
do_replace net/ipv4/netfilter/arp_tables.c:985 [inline]
do_arpt_set_ctl+0x2353/0x3260 net/ipv4/netfilter/arp_tables.c:1421
nf_setsockopt+0x28a/0x2b0 net/netfilter/nf_sockopt.c:101
__sys_setsockopt+0x57e/0xa00 net/socket.c:2283
__do_sys_setsockopt net/socket.c:2294 [inline]
__se_sys_setsockopt net/socket.c:2291 [inline]
__x64_sys_setsockopt+0xb1/0xc0 net/socket.c:2291
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd


stack backtrace:
CPU: 1 PID: 3561 Comm: syz-executor.1 Not tainted 6.1.84-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_bad_irq_dependency kernel/locking/lockdep.c:2604 [inline]
check_irq_usage kernel/locking/lockdep.c:2843 [inline]
check_prev_add kernel/locking/lockdep.c:3094 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain+0x4d16/0x5950 kernel/locking/lockdep.c:3825
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:603 [inline]
bpf_prog_run include/linux/filter.h:610 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run3+0x231/0x440 kernel/trace/bpf_trace.c:2313
trace_percpu_free_percpu+0x1d6/0x260 include/trace/events/percpu.h:54
free_percpu+0x91b/0xea0 mm/percpu.c:2304
cleanup_entry net/ipv4/netfilter/arp_tables.c:513 [inline]
__do_replace+0x8b4/0xc30 net/ipv4/netfilter/arp_tables.c:931
do_replace net/ipv4/netfilter/arp_tables.c:985 [inline]
do_arpt_set_ctl+0x2353/0x3260 net/ipv4/netfilter/arp_tables.c:1421
nf_setsockopt+0x28a/0x2b0 net/netfilter/nf_sockopt.c:101
__sys_setsockopt+0x57e/0xa00 net/socket.c:2283
__do_sys_setsockopt net/socket.c:2294 [inline]
__se_sys_setsockopt net/socket.c:2291 [inline]
__x64_sys_setsockopt+0xb1/0xc0 net/socket.c:2291
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f1c3927fbba
Code: ff ff ff c3 0f 1f 40 00 48 c7 c2 b0 ff ff ff f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 49 89 ca b8 36 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 06 c3 0f 1f 44 00 00 48 c7 c2 b0 ff ff ff f7
RSP: 002b:00007ffccc5c3498 EFLAGS: 00000246 ORIG_RAX: 0000000000000036
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f1c3927fbba
RDX: 0000000000000060 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 00007ffccc5c3500 R08: 0000000000000408 R09: 00007ffccc5c3897
R10: 00007f1c3937a4f0 R11: 0000000000000246 R12: 00007ffccc5c34ac
R13: 000000000001787c R14: 00000000000177e5 R15: 0000000000000002
</TASK>
------------[ cut here ]------------
raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 1 PID: 3561 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x1d/0x20 kernel/locking/irqflag-debug.c:10
Modules linked in:
CPU: 1 PID: 3561 Comm: syz-executor.1 Not tainted 6.1.84-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
RIP: 0010:warn_bogus_irq_restore+0x1d/0x20 kernel/locking/irqflag-debug.c:10
Code: 24 48 c7 c7 00 bc ea 8a e8 6c f5 fd ff 80 3d 2f 5b d5 03 00 74 01 c3 c6 05 25 5b d5 03 01 48 c7 c7 60 e6 eb 8a e8 23 64 c8 f6 <0f> 0b c3 41 56 53 48 83 ec 10 65 48 8b 04 25 28 00 00 00 48 89 44
RSP: 0018:ffffc900047bf678 EFLAGS: 00010246
RAX: 2e1d849a8f525a00 RBX: 1ffff920008f7ed4 RCX: ffff88807940bb80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc900047bf710 R08: ffffffff81527eae R09: fffffbfff1ce6d46
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 1ffff920008f7ed0 R14: ffffc900047bf6a0 R15: 0000000000000246
FS: 0000555556a19480(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffc1ac57cd8 CR3: 000000005e518000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:151 [inline]
_raw_spin_unlock_irqrestore+0x118/0x130 kernel/locking/spinlock.c:194
spin_unlock_irqrestore include/linux/spinlock.h:406 [inline]
free_percpu+0x92c/0xea0 mm/percpu.c:2306
cleanup_entry net/ipv4/netfilter/arp_tables.c:513 [inline]
__do_replace+0x8b4/0xc30 net/ipv4/netfilter/arp_tables.c:931
do_replace net/ipv4/netfilter/arp_tables.c:985 [inline]
do_arpt_set_ctl+0x2353/0x3260 net/ipv4/netfilter/arp_tables.c:1421
nf_setsockopt+0x28a/0x2b0 net/netfilter/nf_sockopt.c:101
__sys_setsockopt+0x57e/0xa00 net/socket.c:2283
__do_sys_setsockopt net/socket.c:2294 [inline]
__se_sys_setsockopt net/socket.c:2291 [inline]
__x64_sys_setsockopt+0xb1/0xc0 net/socket.c:2291
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f1c3927fbba
Code: ff ff ff c3 0f 1f 40 00 48 c7 c2 b0 ff ff ff f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 49 89 ca b8 36 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 06 c3 0f 1f 44 00 00 48 c7 c2 b0 ff ff ff f7
RSP: 002b:00007ffccc5c3498 EFLAGS: 00000246 ORIG_RAX: 0000000000000036
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f1c3927fbba
RDX: 0000000000000060 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 00007ffccc5c3500 R08: 0000000000000408 R09: 00007ffccc5c3897
R10: 00007f1c3937a4f0 R11: 0000000000000246 R12: 00007ffccc5c34ac
R13: 000000000001787c R14: 00000000000177e5 R15: 0000000000000002
</TASK>
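
Editor's note on the trigger chain, pieced together from the traces above: free_percpu() takes pcpu_lock with spin_lock_irqsave() (mm/percpu.c:2277) and, still holding it, fires the percpu_free_percpu tracepoint (mm/percpu.c:2304); a BPF program attached to that tracepoint deletes from a BPF_MAP_TYPE_SOCKHASH map, entering sock_hash_delete_elem(), which takes the bucket lock with spin_lock_bh(). The follow-on "raw_local_irq_restore() called with IRQs enabled" warning is consistent with spin_unlock_bh() re-enabling interrupts (the CONFIG_TRACE_IRQFLAGS path in __local_bh_enable_ip()) before free_percpu() reaches spin_unlock_irqrestore(). syzbot has no reproducer yet, but a BPF program of roughly the following shape would exercise the same chain; the map name, sizes, program name, and section name are assumptions, not taken from the report.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical sockhash; key and value sizes are illustrative. */
struct {
	__uint(type, BPF_MAP_TYPE_SOCKHASH);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u64);
} demo_sock_hash SEC(".maps");

/* Fires while free_percpu() holds pcpu_lock with IRQs disabled. */
SEC("tracepoint/percpu/percpu_free_percpu")
int on_percpu_free(void *ctx)
{
	__u32 key = 0;

	/* Ends up in sock_hash_delete_elem(), which takes
	 * htab->buckets[i].lock with spin_lock_bh() -- the
	 * HARDIRQ-unsafe acquisition in the splat above. */
	bpf_map_delete_elem(&demo_sock_hash, &key);
	return 0;
}

char _license[] SEC("license") = "GPL";

Userspace would load and attach this, add a socket to the map, and then trigger any percpu free (the arp_tables setsockopt() in the trace is one such path).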


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup