[v5.15] possible deadlock in hrtimer_run_queues


syzbot

Mar 15, 2024, 8:42:25 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 574362648507 Linux 5.15.151
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=159c0a31180000
kernel config: https://syzkaller.appspot.com/x/.config?x=6c9a42d9e3519ca9
dashboard link: https://syzkaller.appspot.com/bug?extid=df17c7f1816a27780d00
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/f00d4062000b/disk-57436264.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3a74c2b6ca62/vmlinux-57436264.xz
kernel image: https://storage.googleapis.com/syzbot-assets/93bd706dc219/bzImage-57436264.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+df17c7...@syzkaller.appspotmail.com

EXT4-fs error (device loop1): ext4_do_update_inode:5160: inode #3: comm syz-executor.1: corrupted inode contents
EXT4-fs error (device loop1): ext4_dirty_inode:5993: inode #3: comm syz-executor.1: mark_inode_dirty error
=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
5.15.151-syzkaller #0 Not tainted
-----------------------------------------------------
syz-executor.1/19910 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff8880835e5820 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937

and this task is already holding:
ffff8880b9a39f18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
which would create a new lock dependency:
(&pool->lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+.-.}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
(&pool->lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x56d/0xd00
queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
run_local_timers kernel/time/timer.c:1762 [inline]
update_process_times+0xca/0x200 kernel/time/timer.c:1787
tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
__sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
unwind_next_frame+0x146/0x1fa0 arch/x86/kernel/unwind_orc.c:448
arch_stack_walk+0x10d/0x140 arch/x86/kernel/stacktrace.c:25
stack_trace_save+0x113/0x1c0 kernel/stacktrace.c:122
kasan_save_stack mm/kasan/common.c:38 [inline]
kasan_set_track mm/kasan/common.c:46 [inline]
set_alloc_info mm/kasan/common.c:434 [inline]
__kasan_slab_alloc+0x8e/0xc0 mm/kasan/common.c:467
kasan_slab_alloc include/linux/kasan.h:254 [inline]
slab_post_alloc_hook+0x53/0x380 mm/slab.h:519
slab_alloc_node mm/slub.c:3220 [inline]
slab_alloc mm/slub.c:3228 [inline]
kmem_cache_alloc+0xf3/0x280 mm/slub.c:3233
__d_alloc+0x2a/0x700 fs/dcache.c:1745
d_alloc fs/dcache.c:1824 [inline]
d_alloc_parallel+0xca/0x1390 fs/dcache.c:2576
__lookup_slow+0x111/0x3d0 fs/namei.c:1648
lookup_one_len+0x187/0x2d0 fs/namei.c:2718
start_creating+0x111/0x200 fs/tracefs/inode.c:426
tracefs_create_file+0x9c/0x5d0 fs/tracefs/inode.c:493
trace_create_file+0x2e/0x60 kernel/trace/trace.c:8991
event_create_dir+0x9b4/0xdf0 kernel/trace/trace_events.c:2435
__trace_early_add_event_dirs+0x6e/0x1c0 kernel/trace/trace_events.c:3491
early_event_add_tracer+0x52/0x70 kernel/trace/trace_events.c:3664
event_trace_init+0x100/0x180 kernel/trace/trace_events.c:3824
tracer_init_tracefs+0x153/0x2a2 kernel/trace/trace.c:9897
do_one_initcall+0x22b/0x7a0 init/main.c:1299
do_initcall_level+0x157/0x207 init/main.c:1372
do_initcalls+0x49/0x86 init/main.c:1388
kernel_init_freeable+0x425/0x5b5 init/main.c:1612
kernel_init+0x19/0x290 init/main.c:1503
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298

to a HARDIRQ-irq-unsafe lock:
(&htab->buckets[i].lock){+.-.}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
map_create+0x185e/0x2070 kernel/bpf/syscall.c:933
__sys_bpf+0x276/0x670 kernel/bpf/syscall.c:4611
__do_sys_bpf kernel/bpf/syscall.c:4733 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4731 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4731
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb

other info that might help us debug this:

Possible interrupt unsafe locking scenario:

CPU0 CPU1
---- ----
lock(&htab->buckets[i].lock);
local_irq_disable();
lock(&pool->lock);
lock(&htab->buckets[i].lock);
<Interrupt>
lock(&pool->lock);

*** DEADLOCK ***

8 locks held by syz-executor.1/19910:
#0: ffff888073dee0e0 (&type->s_umount_key#28/1){+.+.}-{3:3}, at: alloc_super+0x210/0x940 fs/super.c:229
#1: ffff888082e08428 (&dquot->dq_lock){+.+.}-{3:3}, at: dquot_acquire+0x64/0x680 fs/quota/dquot.c:458
#2: ffff888073dee208 (&s->s_dquot.dqio_sem){++++}-{3:3}, at: v2_write_dquot+0x9b/0x190 fs/quota/quota_v2.c:354
#3: ffff888082d72a58 (&ei->i_data_sem/2){++++}-{3:3}, at: ext4_map_blocks+0x9e0/0x1e00 fs/ext4/inode.c:638
#4: ffff88801a43b760 (&fq->mq_flush_lock){..-.}-{2:2}, at: spin_lock_irq include/linux/spinlock.h:388 [inline]
#4: ffff88801a43b760 (&fq->mq_flush_lock){..-.}-{2:2}, at: blk_insert_flush+0x4d3/0x5e0 block/blk-flush.c:441
#5: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268
#6: ffff8880b9a39f18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
#7: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&pool->lock){-.-.}-{2:2} {
IN-HARDIRQ-W at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x56d/0xd00
queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
run_local_timers kernel/time/timer.c:1762 [inline]
update_process_times+0xca/0x200 kernel/time/timer.c:1787
tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
__sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
unwind_next_frame+0x146/0x1fa0 arch/x86/kernel/unwind_orc.c:448
arch_stack_walk+0x10d/0x140 arch/x86/kernel/stacktrace.c:25
stack_trace_save+0x113/0x1c0 kernel/stacktrace.c:122
kasan_save_stack mm/kasan/common.c:38 [inline]
kasan_set_track mm/kasan/common.c:46 [inline]
set_alloc_info mm/kasan/common.c:434 [inline]
__kasan_slab_alloc+0x8e/0xc0 mm/kasan/common.c:467
kasan_slab_alloc include/linux/kasan.h:254 [inline]
slab_post_alloc_hook+0x53/0x380 mm/slab.h:519
slab_alloc_node mm/slub.c:3220 [inline]
slab_alloc mm/slub.c:3228 [inline]
kmem_cache_alloc+0xf3/0x280 mm/slub.c:3233
__d_alloc+0x2a/0x700 fs/dcache.c:1745
d_alloc fs/dcache.c:1824 [inline]
d_alloc_parallel+0xca/0x1390 fs/dcache.c:2576
__lookup_slow+0x111/0x3d0 fs/namei.c:1648
lookup_one_len+0x187/0x2d0 fs/namei.c:2718
start_creating+0x111/0x200 fs/tracefs/inode.c:426
tracefs_create_file+0x9c/0x5d0 fs/tracefs/inode.c:493
trace_create_file+0x2e/0x60 kernel/trace/trace.c:8991
event_create_dir+0x9b4/0xdf0 kernel/trace/trace_events.c:2435
__trace_early_add_event_dirs+0x6e/0x1c0 kernel/trace/trace_events.c:3491
early_event_add_tracer+0x52/0x70 kernel/trace/trace_events.c:3664
event_trace_init+0x100/0x180 kernel/trace/trace_events.c:3824
tracer_init_tracefs+0x153/0x2a2 kernel/trace/trace.c:9897
do_one_initcall+0x22b/0x7a0 init/main.c:1299
do_initcall_level+0x157/0x207 init/main.c:1372
do_initcalls+0x49/0x86 init/main.c:1388
kernel_init_freeable+0x425/0x5b5 init/main.c:1612
kernel_init+0x19/0x290 init/main.c:1503
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
IN-SOFTIRQ-W at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x56d/0xd00
call_timer_fn+0x16d/0x560 kernel/time/timer.c:1421
expire_timers kernel/time/timer.c:1461 [inline]
__run_timers+0x6a8/0x890 kernel/time/timer.c:1737
__do_softirq+0x3b3/0x93a kernel/softirq.c:558
invoke_softirq kernel/softirq.c:432 [inline]
__irq_exit_rcu+0x155/0x240 kernel/softirq.c:637
irq_exit_rcu+0x5/0x20 kernel/softirq.c:649
sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1096
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
default_idle+0xb/0x10 arch/x86/kernel/process.c:717
default_idle_call+0x81/0xc0 kernel/sched/idle.c:112
cpuidle_idle_call kernel/sched/idle.c:194 [inline]
do_idle+0x271/0x670 kernel/sched/idle.c:306
cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:403
start_kernel+0x48c/0x535 init/main.c:1137
secondary_startup_64_no_verify+0xb1/0xbb
INITIAL USE at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
pwq_adjust_max_active+0x14e/0x550 kernel/workqueue.c:3783
link_pwq kernel/workqueue.c:3849 [inline]
alloc_and_link_pwqs kernel/workqueue.c:4243 [inline]
alloc_workqueue+0xbb4/0x13f0 kernel/workqueue.c:4365
workqueue_init_early+0x7b2/0x96c kernel/workqueue.c:6099
start_kernel+0x1fa/0x535 init/main.c:1024
secondary_startup_64_no_verify+0xb1/0xbb
}
... key at: [<ffffffff8f5d8c60>] init_worker_pool.__key+0x0/0x20

the dependencies between the lock to be acquired
and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+.-.}-{2:2} {
HARDIRQ-ON-W at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
map_create+0x185e/0x2070 kernel/bpf/syscall.c:933
__sys_bpf+0x276/0x670 kernel/bpf/syscall.c:4611
__do_sys_bpf kernel/bpf/syscall.c:4733 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4731 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4731
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
IN-SOFTIRQ-W at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
bpf_prog_2c29ac5cdc6b1842+0x3a/0xa8c
bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
__bpf_prog_run include/linux/filter.h:625 [inline]
bpf_prog_run include/linux/filter.h:632 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run4+0x1ea/0x390 kernel/trace/bpf_trace.c:1919
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:201
trace_mm_page_alloc include/trace/events/kmem.h:201 [inline]
__alloc_pages+0x6e0/0x700 mm/page_alloc.c:5443
skb_page_frag_refill+0x220/0x4b0 net/core/sock.c:2650
add_recvbuf_mergeable drivers/net/virtio_net.c:1329 [inline]
try_fill_recv+0x48f/0x17c0 drivers/net/virtio_net.c:1370
virtnet_receive drivers/net/virtio_net.c:1484 [inline]
virtnet_poll+0x83b/0x1270 drivers/net/virtio_net.c:1585
__napi_poll+0xc7/0x440 net/core/dev.c:7035
napi_poll net/core/dev.c:7102 [inline]
net_rx_action+0x617/0xda0 net/core/dev.c:7189
__do_softirq+0x3b3/0x93a kernel/softirq.c:558
invoke_softirq kernel/softirq.c:432 [inline]
__irq_exit_rcu+0x155/0x240 kernel/softirq.c:637
irq_exit_rcu+0x5/0x20 kernel/softirq.c:649
common_interrupt+0xa4/0xc0 arch/x86/kernel/irq.c:240
asm_common_interrupt+0x22/0x40 arch/x86/include/asm/idtentry.h:629
__sanitizer_cov_trace_pc+0x4/0x60 kernel/kcov.c:193
string_nocheck lib/vsprintf.c:645 [inline]
string+0x1f8/0x2b0 lib/vsprintf.c:724
vsnprintf+0x11fc/0x1c70 lib/vsprintf.c:2811
tomoyo_supervisor+0x145/0x12c0 security/tomoyo/common.c:2069
tomoyo_audit_path_log security/tomoyo/file.c:168 [inline]
tomoyo_path_permission+0x243/0x360 security/tomoyo/file.c:587
tomoyo_path_perm+0x436/0x6b0 security/tomoyo/file.c:838
tomoyo_path_unlink+0xcc/0x100 security/tomoyo/tomoyo.c:149
security_path_unlink+0xd7/0x130 security/security.c:1170
do_unlinkat+0x3dd/0x950 fs/namei.c:4344
__do_sys_unlink fs/namei.c:4396 [inline]
__se_sys_unlink fs/namei.c:4394 [inline]
__x64_sys_unlink+0x45/0x50 fs/namei.c:4394
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
INITIAL USE at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
map_create+0x185e/0x2070 kernel/bpf/syscall.c:933
__sys_bpf+0x276/0x670 kernel/bpf/syscall.c:4611
__do_sys_bpf kernel/bpf/syscall.c:4733 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4731 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4731
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
}
... key at: [<ffffffff91789700>] sock_hash_alloc.__key+0x0/0x20
... acquired at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
bpf_prog_2c29ac5cdc6b1842+0x3a/0xa8c
bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
__bpf_prog_run include/linux/filter.h:625 [inline]
bpf_prog_run include/linux/filter.h:632 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run4+0x1ea/0x390 kernel/trace/bpf_trace.c:1919
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:201
trace_mm_page_alloc include/trace/events/kmem.h:201 [inline]
__alloc_pages+0x6e0/0x700 mm/page_alloc.c:5443
stack_depot_save+0x319/0x440 lib/stackdepot.c:302
save_stack+0x104/0x1e0 mm/page_owner.c:120
__set_page_owner+0x37/0x300 mm/page_owner.c:181
prep_new_page mm/page_alloc.c:2426 [inline]
get_page_from_freelist+0x322a/0x33c0 mm/page_alloc.c:4159
__alloc_pages+0x272/0x700 mm/page_alloc.c:5421
stack_depot_save+0x319/0x440 lib/stackdepot.c:302
kasan_save_stack+0x4d/0x60 mm/kasan/common.c:40
kasan_record_aux_stack+0xba/0x100 mm/kasan/generic.c:348
insert_work+0x54/0x3e0 kernel/workqueue.c:1366
__queue_work+0x963/0xd00 kernel/workqueue.c:1532
mod_delayed_work_on+0x101/0x250 kernel/workqueue.c:1753
kblockd_mod_delayed_work_on+0x25/0x30 block/blk-core.c:1636
blk_flush_queue_rq block/blk-flush.c:136 [inline]
blk_flush_complete_seq+0x6f9/0xce0 block/blk-flush.c:191
blk_insert_flush+0x4e7/0x5e0 block/blk-flush.c:442
blk_mq_submit_bio+0x161b/0x1c40 block/blk-mq.c:2258
__submit_bio+0x813/0x850 block/blk-core.c:917
__submit_bio_noacct_mq block/blk-core.c:997 [inline]
submit_bio_noacct+0x955/0xb30 block/blk-core.c:1027
submit_bio+0x2dd/0x560 block/blk-core.c:1089
submit_bh fs/buffer.c:3062 [inline]
__sync_dirty_buffer+0x245/0x380 fs/buffer.c:3157
ext4_commit_super+0x323/0x430 fs/ext4/super.c:5512
ext4_handle_error+0x52d/0x7a0 fs/ext4/super.c:658
__ext4_error_inode+0x236/0x400 fs/ext4/super.c:786
__ext4_mark_inode_dirty+0x207/0x860 fs/ext4/inode.c:5967
ext4_dirty_inode+0xbf/0x100 fs/ext4/inode.c:5993
__mark_inode_dirty+0x2fd/0xd60 fs/fs-writeback.c:2464
mark_inode_dirty_sync include/linux/fs.h:2443 [inline]
dquot_alloc_space_nofail include/linux/quotaops.h:303 [inline]
dquot_alloc_block_nofail include/linux/quotaops.h:329 [inline]
ext4_mb_new_blocks+0x20d8/0x4d50 fs/ext4/mballoc.c:5666
ext4_ext_map_blocks+0x1b0a/0x7690 fs/ext4/extents.c:4316
ext4_map_blocks+0xaad/0x1e00 fs/ext4/inode.c:645
ext4_getblk+0x19f/0x710 fs/ext4/inode.c:846
ext4_bread+0x2a/0x170 fs/ext4/inode.c:899
ext4_quota_write+0x21e/0x580 fs/ext4/super.c:6546
write_blk fs/quota/quota_tree.c:64 [inline]
get_free_dqblk+0x3a9/0x800 fs/quota/quota_tree.c:125
do_insert_tree+0x2b4/0x1c20 fs/quota/quota_tree.c:335
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
dq_insert_tree fs/quota/quota_tree.c:392 [inline]
qtree_write_dquot+0x3b9/0x530 fs/quota/quota_tree.c:411
v2_write_dquot+0x11c/0x190 fs/quota/quota_v2.c:358
dquot_acquire+0x34d/0x680 fs/quota/dquot.c:470
ext4_acquire_dquot+0x2e6/0x400 fs/ext4/super.c:6180
dqget+0x74e/0xe30 fs/quota/dquot.c:984
__dquot_initialize+0x2d9/0xe10 fs/quota/dquot.c:1562
ext4_process_orphan+0x57/0x2d0 fs/ext4/orphan.c:329
ext4_orphan_cleanup+0x9d9/0x1240 fs/ext4/orphan.c:474
ext4_fill_super+0x98de/0xa110 fs/ext4/super.c:4966
mount_bdev+0x2c9/0x3f0 fs/super.c:1387
legacy_get_tree+0xeb/0x180 fs/fs_context.c:611
vfs_get_tree+0x88/0x270 fs/super.c:1517
do_new_mount+0x2ba/0xb40 fs/namespace.c:3005
do_mount fs/namespace.c:3348 [inline]
__do_sys_mount fs/namespace.c:3556 [inline]
__se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3533
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb


stack backtrace:
CPU: 0 PID: 19910 Comm: syz-executor.1 Not tainted 5.15.151-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_bad_irq_dependency kernel/locking/lockdep.c:2567 [inline]
check_irq_usage kernel/locking/lockdep.c:2806 [inline]
check_prev_add kernel/locking/lockdep.c:3057 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x4d01/0x5930 kernel/locking/lockdep.c:3788
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
bpf_prog_2c29ac5cdc6b1842+0x3a/0xa8c
bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
__bpf_prog_run include/linux/filter.h:625 [inline]
bpf_prog_run include/linux/filter.h:632 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run4+0x1ea/0x390 kernel/trace/bpf_trace.c:1919
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:201
trace_mm_page_alloc include/trace/events/kmem.h:201 [inline]
__alloc_pages+0x6e0/0x700 mm/page_alloc.c:5443
stack_depot_save+0x319/0x440 lib/stackdepot.c:302
save_stack+0x104/0x1e0 mm/page_owner.c:120
__set_page_owner+0x37/0x300 mm/page_owner.c:181
prep_new_page mm/page_alloc.c:2426 [inline]
get_page_from_freelist+0x322a/0x33c0 mm/page_alloc.c:4159
__alloc_pages+0x272/0x700 mm/page_alloc.c:5421
stack_depot_save+0x319/0x440 lib/stackdepot.c:302
kasan_save_stack+0x4d/0x60 mm/kasan/common.c:40
kasan_record_aux_stack+0xba/0x100 mm/kasan/generic.c:348
insert_work+0x54/0x3e0 kernel/workqueue.c:1366
__queue_work+0x963/0xd00 kernel/workqueue.c:1532
mod_delayed_work_on+0x101/0x250 kernel/workqueue.c:1753
kblockd_mod_delayed_work_on+0x25/0x30 block/blk-core.c:1636
blk_flush_queue_rq block/blk-flush.c:136 [inline]
blk_flush_complete_seq+0x6f9/0xce0 block/blk-flush.c:191
blk_insert_flush+0x4e7/0x5e0 block/blk-flush.c:442
blk_mq_submit_bio+0x161b/0x1c40 block/blk-mq.c:2258
__submit_bio+0x813/0x850 block/blk-core.c:917
__submit_bio_noacct_mq block/blk-core.c:997 [inline]
submit_bio_noacct+0x955/0xb30 block/blk-core.c:1027
submit_bio+0x2dd/0x560 block/blk-core.c:1089
submit_bh fs/buffer.c:3062 [inline]
__sync_dirty_buffer+0x245/0x380 fs/buffer.c:3157
ext4_commit_super+0x323/0x430 fs/ext4/super.c:5512
ext4_handle_error+0x52d/0x7a0 fs/ext4/super.c:658
__ext4_error_inode+0x236/0x400 fs/ext4/super.c:786
__ext4_mark_inode_dirty+0x207/0x860 fs/ext4/inode.c:5967
ext4_dirty_inode+0xbf/0x100 fs/ext4/inode.c:5993
__mark_inode_dirty+0x2fd/0xd60 fs/fs-writeback.c:2464
mark_inode_dirty_sync include/linux/fs.h:2443 [inline]
dquot_alloc_space_nofail include/linux/quotaops.h:303 [inline]
dquot_alloc_block_nofail include/linux/quotaops.h:329 [inline]
ext4_mb_new_blocks+0x20d8/0x4d50 fs/ext4/mballoc.c:5666
ext4_ext_map_blocks+0x1b0a/0x7690 fs/ext4/extents.c:4316
ext4_map_blocks+0xaad/0x1e00 fs/ext4/inode.c:645
ext4_getblk+0x19f/0x710 fs/ext4/inode.c:846
ext4_bread+0x2a/0x170 fs/ext4/inode.c:899
ext4_quota_write+0x21e/0x580 fs/ext4/super.c:6546
write_blk fs/quota/quota_tree.c:64 [inline]
get_free_dqblk+0x3a9/0x800 fs/quota/quota_tree.c:125
do_insert_tree+0x2b4/0x1c20 fs/quota/quota_tree.c:335
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
dq_insert_tree fs/quota/quota_tree.c:392 [inline]
qtree_write_dquot+0x3b9/0x530 fs/quota/quota_tree.c:411
v2_write_dquot+0x11c/0x190 fs/quota/quota_v2.c:358
dquot_acquire+0x34d/0x680 fs/quota/dquot.c:470
ext4_acquire_dquot+0x2e6/0x400 fs/ext4/super.c:6180
dqget+0x74e/0xe30 fs/quota/dquot.c:984
__dquot_initialize+0x2d9/0xe10 fs/quota/dquot.c:1562
ext4_process_orphan+0x57/0x2d0 fs/ext4/orphan.c:329
ext4_orphan_cleanup+0x9d9/0x1240 fs/ext4/orphan.c:474
ext4_fill_super+0x98de/0xa110 fs/ext4/super.c:4966
mount_bdev+0x2c9/0x3f0 fs/super.c:1387
legacy_get_tree+0xeb/0x180 fs/fs_context.c:611
vfs_get_tree+0x88/0x270 fs/super.c:1517
do_new_mount+0x2ba/0xb40 fs/namespace.c:3005
do_mount fs/namespace.c:3348 [inline]
__do_sys_mount fs/namespace.c:3556 [inline]
__se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3533
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fe14c8564aa
Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb a6 e8 de 09 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe14add4ef8 EFLAGS: 00000202 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007fe14add4f80 RCX: 00007fe14c8564aa
RDX: 0000000020000100 RSI: 0000000020000200 RDI: 00007fe14add4f40
RBP: 0000000020000100 R08: 00007fe14add4f80 R09: 0000000002000010
R10: 0000000002000010 R11: 0000000000000202 R12: 0000000020000200
R13: 00007fe14add4f40 R14: 00000000000004f3 R15: 0000000020000640
</TASK>
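
To make the reported inversion concrete: lockdep is complaining that &pool->lock is HARDIRQ-safe (it is taken from hard-IRQ context via hrtimer_run_queues -> queue_work_on), while &htab->buckets[i].lock is HARDIRQ-unsafe (sock_hash_delete_elem takes it with spin_lock_bh, i.e. with hard IRQs still enabled). The BPF tracepoint program creates a pool->lock -> bucket-lock edge, so an IRQ arriving while the bucket lock is held can spin on pool->lock forever. The sketch below is a toy userspace model of that rule, not kernel code; all names simply mirror the locks in the trace:

```python
class LockState:
    """Toy per-lock usage record, loosely modelling lockdep's usage bits."""
    def __init__(self, name):
        self.name = name
        self.in_hardirq = False        # ever acquired in hard-IRQ context
        self.hardirq_enabled = False   # ever acquired with hard IRQs enabled

def check_new_dependency(held, acquired):
    """Flag the pattern this report shows: a HARDIRQ-safe lock gaining a
    dependency on a HARDIRQ-unsafe lock, which permits the deadlock in the
    'Possible interrupt unsafe locking scenario' diagram above."""
    if held.in_hardirq and acquired.hardirq_enabled:
        return ("HARDIRQ-safe -> HARDIRQ-unsafe lock order: "
                f"{held.name} -> {acquired.name}")
    return None

pool_lock = LockState("&pool->lock")
pool_lock.in_hardirq = True            # taken from the timer interrupt path

bucket_lock = LockState("&htab->buckets[i].lock")
bucket_lock.hardirq_enabled = True     # taken via spin_lock_bh (IRQs on)

print(check_new_dependency(pool_lock, bucket_lock))
```

The real fix direction implied by the trace is to break one side of the edge, e.g. make the sock_map bucket lock IRQ-safe or keep BPF programs that touch it from running under pool->lock.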


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot

Mar 15, 2024, 7:12:18 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: d7543167affd Linux 6.1.82
git tree: linux-6.1.y
console output: https://syzkaller.appspot.com/x/log.txt?x=12a12ebe180000
kernel config: https://syzkaller.appspot.com/x/.config?x=59059e181681c079
dashboard link: https://syzkaller.appspot.com/bug?extid=6cd97514181bfcd46a5c
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=12ef3585180000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=17eb5ef1180000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/a2421980b49a/disk-d7543167.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/52a6bb44161f/vmlinux-d7543167.xz
kernel image: https://storage.googleapis.com/syzbot-assets/9b3723bf43a9/bzImage-d7543167.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6cd975...@syzkaller.appspotmail.com

=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
6.1.82-syzkaller #0 Not tainted
-----------------------------------------------------
syz-executor356/3551 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff88807674d020 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932

and this task is already holding:
ffff8880b983a258 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x58c/0xf90
which would create a new lock dependency:
(&pool->lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+...}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
(&pool->lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x58c/0xf90
queue_work_on+0x14b/0x250 kernel/workqueue.c:1548
hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
run_local_timers kernel/time/timer.c:1815 [inline]
update_process_times+0x7b/0x1b0 kernel/time/timer.c:1838
tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
__sysvec_apic_timer_interrupt+0x156/0x580 arch/x86/kernel/apic/apic.c:1112
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1106
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
trace_event_eval_update+0x38e/0xfc0 kernel/trace/trace_events.c:2788
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307

to a HARDIRQ-irq-unsafe lock:
(&htab->buckets[i].lock){+...}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:600 [inline]
bpf_prog_run include/linux/filter.h:607 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run4+0x253/0x470 kernel/trace/bpf_trace.c:2314
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:177
trace_mm_page_alloc include/trace/events/kmem.h:177 [inline]
__alloc_pages+0x717/0x770 mm/page_alloc.c:5567
__get_free_pages mm/page_alloc.c:5595 [inline]
get_zeroed_page+0x13/0x30 mm/page_alloc.c:5604
__pud_alloc_one include/asm-generic/pgalloc.h:156 [inline]
pud_alloc_one include/asm-generic/pgalloc.h:171 [inline]
__pud_alloc+0x8b/0x220 mm/memory.c:5447
pud_alloc include/linux/mm.h:2333 [inline]
__handle_mm_fault mm/memory.c:5085 [inline]
handle_mm_fault+0x3287/0x5340 mm/memory.c:5276
do_user_addr_fault arch/x86/mm/fault.c:1380 [inline]
handle_page_fault arch/x86/mm/fault.c:1471 [inline]
exc_page_fault+0x26f/0x660 arch/x86/mm/fault.c:1527
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570

other info that might help us debug this:

Possible interrupt unsafe locking scenario:

CPU0 CPU1
---- ----
lock(&htab->buckets[i].lock);
local_irq_disable();
lock(&pool->lock);
lock(&htab->buckets[i].lock);
<Interrupt>
lock(&pool->lock);

*** DEADLOCK ***

5 locks held by syz-executor356/3551:
#0: ffffffff8d1713a8 (tracepoints_mutex){+.+.}-{3:3}, at: tracepoint_probe_unregister+0x2e/0x980 kernel/tracepoint.c:548
#1: ffffffff8d12ff38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
#1: ffffffff8d12ff38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3b0/0x8a0 kernel/rcu/tree_exp.h:949
#2: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:319 [inline]
#2: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:760 [inline]
#2: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: __queue_work+0xe5/0xf90 kernel/workqueue.c:1443
#3: ffff8880b983a258 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x58c/0xf90
#4: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:319 [inline]
#4: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:760 [inline]
#4: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
#4: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run4+0x16a/0x470 kernel/trace/bpf_trace.c:2314

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&pool->lock){-.-.}-{2:2} {
IN-HARDIRQ-W at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x58c/0xf90
queue_work_on+0x14b/0x250 kernel/workqueue.c:1548
hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
run_local_timers kernel/time/timer.c:1815 [inline]
update_process_times+0x7b/0x1b0 kernel/time/timer.c:1838
tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
__sysvec_apic_timer_interrupt+0x156/0x580 arch/x86/kernel/apic/apic.c:1112
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1106
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
trace_event_eval_update+0x38e/0xfc0 kernel/trace/trace_events.c:2788
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
IN-SOFTIRQ-W at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x58c/0xf90
call_timer_fn+0x1ad/0x6b0 kernel/time/timer.c:1474
expire_timers kernel/time/timer.c:1514 [inline]
__run_timers+0x6a8/0x890 kernel/time/timer.c:1790
run_timer_softirq+0x63/0xf0 kernel/time/timer.c:1803
__do_softirq+0x2e9/0xa4c kernel/softirq.c:571
invoke_softirq kernel/softirq.c:445 [inline]
__irq_exit_rcu+0x155/0x240 kernel/softirq.c:650
irq_exit_rcu+0x5/0x20 kernel/softirq.c:662
sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1106
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
check_kcov_mode kernel/kcov.c:175 [inline]
write_comp_data kernel/kcov.c:236 [inline]
__sanitizer_cov_trace_const_cmp1+0x30/0x80 kernel/kcov.c:290
trace_event_eval_update+0x6f3/0xfc0 kernel/trace/trace_events.c:2788
process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
kthread+0x28d/0x320 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
INITIAL USE at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
pwq_adjust_max_active+0x14e/0x550 kernel/workqueue.c:3765
link_pwq kernel/workqueue.c:3831 [inline]
alloc_and_link_pwqs kernel/workqueue.c:4227 [inline]
alloc_workqueue+0xbf8/0x1440 kernel/workqueue.c:4349
workqueue_init_early+0x71a/0x927 kernel/workqueue.c:6055
start_kernel+0x208/0x53f init/main.c:1029
secondary_startup_64_no_verify+0xcf/0xdb
}
... key at: [<ffffffff8fe9fce0>] init_worker_pool.__key+0x0/0x20

the dependencies between the lock to be acquired
and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+...}-{2:2} {
HARDIRQ-ON-W at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:600 [inline]
bpf_prog_run include/linux/filter.h:607 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run4+0x253/0x470 kernel/trace/bpf_trace.c:2314
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:177
trace_mm_page_alloc include/trace/events/kmem.h:177 [inline]
__alloc_pages+0x717/0x770 mm/page_alloc.c:5567
__get_free_pages mm/page_alloc.c:5595 [inline]
get_zeroed_page+0x13/0x30 mm/page_alloc.c:5604
__pud_alloc_one include/asm-generic/pgalloc.h:156 [inline]
pud_alloc_one include/asm-generic/pgalloc.h:171 [inline]
__pud_alloc+0x8b/0x220 mm/memory.c:5447
pud_alloc include/linux/mm.h:2333 [inline]
__handle_mm_fault mm/memory.c:5085 [inline]
handle_mm_fault+0x3287/0x5340 mm/memory.c:5276
do_user_addr_fault arch/x86/mm/fault.c:1380 [inline]
handle_page_fault arch/x86/mm/fault.c:1471 [inline]
exc_page_fault+0x26f/0x660 arch/x86/mm/fault.c:1527
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570
INITIAL USE at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:600 [inline]
bpf_prog_run include/linux/filter.h:607 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run4+0x253/0x470 kernel/trace/bpf_trace.c:2314
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:177
trace_mm_page_alloc include/trace/events/kmem.h:177 [inline]
__alloc_pages+0x717/0x770 mm/page_alloc.c:5567
__get_free_pages mm/page_alloc.c:5595 [inline]
get_zeroed_page+0x13/0x30 mm/page_alloc.c:5604
__pud_alloc_one include/asm-generic/pgalloc.h:156 [inline]
pud_alloc_one include/asm-generic/pgalloc.h:171 [inline]
__pud_alloc+0x8b/0x220 mm/memory.c:5447
pud_alloc include/linux/mm.h:2333 [inline]
__handle_mm_fault mm/memory.c:5085 [inline]
handle_mm_fault+0x3287/0x5340 mm/memory.c:5276
do_user_addr_fault arch/x86/mm/fault.c:1380 [inline]
handle_page_fault arch/x86/mm/fault.c:1471 [inline]
exc_page_fault+0x26f/0x660 arch/x86/mm/fault.c:1527
asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570
}
... key at: [<ffffffff920af300>] sock_hash_alloc.__key+0x0/0x20
... acquired at:
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:600 [inline]
bpf_prog_run include/linux/filter.h:607 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run4+0x253/0x470 kernel/trace/bpf_trace.c:2314
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:177
__traceiter_mm_page_alloc+0x35/0x50 include/trace/events/kmem.h:177
trace_mm_page_alloc include/trace/events/kmem.h:177 [inline]
__alloc_pages+0x717/0x770 mm/page_alloc.c:5567
__stack_depot_save+0x372/0x470 lib/stackdepot.c:474
save_stack+0x104/0x1e0 mm/page_owner.c:128
__set_page_owner+0x26/0x390 mm/page_owner.c:190
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x18d/0x1b0 mm/page_alloc.c:2513
prep_new_page mm/page_alloc.c:2520 [inline]
get_page_from_freelist+0x31a1/0x3320 mm/page_alloc.c:4279
__alloc_pages+0x28d/0x770 mm/page_alloc.c:5545
__stack_depot_save+0x372/0x470 lib/stackdepot.c:474
kasan_save_stack mm/kasan/common.c:46 [inline]
kasan_set_track+0x60/0x70 mm/kasan/common.c:52
__kasan_slab_alloc+0x65/0x70 mm/kasan/common.c:328
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook+0x52/0x3a0 mm/slab.h:737
slab_alloc_node mm/slub.c:3398 [inline]
slab_alloc mm/slub.c:3406 [inline]
__kmem_cache_alloc_lru mm/slub.c:3413 [inline]
kmem_cache_alloc+0x10c/0x2d0 mm/slub.c:3422
kmem_cache_zalloc include/linux/slab.h:682 [inline]
fill_pool lib/debugobjects.c:168 [inline]
debug_objects_fill_pool+0x7fd/0xa10 lib/debugobjects.c:606
debug_object_activate+0x32/0x4e0 lib/debugobjects.c:693
debug_work_activate kernel/workqueue.c:510 [inline]
__queue_work+0xb3a/0xf90 kernel/workqueue.c:1519
queue_work_on+0x14b/0x250 kernel/workqueue.c:1548
queue_work include/linux/workqueue.h:512 [inline]
synchronize_rcu_expedited_queue_work kernel/rcu/tree_exp.h:517 [inline]
synchronize_rcu_expedited+0x5fd/0x8a0 kernel/rcu/tree_exp.h:959
synchronize_rcu+0x11c/0x3f0 kernel/rcu/tree.c:3575
tp_rcu_cond_sync kernel/tracepoint.c:63 [inline]
tracepoint_remove_func kernel/tracepoint.c:439 [inline]
tracepoint_probe_unregister+0x7ef/0x980 kernel/tracepoint.c:551
bpf_raw_tp_link_release+0x5f/0x80 kernel/bpf/syscall.c:3175
bpf_link_free kernel/bpf/syscall.c:2749 [inline]
bpf_link_put+0x234/0x2c0 kernel/bpf/syscall.c:2775
bpf_link_release+0x37/0x40 kernel/bpf/syscall.c:2784
__fput+0x3b7/0x890 fs/file_table.c:320
task_work_run+0x246/0x300 kernel/task_work.c:179
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xde/0x100 kernel/entry/common.c:171
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
__syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
syscall_exit_to_user_mode+0x60/0x270 kernel/entry/common.c:297
do_syscall_64+0x49/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x63/0xcd


stack backtrace:
CPU: 0 PID: 3551 Comm: syz-executor356 Not tainted 6.1.82-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_bad_irq_dependency kernel/locking/lockdep.c:2604 [inline]
check_irq_usage kernel/locking/lockdep.c:2843 [inline]
check_prev_add kernel/locking/lockdep.c:3094 [inline]
check_prevs_add kernel/locking/lockdep.c:3209 [inline]
validate_chain+0x4d16/0x5950 kernel/locking/lockdep.c:3825
__lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
__bpf_prog_run include/linux/filter.h:600 [inline]
bpf_prog_run include/linux/filter.h:607 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
bpf_trace_run4+0x253/0x470 kernel/trace/bpf_trace.c:2314
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:177
__traceiter_mm_page_alloc+0x35/0x50 include/trace/events/kmem.h:177
trace_mm_page_alloc include/trace/events/kmem.h:177 [inline]
__alloc_pages+0x717/0x770 mm/page_alloc.c:5567
__stack_depot_save+0x372/0x470 lib/stackdepot.c:474
save_stack+0x104/0x1e0 mm/page_owner.c:128
__set_page_owner+0x26/0x390 mm/page_owner.c:190
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x18d/0x1b0 mm/page_alloc.c:2513
prep_new_page mm/page_alloc.c:2520 [inline]
get_page_from_freelist+0x31a1/0x3320 mm/page_alloc.c:4279
__alloc_pages+0x28d/0x770 mm/page_alloc.c:5545
__stack_depot_save+0x372/0x470 lib/stackdepot.c:474
kasan_save_stack mm/kasan/common.c:46 [inline]
kasan_set_track+0x60/0x70 mm/kasan/common.c:52
__kasan_slab_alloc+0x65/0x70 mm/kasan/common.c:328
kasan_slab_alloc include/linux/kasan.h:201 [inline]
slab_post_alloc_hook+0x52/0x3a0 mm/slab.h:737
slab_alloc_node mm/slub.c:3398 [inline]
slab_alloc mm/slub.c:3406 [inline]
__kmem_cache_alloc_lru mm/slub.c:3413 [inline]
kmem_cache_alloc+0x10c/0x2d0 mm/slub.c:3422
kmem_cache_zalloc include/linux/slab.h:682 [inline]
fill_pool lib/debugobjects.c:168 [inline]
debug_objects_fill_pool+0x7fd/0xa10 lib/debugobjects.c:606
debug_object_activate+0x32/0x4e0 lib/debugobjects.c:693
debug_work_activate kernel/workqueue.c:510 [inline]
__queue_work+0xb3a/0xf90 kernel/workqueue.c:1519
queue_work_on+0x14b/0x250 kernel/workqueue.c:1548
queue_work include/linux/workqueue.h:512 [inline]
synchronize_rcu_expedited_queue_work kernel/rcu/tree_exp.h:517 [inline]
synchronize_rcu_expedited+0x5fd/0x8a0 kernel/rcu/tree_exp.h:959
synchronize_rcu+0x11c/0x3f0 kernel/rcu/tree.c:3575
tp_rcu_cond_sync kernel/tracepoint.c:63 [inline]
tracepoint_remove_func kernel/tracepoint.c:439 [inline]
tracepoint_probe_unregister+0x7ef/0x980 kernel/tracepoint.c:551
bpf_raw_tp_link_release+0x5f/0x80 kernel/bpf/syscall.c:3175
bpf_link_free kernel/bpf/syscall.c:2749 [inline]
bpf_link_put+0x234/0x2c0 kernel/bpf/syscall.c:2775
bpf_link_release+0x37/0x40 kernel/bpf/syscall.c:2784
__fput+0x3b7/0x890 fs/file_table.c:320
task_work_run+0x246/0x300 kernel/task_work.c:179
resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
exit_to_user_mode_loop+0xde/0x100 kernel/entry/common.c:171
exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
__syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
syscall_exit_to_user_mode+0x60/0x270 kernel/entry/common.c:297
do_syscall_64+0x49/0xb0 arch/x86/entry/common.c:87
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7ff2ac6cb490
Code: ff f7 d8 64 89 02 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 80 3d f1 8b 07 00 00 74 17 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 48 c3 0f 1f 80 00 00 00 00 48 83 ec 18 89 7c
RSP: 002b:00007ffe702ea348 EFLAGS: 00000202 ORIG_RAX: 0000000000000003
RAX: 0000000000000000 RBX: 0000000000000006 RCX: 00007ff2ac6cb490
RDX: 0000000000000010 RSI: 0000000020000080 RDI: 0000000000000005
RBP: 0000000000000000 R08: 000055555712d610 R09: 000055555712d610
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
</TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

syzbot
Mar 16, 2024, 10:32:18 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: b95c01af2113 Linux 5.15.152
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=100751c9180000
kernel config: https://syzkaller.appspot.com/x/.config?x=b26cb65e5b8ad5c7
dashboard link: https://syzkaller.appspot.com/bug?extid=df17c7f1816a27780d00
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=12961546180000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1322324e180000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/2fc98856fcae/disk-b95c01af.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3186db0dfe08/vmlinux-b95c01af.xz
kernel image: https://storage.googleapis.com/syzbot-assets/0df136a3e808/bzImage-b95c01af.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+df17c7...@syzkaller.appspotmail.com

=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
5.15.152-syzkaller #0 Not tainted
-----------------------------------------------------
kworker/0:3/2923 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff88807ab8d820 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937

and this task is already holding:
ffff8880b9a39b18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
which would create a new lock dependency:
(&pool->lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+...}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
(&pool->lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x56d/0xd00
queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
run_local_timers kernel/time/timer.c:1762 [inline]
update_process_times+0xca/0x200 kernel/time/timer.c:1787
tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
__sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
default_idle+0xb/0x10 arch/x86/kernel/process.c:717
default_idle_call+0x81/0xc0 kernel/sched/idle.c:112
cpuidle_idle_call kernel/sched/idle.c:194 [inline]
do_idle+0x271/0x670 kernel/sched/idle.c:306
cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:403
start_kernel+0x48c/0x535 init/main.c:1137
secondary_startup_64_no_verify+0xb1/0xbb

to a HARDIRQ-irq-unsafe lock:
(&htab->buckets[i].lock){+...}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298

other info that might help us debug this:

Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(&pool->lock);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(&pool->lock);

*** DEADLOCK ***

6 locks held by kworker/0:3/2923:
#0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
#1: ffffc9000c00fd20 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
#2: ffffffff8c923ce8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#2: ffffffff8c923ce8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x280/0x740 kernel/rcu/tree_exp.h:845
#3: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268
#4: ffff8880b9a39b18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
#5: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&pool->lock){-.-.}-{2:2} {
IN-HARDIRQ-W at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x56d/0xd00
queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
run_local_timers kernel/time/timer.c:1762 [inline]
update_process_times+0xca/0x200 kernel/time/timer.c:1787
tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
__sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
-> (&htab->buckets[i].lock){+...}-{2:2} {
HARDIRQ-ON-W at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
INITIAL USE at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
}
... key at: [<ffffffff91789700>] sock_hash_alloc.__key+0x0/0x20
... acquired at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
bpf_prog_2c29ac5cdc6b1842+0x3a/0x400
bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
__bpf_prog_run include/linux/filter.h:625 [inline]
bpf_prog_run include/linux/filter.h:632 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run3+0x1d1/0x380 kernel/trace/bpf_trace.c:1918
trace_workqueue_queue_work include/trace/events/workqueue.h:23 [inline]
__queue_work+0xc99/0xd00 kernel/workqueue.c:1512
queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
queue_work include/linux/workqueue.h:512 [inline]
synchronize_rcu_expedited+0x4eb/0x740 kernel/rcu/tree_exp.h:856
synchronize_rcu+0x107/0x1a0 kernel/rcu/tree.c:3798
sock_hash_free+0x6e8/0x780 net/core/sock_map.c:1177
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298


stack backtrace:
CPU: 0 PID: 2923 Comm: kworker/0:3 Not tainted 5.15.152-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Workqueue: events bpf_map_free_deferred
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_bad_irq_dependency kernel/locking/lockdep.c:2567 [inline]
check_irq_usage kernel/locking/lockdep.c:2806 [inline]
check_prev_add kernel/locking/lockdep.c:3057 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x4d01/0x5930 kernel/locking/lockdep.c:3788
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
bpf_prog_2c29ac5cdc6b1842+0x3a/0x400
bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
__bpf_prog_run include/linux/filter.h:625 [inline]
bpf_prog_run include/linux/filter.h:632 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run3+0x1d1/0x380 kernel/trace/bpf_trace.c:1918
trace_workqueue_queue_work include/trace/events/workqueue.h:23 [inline]
__queue_work+0xc99/0xd00 kernel/workqueue.c:1512
queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
queue_work include/linux/workqueue.h:512 [inline]
synchronize_rcu_expedited+0x4eb/0x740 kernel/rcu/tree_exp.h:856
synchronize_rcu+0x107/0x1a0 kernel/rcu/tree.c:3798
sock_hash_free+0x6e8/0x780 net/core/sock_map.c:1177
process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
kthread+0x3f6/0x4f0 kernel/kthread.c:319
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
</TASK>

