Hello,
syzbot found the following issue on:
HEAD commit: 574362648507 Linux 5.15.151
git tree: linux-5.15.y
console output: https://syzkaller.appspot.com/x/log.txt?x=159c0a31180000
kernel config: https://syzkaller.appspot.com/x/.config?x=6c9a42d9e3519ca9
dashboard link: https://syzkaller.appspot.com/bug?extid=df17c7f1816a27780d00
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/f00d4062000b/disk-57436264.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/3a74c2b6ca62/vmlinux-57436264.xz
kernel image: https://storage.googleapis.com/syzbot-assets/93bd706dc219/bzImage-57436264.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+df17c7...@syzkaller.appspotmail.com
EXT4-fs error (device loop1): ext4_do_update_inode:5160: inode #3: comm syz-executor.1: corrupted inode contents
EXT4-fs error (device loop1): ext4_dirty_inode:5993: inode #3: comm syz-executor.1: mark_inode_dirty error
=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
5.15.151-syzkaller #0 Not tainted
-----------------------------------------------------
syz-executor.1/19910 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff8880835e5820 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
and this task is already holding:
ffff8880b9a39f18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
which would create a new lock dependency:
(&pool->lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+.-.}-{2:2}
but this new dependency connects a HARDIRQ-irq-safe lock:
(&pool->lock){-.-.}-{2:2}
... which became HARDIRQ-irq-safe at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x56d/0xd00
queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
run_local_timers kernel/time/timer.c:1762 [inline]
update_process_times+0xca/0x200 kernel/time/timer.c:1787
tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
__sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
unwind_next_frame+0x146/0x1fa0 arch/x86/kernel/unwind_orc.c:448
arch_stack_walk+0x10d/0x140 arch/x86/kernel/stacktrace.c:25
stack_trace_save+0x113/0x1c0 kernel/stacktrace.c:122
kasan_save_stack mm/kasan/common.c:38 [inline]
kasan_set_track mm/kasan/common.c:46 [inline]
set_alloc_info mm/kasan/common.c:434 [inline]
__kasan_slab_alloc+0x8e/0xc0 mm/kasan/common.c:467
kasan_slab_alloc include/linux/kasan.h:254 [inline]
slab_post_alloc_hook+0x53/0x380 mm/slab.h:519
slab_alloc_node mm/slub.c:3220 [inline]
slab_alloc mm/slub.c:3228 [inline]
kmem_cache_alloc+0xf3/0x280 mm/slub.c:3233
__d_alloc+0x2a/0x700 fs/dcache.c:1745
d_alloc fs/dcache.c:1824 [inline]
d_alloc_parallel+0xca/0x1390 fs/dcache.c:2576
__lookup_slow+0x111/0x3d0 fs/namei.c:1648
lookup_one_len+0x187/0x2d0 fs/namei.c:2718
start_creating+0x111/0x200 fs/tracefs/inode.c:426
tracefs_create_file+0x9c/0x5d0 fs/tracefs/inode.c:493
trace_create_file+0x2e/0x60 kernel/trace/trace.c:8991
event_create_dir+0x9b4/0xdf0 kernel/trace/trace_events.c:2435
__trace_early_add_event_dirs+0x6e/0x1c0 kernel/trace/trace_events.c:3491
early_event_add_tracer+0x52/0x70 kernel/trace/trace_events.c:3664
event_trace_init+0x100/0x180 kernel/trace/trace_events.c:3824
tracer_init_tracefs+0x153/0x2a2 kernel/trace/trace.c:9897
do_one_initcall+0x22b/0x7a0 init/main.c:1299
do_initcall_level+0x157/0x207 init/main.c:1372
do_initcalls+0x49/0x86 init/main.c:1388
kernel_init_freeable+0x425/0x5b5 init/main.c:1612
kernel_init+0x19/0x290 init/main.c:1503
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
to a HARDIRQ-irq-unsafe lock:
(&htab->buckets[i].lock){+.-.}-{2:2}
... which became HARDIRQ-irq-unsafe at:
...
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
map_create+0x185e/0x2070 kernel/bpf/syscall.c:933
__sys_bpf+0x276/0x670 kernel/bpf/syscall.c:4611
__do_sys_bpf kernel/bpf/syscall.c:4733 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4731 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4731
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
other info that might help us debug this:
Possible interrupt unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(&pool->lock);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(&pool->lock);
*** DEADLOCK ***
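(Not part of the lockdep output: the scenario above can be restated as the invariant lockdep enforces. A lock ever taken in hard-IRQ context, like &pool->lock, must never come to depend on a lock taken with hard IRQs enabled, like the spin_lock_bh-protected &htab->buckets[i].lock. The following is a toy sketch of that check with made-up names and data, not the kernel's lockdep implementation.)

```python
# Toy model of the HARDIRQ-safe -> HARDIRQ-unsafe check lockdep performs.
# All names and event tuples below are illustrative, not kernel code.

def find_bad_dependencies(acquisitions):
    """acquisitions: list of (lock, in_hardirq, irqs_enabled, held_locks)."""
    hardirq_safe = set()    # locks ever acquired in hard-IRQ context
    hardirq_unsafe = set()  # locks ever acquired with hard IRQs enabled
    deps = set()            # (held, acquired) dependency edges

    for lock, in_hardirq, irqs_enabled, held in acquisitions:
        if in_hardirq:
            hardirq_safe.add(lock)
        elif irqs_enabled:
            hardirq_unsafe.add(lock)
        for h in held:
            deps.add((h, lock))

    # A safe -> unsafe edge is exactly what the report above flags.
    return [(h, l) for (h, l) in deps
            if h in hardirq_safe and l in hardirq_unsafe]

# Events mirroring the report: pool->lock taken from the timer interrupt,
# bucket->lock taken via spin_lock_bh with IRQs on (sock_hash_free), then
# bucket->lock taken while pool->lock is held (the BPF tracepoint program
# firing inside __queue_work).
events = [
    ("pool->lock", True, False, []),
    ("bucket->lock", False, True, []),
    ("bucket->lock", False, False, ["pool->lock"]),
]
print(find_bad_dependencies(events))  # -> [('pool->lock', 'bucket->lock')]
```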
8 locks held by syz-executor.1/19910:
#0: ffff888073dee0e0 (&type->s_umount_key#28/1){+.+.}-{3:3}, at: alloc_super+0x210/0x940 fs/super.c:229
#1: ffff888082e08428 (&dquot->dq_lock){+.+.}-{3:3}, at: dquot_acquire+0x64/0x680 fs/quota/dquot.c:458
#2: ffff888073dee208 (&s->s_dquot.dqio_sem){++++}-{3:3}, at: v2_write_dquot+0x9b/0x190 fs/quota/quota_v2.c:354
#3: ffff888082d72a58 (&ei->i_data_sem/2){++++}-{3:3}, at: ext4_map_blocks+0x9e0/0x1e00 fs/ext4/inode.c:638
#4: ffff88801a43b760 (&fq->mq_flush_lock){..-.}-{2:2}, at: spin_lock_irq include/linux/spinlock.h:388 [inline]
#4: ffff88801a43b760 (&fq->mq_flush_lock){..-.}-{2:2}, at: blk_insert_flush+0x4d3/0x5e0 block/blk-flush.c:441
#5: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268
#6: ffff8880b9a39f18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
#7: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268
the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&pool->lock){-.-.}-{2:2} {
IN-HARDIRQ-W at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x56d/0xd00
queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
run_local_timers kernel/time/timer.c:1762 [inline]
update_process_times+0xca/0x200 kernel/time/timer.c:1787
tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
__sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
unwind_next_frame+0x146/0x1fa0 arch/x86/kernel/unwind_orc.c:448
arch_stack_walk+0x10d/0x140 arch/x86/kernel/stacktrace.c:25
stack_trace_save+0x113/0x1c0 kernel/stacktrace.c:122
kasan_save_stack mm/kasan/common.c:38 [inline]
kasan_set_track mm/kasan/common.c:46 [inline]
set_alloc_info mm/kasan/common.c:434 [inline]
__kasan_slab_alloc+0x8e/0xc0 mm/kasan/common.c:467
kasan_slab_alloc include/linux/kasan.h:254 [inline]
slab_post_alloc_hook+0x53/0x380 mm/slab.h:519
slab_alloc_node mm/slub.c:3220 [inline]
slab_alloc mm/slub.c:3228 [inline]
kmem_cache_alloc+0xf3/0x280 mm/slub.c:3233
__d_alloc+0x2a/0x700 fs/dcache.c:1745
d_alloc fs/dcache.c:1824 [inline]
d_alloc_parallel+0xca/0x1390 fs/dcache.c:2576
__lookup_slow+0x111/0x3d0 fs/namei.c:1648
lookup_one_len+0x187/0x2d0 fs/namei.c:2718
start_creating+0x111/0x200 fs/tracefs/inode.c:426
tracefs_create_file+0x9c/0x5d0 fs/tracefs/inode.c:493
trace_create_file+0x2e/0x60 kernel/trace/trace.c:8991
event_create_dir+0x9b4/0xdf0 kernel/trace/trace_events.c:2435
__trace_early_add_event_dirs+0x6e/0x1c0 kernel/trace/trace_events.c:3491
early_event_add_tracer+0x52/0x70 kernel/trace/trace_events.c:3664
event_trace_init+0x100/0x180 kernel/trace/trace_events.c:3824
tracer_init_tracefs+0x153/0x2a2 kernel/trace/trace.c:9897
do_one_initcall+0x22b/0x7a0 init/main.c:1299
do_initcall_level+0x157/0x207 init/main.c:1372
do_initcalls+0x49/0x86 init/main.c:1388
kernel_init_freeable+0x425/0x5b5 init/main.c:1612
kernel_init+0x19/0x290 init/main.c:1503
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
IN-SOFTIRQ-W at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
__queue_work+0x56d/0xd00
call_timer_fn+0x16d/0x560 kernel/time/timer.c:1421
expire_timers kernel/time/timer.c:1461 [inline]
__run_timers+0x6a8/0x890 kernel/time/timer.c:1737
__do_softirq+0x3b3/0x93a kernel/softirq.c:558
invoke_softirq kernel/softirq.c:432 [inline]
__irq_exit_rcu+0x155/0x240 kernel/softirq.c:637
irq_exit_rcu+0x5/0x20 kernel/softirq.c:649
sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1096
asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
default_idle+0xb/0x10 arch/x86/kernel/process.c:717
default_idle_call+0x81/0xc0 kernel/sched/idle.c:112
cpuidle_idle_call kernel/sched/idle.c:194 [inline]
do_idle+0x271/0x670 kernel/sched/idle.c:306
cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:403
start_kernel+0x48c/0x535 init/main.c:1137
secondary_startup_64_no_verify+0xb1/0xbb
INITIAL USE at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
pwq_adjust_max_active+0x14e/0x550 kernel/workqueue.c:3783
link_pwq kernel/workqueue.c:3849 [inline]
alloc_and_link_pwqs kernel/workqueue.c:4243 [inline]
alloc_workqueue+0xbb4/0x13f0 kernel/workqueue.c:4365
workqueue_init_early+0x7b2/0x96c kernel/workqueue.c:6099
start_kernel+0x1fa/0x535 init/main.c:1024
secondary_startup_64_no_verify+0xb1/0xbb
}
... key at: [<ffffffff8f5d8c60>] init_worker_pool.__key+0x0/0x20
the dependencies between the lock to be acquired
and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+.-.}-{2:2} {
HARDIRQ-ON-W at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
map_create+0x185e/0x2070 kernel/bpf/syscall.c:933
__sys_bpf+0x276/0x670 kernel/bpf/syscall.c:4611
__do_sys_bpf kernel/bpf/syscall.c:4733 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4731 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4731
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
IN-SOFTIRQ-W at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
bpf_prog_2c29ac5cdc6b1842+0x3a/0xa8c
bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
__bpf_prog_run include/linux/filter.h:625 [inline]
bpf_prog_run include/linux/filter.h:632 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run4+0x1ea/0x390 kernel/trace/bpf_trace.c:1919
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:201
trace_mm_page_alloc include/trace/events/kmem.h:201 [inline]
__alloc_pages+0x6e0/0x700 mm/page_alloc.c:5443
skb_page_frag_refill+0x220/0x4b0 net/core/sock.c:2650
add_recvbuf_mergeable drivers/net/virtio_net.c:1329 [inline]
try_fill_recv+0x48f/0x17c0 drivers/net/virtio_net.c:1370
virtnet_receive drivers/net/virtio_net.c:1484 [inline]
virtnet_poll+0x83b/0x1270 drivers/net/virtio_net.c:1585
__napi_poll+0xc7/0x440 net/core/dev.c:7035
napi_poll net/core/dev.c:7102 [inline]
net_rx_action+0x617/0xda0 net/core/dev.c:7189
__do_softirq+0x3b3/0x93a kernel/softirq.c:558
invoke_softirq kernel/softirq.c:432 [inline]
__irq_exit_rcu+0x155/0x240 kernel/softirq.c:637
irq_exit_rcu+0x5/0x20 kernel/softirq.c:649
common_interrupt+0xa4/0xc0 arch/x86/kernel/irq.c:240
asm_common_interrupt+0x22/0x40 arch/x86/include/asm/idtentry.h:629
__sanitizer_cov_trace_pc+0x4/0x60 kernel/kcov.c:193
string_nocheck lib/vsprintf.c:645 [inline]
string+0x1f8/0x2b0 lib/vsprintf.c:724
vsnprintf+0x11fc/0x1c70 lib/vsprintf.c:2811
tomoyo_supervisor+0x145/0x12c0 security/tomoyo/common.c:2069
tomoyo_audit_path_log security/tomoyo/file.c:168 [inline]
tomoyo_path_permission+0x243/0x360 security/tomoyo/file.c:587
tomoyo_path_perm+0x436/0x6b0 security/tomoyo/file.c:838
tomoyo_path_unlink+0xcc/0x100 security/tomoyo/tomoyo.c:149
security_path_unlink+0xd7/0x130 security/security.c:1170
do_unlinkat+0x3dd/0x950 fs/namei.c:4344
__do_sys_unlink fs/namei.c:4396 [inline]
__se_sys_unlink fs/namei.c:4394 [inline]
__x64_sys_unlink+0x45/0x50 fs/namei.c:4394
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
INITIAL USE at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
map_create+0x185e/0x2070 kernel/bpf/syscall.c:933
__sys_bpf+0x276/0x670 kernel/bpf/syscall.c:4611
__do_sys_bpf kernel/bpf/syscall.c:4733 [inline]
__se_sys_bpf kernel/bpf/syscall.c:4731 [inline]
__x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4731
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
}
... key at: [<ffffffff91789700>] sock_hash_alloc.__key+0x0/0x20
... acquired at:
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
bpf_prog_2c29ac5cdc6b1842+0x3a/0xa8c
bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
__bpf_prog_run include/linux/filter.h:625 [inline]
bpf_prog_run include/linux/filter.h:632 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run4+0x1ea/0x390 kernel/trace/bpf_trace.c:1919
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:201
trace_mm_page_alloc include/trace/events/kmem.h:201 [inline]
__alloc_pages+0x6e0/0x700 mm/page_alloc.c:5443
stack_depot_save+0x319/0x440 lib/stackdepot.c:302
save_stack+0x104/0x1e0 mm/page_owner.c:120
__set_page_owner+0x37/0x300 mm/page_owner.c:181
prep_new_page mm/page_alloc.c:2426 [inline]
get_page_from_freelist+0x322a/0x33c0 mm/page_alloc.c:4159
__alloc_pages+0x272/0x700 mm/page_alloc.c:5421
stack_depot_save+0x319/0x440 lib/stackdepot.c:302
kasan_save_stack+0x4d/0x60 mm/kasan/common.c:40
kasan_record_aux_stack+0xba/0x100 mm/kasan/generic.c:348
insert_work+0x54/0x3e0 kernel/workqueue.c:1366
__queue_work+0x963/0xd00 kernel/workqueue.c:1532
mod_delayed_work_on+0x101/0x250 kernel/workqueue.c:1753
kblockd_mod_delayed_work_on+0x25/0x30 block/blk-core.c:1636
blk_flush_queue_rq block/blk-flush.c:136 [inline]
blk_flush_complete_seq+0x6f9/0xce0 block/blk-flush.c:191
blk_insert_flush+0x4e7/0x5e0 block/blk-flush.c:442
blk_mq_submit_bio+0x161b/0x1c40 block/blk-mq.c:2258
__submit_bio+0x813/0x850 block/blk-core.c:917
__submit_bio_noacct_mq block/blk-core.c:997 [inline]
submit_bio_noacct+0x955/0xb30 block/blk-core.c:1027
submit_bio+0x2dd/0x560 block/blk-core.c:1089
submit_bh fs/buffer.c:3062 [inline]
__sync_dirty_buffer+0x245/0x380 fs/buffer.c:3157
ext4_commit_super+0x323/0x430 fs/ext4/super.c:5512
ext4_handle_error+0x52d/0x7a0 fs/ext4/super.c:658
__ext4_error_inode+0x236/0x400 fs/ext4/super.c:786
__ext4_mark_inode_dirty+0x207/0x860 fs/ext4/inode.c:5967
ext4_dirty_inode+0xbf/0x100 fs/ext4/inode.c:5993
__mark_inode_dirty+0x2fd/0xd60 fs/fs-writeback.c:2464
mark_inode_dirty_sync include/linux/fs.h:2443 [inline]
dquot_alloc_space_nofail include/linux/quotaops.h:303 [inline]
dquot_alloc_block_nofail include/linux/quotaops.h:329 [inline]
ext4_mb_new_blocks+0x20d8/0x4d50 fs/ext4/mballoc.c:5666
ext4_ext_map_blocks+0x1b0a/0x7690 fs/ext4/extents.c:4316
ext4_map_blocks+0xaad/0x1e00 fs/ext4/inode.c:645
ext4_getblk+0x19f/0x710 fs/ext4/inode.c:846
ext4_bread+0x2a/0x170 fs/ext4/inode.c:899
ext4_quota_write+0x21e/0x580 fs/ext4/super.c:6546
write_blk fs/quota/quota_tree.c:64 [inline]
get_free_dqblk+0x3a9/0x800 fs/quota/quota_tree.c:125
do_insert_tree+0x2b4/0x1c20 fs/quota/quota_tree.c:335
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
dq_insert_tree fs/quota/quota_tree.c:392 [inline]
qtree_write_dquot+0x3b9/0x530 fs/quota/quota_tree.c:411
v2_write_dquot+0x11c/0x190 fs/quota/quota_v2.c:358
dquot_acquire+0x34d/0x680 fs/quota/dquot.c:470
ext4_acquire_dquot+0x2e6/0x400 fs/ext4/super.c:6180
dqget+0x74e/0xe30 fs/quota/dquot.c:984
__dquot_initialize+0x2d9/0xe10 fs/quota/dquot.c:1562
ext4_process_orphan+0x57/0x2d0 fs/ext4/orphan.c:329
ext4_orphan_cleanup+0x9d9/0x1240 fs/ext4/orphan.c:474
ext4_fill_super+0x98de/0xa110 fs/ext4/super.c:4966
mount_bdev+0x2c9/0x3f0 fs/super.c:1387
legacy_get_tree+0xeb/0x180 fs/fs_context.c:611
vfs_get_tree+0x88/0x270 fs/super.c:1517
do_new_mount+0x2ba/0xb40 fs/namespace.c:3005
do_mount fs/namespace.c:3348 [inline]
__do_sys_mount fs/namespace.c:3556 [inline]
__se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3533
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
stack backtrace:
CPU: 0 PID: 19910 Comm: syz-executor.1 Not tainted 5.15.151-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
print_bad_irq_dependency kernel/locking/lockdep.c:2567 [inline]
check_irq_usage kernel/locking/lockdep.c:2806 [inline]
check_prev_add kernel/locking/lockdep.c:3057 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain+0x4d01/0x5930 kernel/locking/lockdep.c:3788
__lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
_raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
bpf_prog_2c29ac5cdc6b1842+0x3a/0xa8c
bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
__bpf_prog_run include/linux/filter.h:625 [inline]
bpf_prog_run include/linux/filter.h:632 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
bpf_trace_run4+0x1ea/0x390 kernel/trace/bpf_trace.c:1919
__bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:201
trace_mm_page_alloc include/trace/events/kmem.h:201 [inline]
__alloc_pages+0x6e0/0x700 mm/page_alloc.c:5443
stack_depot_save+0x319/0x440 lib/stackdepot.c:302
save_stack+0x104/0x1e0 mm/page_owner.c:120
__set_page_owner+0x37/0x300 mm/page_owner.c:181
prep_new_page mm/page_alloc.c:2426 [inline]
get_page_from_freelist+0x322a/0x33c0 mm/page_alloc.c:4159
__alloc_pages+0x272/0x700 mm/page_alloc.c:5421
stack_depot_save+0x319/0x440 lib/stackdepot.c:302
kasan_save_stack+0x4d/0x60 mm/kasan/common.c:40
kasan_record_aux_stack+0xba/0x100 mm/kasan/generic.c:348
insert_work+0x54/0x3e0 kernel/workqueue.c:1366
__queue_work+0x963/0xd00 kernel/workqueue.c:1532
mod_delayed_work_on+0x101/0x250 kernel/workqueue.c:1753
kblockd_mod_delayed_work_on+0x25/0x30 block/blk-core.c:1636
blk_flush_queue_rq block/blk-flush.c:136 [inline]
blk_flush_complete_seq+0x6f9/0xce0 block/blk-flush.c:191
blk_insert_flush+0x4e7/0x5e0 block/blk-flush.c:442
blk_mq_submit_bio+0x161b/0x1c40 block/blk-mq.c:2258
__submit_bio+0x813/0x850 block/blk-core.c:917
__submit_bio_noacct_mq block/blk-core.c:997 [inline]
submit_bio_noacct+0x955/0xb30 block/blk-core.c:1027
submit_bio+0x2dd/0x560 block/blk-core.c:1089
submit_bh fs/buffer.c:3062 [inline]
__sync_dirty_buffer+0x245/0x380 fs/buffer.c:3157
ext4_commit_super+0x323/0x430 fs/ext4/super.c:5512
ext4_handle_error+0x52d/0x7a0 fs/ext4/super.c:658
__ext4_error_inode+0x236/0x400 fs/ext4/super.c:786
__ext4_mark_inode_dirty+0x207/0x860 fs/ext4/inode.c:5967
ext4_dirty_inode+0xbf/0x100 fs/ext4/inode.c:5993
__mark_inode_dirty+0x2fd/0xd60 fs/fs-writeback.c:2464
mark_inode_dirty_sync include/linux/fs.h:2443 [inline]
dquot_alloc_space_nofail include/linux/quotaops.h:303 [inline]
dquot_alloc_block_nofail include/linux/quotaops.h:329 [inline]
ext4_mb_new_blocks+0x20d8/0x4d50 fs/ext4/mballoc.c:5666
ext4_ext_map_blocks+0x1b0a/0x7690 fs/ext4/extents.c:4316
ext4_map_blocks+0xaad/0x1e00 fs/ext4/inode.c:645
ext4_getblk+0x19f/0x710 fs/ext4/inode.c:846
ext4_bread+0x2a/0x170 fs/ext4/inode.c:899
ext4_quota_write+0x21e/0x580 fs/ext4/super.c:6546
write_blk fs/quota/quota_tree.c:64 [inline]
get_free_dqblk+0x3a9/0x800 fs/quota/quota_tree.c:125
do_insert_tree+0x2b4/0x1c20 fs/quota/quota_tree.c:335
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
do_insert_tree+0x6d0/0x1c20 fs/quota/quota_tree.c:366
dq_insert_tree fs/quota/quota_tree.c:392 [inline]
qtree_write_dquot+0x3b9/0x530 fs/quota/quota_tree.c:411
v2_write_dquot+0x11c/0x190 fs/quota/quota_v2.c:358
dquot_acquire+0x34d/0x680 fs/quota/dquot.c:470
ext4_acquire_dquot+0x2e6/0x400 fs/ext4/super.c:6180
dqget+0x74e/0xe30 fs/quota/dquot.c:984
__dquot_initialize+0x2d9/0xe10 fs/quota/dquot.c:1562
ext4_process_orphan+0x57/0x2d0 fs/ext4/orphan.c:329
ext4_orphan_cleanup+0x9d9/0x1240 fs/ext4/orphan.c:474
ext4_fill_super+0x98de/0xa110 fs/ext4/super.c:4966
mount_bdev+0x2c9/0x3f0 fs/super.c:1387
legacy_get_tree+0xeb/0x180 fs/fs_context.c:611
vfs_get_tree+0x88/0x270 fs/super.c:1517
do_new_mount+0x2ba/0xb40 fs/namespace.c:3005
do_mount fs/namespace.c:3348 [inline]
__do_sys_mount fs/namespace.c:3556 [inline]
__se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3533
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fe14c8564aa
Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb a6 e8 de 09 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe14add4ef8 EFLAGS: 00000202 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007fe14add4f80 RCX: 00007fe14c8564aa
RDX: 0000000020000100 RSI: 0000000020000200 RDI: 00007fe14add4f40
RBP: 0000000020000100 R08: 00007fe14add4f80 R09: 0000000002000010
R10: 0000000002000010 R11: 0000000000000202 R12: 0000000020000200
R13: 00007fe14add4f40 R14: 00000000000004f3 R15: 0000000020000640
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup