[v6.6] INFO: task hung in path_openat

syzbot
Sep 15, 2025, 12:26:31 PM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: 60a9e718726f Linux 6.6.106
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=161fe762580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=c4d0a5e21a35f5f093c5
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/eca27e056a5a/disk-60a9e718.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/bc64d4eeb7f6/vmlinux-60a9e718.xz
kernel image: https://storage.googleapis.com/syzbot-assets/a345041561ac/bzImage-60a9e718.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c4d0a5...@syzkaller.appspotmail.com
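
For reference, the tag is a standard trailer at the end of the fixing commit's message, alongside trailers such as Fixes: and Signed-off-by:. A hypothetical layout (everything below except the Reported-by line is a placeholder, not taken from this report):

    <subject line describing the fix>

    <description of the change>

    Fixes: <commit that introduced the bug>
    Reported-by: syzbot+c4d0a5...@syzkaller.appspotmail.com
    Signed-off-by: <name> <address>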

INFO: task syz.4.622:8571 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.622 state:D stack:26512 pid:8571 ppid:6759 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6832
rwsem_down_read_slowpath+0x4f8/0x840 kernel/locking/rwsem.c:1086
__down_read_common kernel/locking/rwsem.c:1250 [inline]
__down_read kernel/locking/rwsem.c:1263 [inline]
down_read+0x98/0x2e0 kernel/locking/rwsem.c:1522
inode_lock_shared include/linux/fs.h:814 [inline]
open_last_lookups fs/namei.c:3555 [inline]
path_openat+0x7b7/0x3190 fs/namei.c:3786
do_filp_open+0x1c5/0x3d0 fs/namei.c:3816
do_open_execat+0x133/0x3d0 fs/exec.c:924
bprm_execve+0x567/0x16f0 fs/exec.c:1867
do_execveat_common+0x51b/0x6c0 fs/exec.c:1998
do_execveat fs/exec.c:2083 [inline]
__do_sys_execveat fs/exec.c:2157 [inline]
__se_sys_execveat fs/exec.c:2151 [inline]
__x64_sys_execveat+0xc4/0xe0 fs/exec.c:2151
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f0911d8eba9
RSP: 002b:00007f0912ca6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000142
RAX: ffffffffffffffda RBX: 00007f0911fd6090 RCX: 00007f0911d8eba9
RDX: 0000000000000000 RSI: 0000200000000140 RDI: ffffffffffffff9c
RBP: 00007f0911e11e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f0911fd6128 R14: 00007f0911fd6090 R15: 00007fff772734d8
</TASK>

Showing all locks held in the system:
3 locks held by kworker/u4:0/11:
#0: ffff8880b8e3c458 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:558
#1: ffff8880b8e289c0 (psi_seq){-.-.}-{0:0}, at: psi_sched_switch kernel/sched/stats.h:189 [inline]
#1: ffff8880b8e289c0 (psi_seq){-.-.}-{0:0}, at: __schedule+0x20ee/0x44d0 kernel/sched/core.c:6694
#2: ffff8880b8e2a898 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x123/0x270 kernel/time/timer.c:999
1 lock held by khungtaskd/28:
#0: ffffffff8cd2fe20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2fe20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2fe20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
2 locks held by getty/5549:
#0: ffff88814cd8b0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
2 locks held by syz.4.622/8566:
2 locks held by syz.4.622/8571:
#0: ffff88802c209988 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: prepare_bprm_creds fs/exec.c:1499 [inline]
#0: ffff88802c209988 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0xca/0x16f0 fs/exec.c:1854
#1: ffff88804c0a4198 (&type->i_mutex_dir_key#26){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:814 [inline]
#1: ffff88804c0a4198 (&type->i_mutex_dir_key#26){++++}-{3:3}, at: open_last_lookups fs/namei.c:3555 [inline]
#1: ffff88804c0a4198 (&type->i_mutex_dir_key#26){++++}-{3:3}, at: path_openat+0x7b7/0x3190 fs/namei.c:3786

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
<TASK>
dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xf41/0xf80 kernel/hung_task.c:379
kthread+0x2fa/0x390 kernel/kthread.c:388
ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 8566 Comm: syz.4.622 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
RIP: 0010:__sanitizer_cov_trace_switch+0x7c/0x120 kernel/kcov.c:-1
Code: c0 75 13 e9 bc 00 00 00 b9 01 00 00 00 48 85 c0 0f 84 ae 00 00 00 41 57 41 56 41 54 53 48 8b 54 24 20 65 4c 8b 05 84 24 7e 7e <45> 31 c9 eb 08 49 ff c1 4c 39 c8 74 77 4e 8b 54 ce 10 65 44 8b 1d
RSP: 0018:ffffc9000c96d9f0 EFLAGS: 00000206
RAX: 0000000000000003 RBX: ffffffff8ed92e55 RCX: 0000000000000003
RDX: ffffffff813ab106 RSI: ffffffff8cb9e7b0 RDI: 0000000000000002
RBP: ffffc9000c96db38 R08: ffff888027a78000 R09: 0000000000000008
R10: 0000000000000009 R11: 0000000000000002 R12: ffffc9000c96dae8
R13: dffffc0000000000 R14: 0000000000000002 R15: ffffffff8ed92e54
FS: 00007f0912cc76c0(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f7659fa4198 CR3: 0000000064162000 CR4: 00000000003506f0
Call Trace:
<TASK>
unwind_next_frame+0xe46/0x2970 arch/x86/kernel/unwind_orc.c:581
arch_stack_walk+0x144/0x190 arch/x86/kernel/stacktrace.c:25
stack_trace_save+0x9c/0xe0 kernel/stacktrace.c:122
kasan_save_stack mm/kasan/common.c:45 [inline]
kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
kasan_save_free_info+0x2e/0x50 mm/kasan/generic.c:522
____kasan_slab_free+0x126/0x1e0 mm/kasan/common.c:236
kasan_slab_free include/linux/kasan.h:164 [inline]
slab_free_hook mm/slub.c:1811 [inline]
slab_free_freelist_hook+0x130/0x1b0 mm/slub.c:1837
slab_free mm/slub.c:3830 [inline]
kmem_cache_free+0xf8/0x280 mm/slub.c:3852
free_buffer_head+0x56/0x220 fs/buffer.c:3043
try_to_free_buffers+0x255/0x520 fs/buffer.c:2984
shrink_folio_list+0x2024/0x7980 mm/vmscan.c:2075
evict_folios+0xa85/0x2290 mm/vmscan.c:5208
try_to_shrink_lruvec+0x856/0xb80 mm/vmscan.c:5409
lru_gen_shrink_lruvec mm/vmscan.c:5553 [inline]
shrink_lruvec+0x4ce/0x28b0 mm/vmscan.c:6303
shrink_node_memcgs mm/vmscan.c:6523 [inline]
shrink_node+0x920/0x3790 mm/vmscan.c:6558
shrink_zones mm/vmscan.c:6800 [inline]
do_try_to_free_pages+0x65f/0x1aa0 mm/vmscan.c:6862
try_to_free_mem_cgroup_pages+0x2f5/0x790 mm/vmscan.c:7177
try_charge_memcg+0x61c/0x1810 mm/memcontrol.c:2694
obj_cgroup_charge_pages mm/memcontrol.c:3109 [inline]
obj_cgroup_charge+0x358/0x620 mm/memcontrol.c:3400
memcg_slab_pre_alloc_hook mm/slab.h:508 [inline]
slab_pre_alloc_hook+0x2eb/0x310 mm/slab.h:719
slab_alloc_node mm/slub.c:3477 [inline]
slab_alloc mm/slub.c:3503 [inline]
__kmem_cache_alloc_lru mm/slub.c:3510 [inline]
kmem_cache_alloc+0x5a/0x2e0 mm/slub.c:3519
kmem_cache_zalloc include/linux/slab.h:711 [inline]
alloc_buffer_head+0x2d/0x280 fs/buffer.c:3027
folio_alloc_buffers+0x39b/0x990 fs/buffer.c:935
folio_create_empty_buffers+0x3a/0x730 fs/buffer.c:1648
block_read_full_folio+0x213/0xf40 fs/buffer.c:2386
filemap_read_folio+0x167/0x760 mm/filemap.c:2420
do_read_cache_folio+0x470/0x7e0 mm/filemap.c:3789
do_read_cache_page+0x32/0x250 mm/filemap.c:3855
read_mapping_page include/linux/pagemap.h:892 [inline]
dir_get_page fs/sysv/dir.c:64 [inline]
sysv_find_entry+0x196/0x3f0 fs/sysv/dir.c:157
sysv_inode_by_name+0x31/0x140 fs/sysv/dir.c:374
sysv_lookup+0x6a/0xe0 fs/sysv/namei.c:38
lookup_open fs/namei.c:3466 [inline]
open_last_lookups fs/namei.c:3556 [inline]
path_openat+0x10b8/0x3190 fs/namei.c:3786
do_filp_open+0x1c5/0x3d0 fs/namei.c:3816
do_sys_openat2+0x12c/0x1c0 fs/open.c:1419
do_sys_open fs/open.c:1434 [inline]
__do_sys_openat fs/open.c:1450 [inline]
__se_sys_openat fs/open.c:1445 [inline]
__x64_sys_openat+0x139/0x160 fs/open.c:1445
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f0911d8eba9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f0912cc7038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f0911fd5fa0 RCX: 00007f0911d8eba9
RDX: 0000000000000040 RSI: 0000200000000100 RDI: ffffffffffffff9c
RBP: 00007f0911e11e19 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000000001ff R11: 0000000000000246 R12: 0000000000000000
R13: 00007f0911fd6038 R14: 00007f0911fd5fa0 R15: 00007fff772734d8
</TASK>
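
The execveat task (pid 8571) above is blocked in down_read() of the parent directory's i_rwsem at fs/namei.c:3555, while the sibling openat task (pid 8566) is further along the same code path: it is performing the lookup under the directory lock, and that lookup has run into memcg direct reclaim. A simplified sketch of the relevant pattern in open_last_lookups() (based on v6.6 fs/namei.c; abridged, not verbatim kernel source):

	/* Both stacks reach here via path_openat() -> open_last_lookups(). */
	if (open_flag & O_CREAT)
		inode_lock(dir->d_inode);	/* exclusive; the openat above has RDX=0x40 (O_CREAT) */
	else
		inode_lock_shared(dir->d_inode);	/* shared; where the execveat task (pid 8571) blocks */

	/*
	 * The lookup runs with the directory lock held.  In this report it
	 * reads a sysv directory page, allocates buffer heads, and the memcg
	 * charge pushes it into direct reclaim (see the pid 8566 backtrace),
	 * so the lock stays held for the whole reclaim pass.
	 */
	dentry = lookup_open(nd, file, op, got_write);

	if (open_flag & O_CREAT)
		inode_unlock(dir->d_inode);
	else
		inode_unlock_shared(dir->d_inode);

If the opener holds the lock exclusively (O_CREAT), a concurrent shared acquisition such as the execveat opener cannot proceed until the lookup, and any reclaim it triggered, completes, which is consistent with the 143-second hung-task report above.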


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite the report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup

syzbot
Sep 15, 2025, 2:35:33 PM
to syzkaller...@googlegroups.com
syzbot has found a reproducer for the following issue on:

HEAD commit: 60a9e718726f Linux 6.6.106
git tree: linux-6.6.y
console output: https://syzkaller.appspot.com/x/log.txt?x=10337b62580000
kernel config: https://syzkaller.appspot.com/x/.config?x=12606d4b8832c7e4
dashboard link: https://syzkaller.appspot.com/bug?extid=c4d0a5e21a35f5f093c5
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=152b947c580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=14337b62580000
mounted in repro: https://storage.googleapis.com/syzbot-assets/db3c14660283/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+c4d0a5...@syzkaller.appspotmail.com

INFO: task syz.3.20:5950 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.20 state:D stack:28072 pid:5950 ppid:5907 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6832
rwsem_down_read_slowpath+0x4f8/0x840 kernel/locking/rwsem.c:1086
__down_read_common kernel/locking/rwsem.c:1250 [inline]
__down_read kernel/locking/rwsem.c:1263 [inline]
down_read+0x98/0x2e0 kernel/locking/rwsem.c:1522
inode_lock_shared include/linux/fs.h:814 [inline]
open_last_lookups fs/namei.c:3555 [inline]
path_openat+0x7b7/0x3190 fs/namei.c:3786
do_filp_open+0x1c5/0x3d0 fs/namei.c:3816
do_open_execat+0x133/0x3d0 fs/exec.c:924
bprm_execve+0x567/0x16f0 fs/exec.c:1867
do_execveat_common+0x51b/0x6c0 fs/exec.c:1998
do_execveat fs/exec.c:2083 [inline]
__do_sys_execveat fs/exec.c:2157 [inline]
__se_sys_execveat fs/exec.c:2151 [inline]
__x64_sys_execveat+0xc4/0xe0 fs/exec.c:2151
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f7f9cd8eba9
RSP: 002b:00007f7f9dc76038 EFLAGS: 00000246 ORIG_RAX: 0000000000000142
RAX: ffffffffffffffda RBX: 00007f7f9cfd6090 RCX: 00007f7f9cd8eba9
RDX: 0000000000000000 RSI: 0000200000000140 RDI: ffffffffffffff9c
RBP: 00007f7f9ce11e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f7f9cfd6128 R14: 00007f7f9cfd6090 R15: 00007fffbbf2e828
</TASK>
INFO: task syz.1.18:6011 blocked for more than 146 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.18 state:D stack:26512 pid:6011 ppid:5905 flags:0x00004004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5380 [inline]
__schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
schedule+0xbd/0x170 kernel/sched/core.c:6773
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6832
rwsem_down_read_slowpath+0x4f8/0x840 kernel/locking/rwsem.c:1086
__down_read_common kernel/locking/rwsem.c:1250 [inline]
__down_read kernel/locking/rwsem.c:1263 [inline]
down_read+0x98/0x2e0 kernel/locking/rwsem.c:1522
inode_lock_shared include/linux/fs.h:814 [inline]
open_last_lookups fs/namei.c:3555 [inline]
path_openat+0x7b7/0x3190 fs/namei.c:3786
do_filp_open+0x1c5/0x3d0 fs/namei.c:3816
do_open_execat+0x133/0x3d0 fs/exec.c:924
bprm_execve+0x567/0x16f0 fs/exec.c:1867
do_execveat_common+0x51b/0x6c0 fs/exec.c:1998
do_execveat fs/exec.c:2083 [inline]
__do_sys_execveat fs/exec.c:2157 [inline]
__se_sys_execveat fs/exec.c:2151 [inline]
__x64_sys_execveat+0xc4/0xe0 fs/exec.c:2151
do_syscall_x64 arch/x86/entry/common.c:51 [inline]
do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f49ce58eba9
RSP: 002b:00007f49cf4ad038 EFLAGS: 00000246 ORIG_RAX: 0000000000000142
RAX: ffffffffffffffda RBX: 00007f49ce7d6090 RCX: 00007f49ce58eba9
RDX: 0000000000000000 RSI: 0000200000000140 RDI: ffffffffffffff9c
RBP: 00007f49ce611e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f49ce7d6128 R14: 00007f49ce7d6090 R15: 00007ffd6fa51728
</TASK>

Showing all locks held in the system:
2 locks held by kworker/u4:1/12:
2 locks held by kworker/1:1/28:
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90000a4fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90000a4fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
1 lock held by khungtaskd/29:
#0: ffffffff8cd2fe20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
#0: ffffffff8cd2fe20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
#0: ffffffff8cd2fe20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
1 lock held by kswapd0/86:
3 locks held by kworker/u4:6/1112:
2 locks held by kworker/u4:7/2928:
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffff8880b8e289c0 (psi_seq){-.-.}-{0:0}, at: psi_sched_switch kernel/sched/stats.h:189 [inline]
#1: ffff8880b8e289c0 (psi_seq){-.-.}-{0:0}, at: __schedule+0x20ee/0x44d0 kernel/sched/core.c:6694
1 lock held by udevd/5157:
2 locks held by getty/5545:
#0: ffff88814cd6a0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000326e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
3 locks held by kworker/u5:4/5906:
#0: ffff88802fcb9538 ((wq_completion)hci8){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#0: ffff88802fcb9538 ((wq_completion)hci8){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#1: ffffc90003387d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
#1: ffffc90003387d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
#2: ffff88807719ce70 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1d4/0x390 net/bluetooth/hci_sync.c:326
1 lock held by syz.3.20/5949:
2 locks held by syz.3.20/5950:
#0: ffff888023f8b488 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: prepare_bprm_creds fs/exec.c:1499 [inline]
#0: ffff888023f8b488 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0xca/0x16f0 fs/exec.c:1854
#1: ffff88805a154198 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:814 [inline]
#1: ffff88805a154198 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3555 [inline]
#1: ffff88805a154198 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x7b7/0x3190 fs/namei.c:3786
1 lock held by syz.1.18/6010:
2 locks held by syz.1.18/6011:
#0: ffff888027ef5648 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: prepare_bprm_creds fs/exec.c:1499 [inline]
#0: ffff888027ef5648 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0xca/0x16f0 fs/exec.c:1854
#1: ffff88805a1546e0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:814 [inline]
#1: ffff88805a1546e0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3555 [inline]
#1: ffff88805a1546e0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x7b7/0x3190 fs/namei.c:3786
1 lock held by syz.2.19/6013:
2 locks held by syz.2.19/6014:
#0: ffff888027ef4f88 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: prepare_bprm_creds fs/exec.c:1499 [inline]
#0: ffff888027ef4f88 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0xca/0x16f0 fs/exec.c:1854
#1: ffff88804f618198 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:814 [inline]
#1: ffff88804f618198 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3555 [inline]
#1: ffff88804f618198 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x7b7/0x3190 fs/namei.c:3786
2 locks held by syz.0.17/6016:
2 locks held by syz.0.17/6017:
#0: ffff8880270ac8c8 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: prepare_bprm_creds fs/exec.c:1499 [inline]
#0: ffff8880270ac8c8 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0xca/0x16f0 fs/exec.c:1854
#1: ffff88805a154c28 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:814 [inline]
#1: ffff88805a154c28 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3555 [inline]
#1: ffff88805a154c28 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x7b7/0x3190 fs/namei.c:3786
1 lock held by syz.4.21/6047:
2 locks held by syz.4.21/6048:
#0: ffff888027ef1988 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: prepare_bprm_creds fs/exec.c:1499 [inline]
#0: ffff888027ef1988 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0xca/0x16f0 fs/exec.c:1854
#1: ffff88804f6186e0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:814 [inline]
#1: ffff88804f6186e0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3555 [inline]
#1: ffff88804f6186e0 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x7b7/0x3190 fs/namei.c:3786
2 locks held by syz.5.22/6141:
2 locks held by syz.5.22/6142:
#0: ffff88802a875d08 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: prepare_bprm_creds fs/exec.c:1499 [inline]
#0: ffff88802a875d08 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0xca/0x16f0 fs/exec.c:1854
#1: ffff88805a155170 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:814 [inline]
#1: ffff88805a155170 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3555 [inline]
#1: ffff88805a155170 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x7b7/0x3190 fs/namei.c:3786
9 locks held by syz.7.24/6144:
2 locks held by syz.7.24/6145:
#0: ffff88802ab18c08 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: prepare_bprm_creds fs/exec.c:1499 [inline]
#0: ffff88802ab18c08 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0xca/0x16f0 fs/exec.c:1854
#1: ffff88804f618c28 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:814 [inline]
#1: ffff88804f618c28 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3555 [inline]
#1: ffff88804f618c28 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x7b7/0x3190 fs/namei.c:3786
2 locks held by syz.6.23/6149:
2 locks held by syz.6.23/6155:
#0: ffff88802ab18548 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: prepare_bprm_creds fs/exec.c:1499 [inline]
#0: ffff88802ab18548 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0xca/0x16f0 fs/exec.c:1854
#1: ffff88805a1556b8 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:814 [inline]
#1: ffff88805a1556b8 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3555 [inline]
#1: ffff88805a1556b8 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x7b7/0x3190 fs/namei.c:3786
2 locks held by syz.8.25/6168:
2 locks held by syz.8.25/6169:
#0: ffff8880298592c8 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: prepare_bprm_creds fs/exec.c:1499 [inline]
#0: ffff8880298592c8 (&sig->cred_guard_mutex){+.+.}-{3:3}, at: bprm_execve+0xca/0x16f0 fs/exec.c:1854
#1: ffff88805a155c00 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:814 [inline]
#1: ffff88805a155c00 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: open_last_lookups fs/namei.c:3555 [inline]
#1: ffff88805a155c00 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: path_openat+0x7b7/0x3190 fs/namei.c:3786
7 locks held by syz-executor/6181:
3 locks held by syz-executor/6186:
2 locks held by syz.9.26/6257:
2 locks held by syz.9.26/6259:


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.