possible deadlock in proc_pid_attr_write (2)


syzbot

Apr 23, 2021, 5:25:20 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following issue on:

HEAD commit: cf256fbc Linux 4.14.231
git tree: linux-4.14.y
console output: https://syzkaller.appspot.com/x/log.txt?x=17ed7e05d00000
kernel config: https://syzkaller.appspot.com/x/.config?x=403e68efdb1dcca6
dashboard link: https://syzkaller.appspot.com/bug?extid=bf5c5ea4531aebc9adf0

Unfortunately, I don't have any reproducer for this issue yet.

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+bf5c5e...@syzkaller.appspotmail.com

x_tables: ip6_tables: icmp6 match: only valid for protocol 58
======================================================
WARNING: possible circular locking dependency detected
4.14.231-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.2/19140 is trying to acquire lock:
(&sig->cred_guard_mutex){+.+.}, at: [<ffffffff81a04e92>] proc_pid_attr_write+0x152/0x280 fs/proc/base.c:2584

but task is already holding lock:
(&pipe->mutex/1){+.+.}, at: [<ffffffff81886798>] pipe_lock_nested fs/pipe.c:67 [inline]
(&pipe->mutex/1){+.+.}, at: [<ffffffff81886798>] pipe_lock+0x58/0x70 fs/pipe.c:75

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (&pipe->mutex/1){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
pipe_lock_nested fs/pipe.c:67 [inline]
pipe_lock+0x58/0x70 fs/pipe.c:75
iter_file_splice_write+0x15e/0xa90 fs/splice.c:699
do_splice_from fs/splice.c:851 [inline]
do_splice fs/splice.c:1147 [inline]
SYSC_splice fs/splice.c:1402 [inline]
SyS_splice+0xd59/0x1380 fs/splice.c:1382
do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #2 (sb_writers#3){.+.+}:
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
__sb_start_write+0x64/0x260 fs/super.c:1342
sb_start_write include/linux/fs.h:1549 [inline]
mnt_want_write+0x3a/0xb0 fs/namespace.c:386
ovl_do_remove+0x67/0xb90 fs/overlayfs/dir.c:759
vfs_rmdir.part.0+0x144/0x390 fs/namei.c:3908
vfs_rmdir fs/namei.c:3893 [inline]
do_rmdir+0x334/0x3c0 fs/namei.c:3968
do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #1 (&ovl_i_mutex_dir_key[depth]){++++}:
down_read+0x36/0x80 kernel/locking/rwsem.c:24
inode_lock_shared include/linux/fs.h:729 [inline]
do_last fs/namei.c:3333 [inline]
path_openat+0x149b/0x2970 fs/namei.c:3569
do_filp_open+0x179/0x3c0 fs/namei.c:3603
do_open_execat+0xd3/0x450 fs/exec.c:849
do_execveat_common+0x711/0x1f30 fs/exec.c:1755
do_execve fs/exec.c:1860 [inline]
SYSC_execve fs/exec.c:1941 [inline]
SyS_execve+0x3b/0x50 fs/exec.c:1936
do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #0 (&sig->cred_guard_mutex){+.+.}:
lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
proc_pid_attr_write+0x152/0x280 fs/proc/base.c:2584
__vfs_write+0xe4/0x630 fs/read_write.c:480
__kernel_write+0xf5/0x330 fs/read_write.c:501
write_pipe_buf+0x143/0x1c0 fs/splice.c:797
splice_from_pipe_feed fs/splice.c:502 [inline]
__splice_from_pipe+0x326/0x7a0 fs/splice.c:626
splice_from_pipe fs/splice.c:661 [inline]
default_file_splice_write+0xc5/0x150 fs/splice.c:809
do_splice_from fs/splice.c:851 [inline]
do_splice fs/splice.c:1147 [inline]
SYSC_splice fs/splice.c:1402 [inline]
SyS_splice+0xd59/0x1380 fs/splice.c:1382
do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
entry_SYSCALL_64_after_hwframe+0x46/0xbb

other info that might help us debug this:

Chain exists of:
&sig->cred_guard_mutex --> sb_writers#3 --> &pipe->mutex/1

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pipe->mutex/1);
                               lock(sb_writers#3);
                               lock(&pipe->mutex/1);
  lock(&sig->cred_guard_mutex);

*** DEADLOCK ***

2 locks held by syz-executor.2/19140:
#0: (sb_writers#4){.+.+}, at: [<ffffffff8191e748>] file_start_write include/linux/fs.h:2712 [inline]
#0: (sb_writers#4){.+.+}, at: [<ffffffff8191e748>] do_splice fs/splice.c:1146 [inline]
#0: (sb_writers#4){.+.+}, at: [<ffffffff8191e748>] SYSC_splice fs/splice.c:1402 [inline]
#0: (sb_writers#4){.+.+}, at: [<ffffffff8191e748>] SyS_splice+0xef8/0x1380 fs/splice.c:1382
#1: (&pipe->mutex/1){+.+.}, at: [<ffffffff81886798>] pipe_lock_nested fs/pipe.c:67 [inline]
#1: (&pipe->mutex/1){+.+.}, at: [<ffffffff81886798>] pipe_lock+0x58/0x70 fs/pipe.c:75

stack backtrace:
CPU: 1 PID: 19140 Comm: syz-executor.2 Not tainted 4.14.231-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0x1b2/0x281 lib/dump_stack.c:58
print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1258
check_prev_add kernel/locking/lockdep.c:1905 [inline]
check_prevs_add kernel/locking/lockdep.c:2022 [inline]
validate_chain kernel/locking/lockdep.c:2464 [inline]
__lock_acquire+0x2e0e/0x3f20 kernel/locking/lockdep.c:3491
lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
proc_pid_attr_write+0x152/0x280 fs/proc/base.c:2584
__vfs_write+0xe4/0x630 fs/read_write.c:480
__kernel_write+0xf5/0x330 fs/read_write.c:501
write_pipe_buf+0x143/0x1c0 fs/splice.c:797
splice_from_pipe_feed fs/splice.c:502 [inline]
__splice_from_pipe+0x326/0x7a0 fs/splice.c:626
splice_from_pipe fs/splice.c:661 [inline]
default_file_splice_write+0xc5/0x150 fs/splice.c:809
do_splice_from fs/splice.c:851 [inline]
do_splice fs/splice.c:1147 [inline]
SYSC_splice fs/splice.c:1402 [inline]
SyS_splice+0xd59/0x1380 fs/splice.c:1382
do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
entry_SYSCALL_64_after_hwframe+0x46/0xbb
RIP: 0033:0x4665f9
RSP: 002b:00007f18fca11188 EFLAGS: 00000246 ORIG_RAX: 0000000000000113
RAX: ffffffffffffffda RBX: 000000000056c008 RCX: 00000000004665f9
RDX: 0000000000000007 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 00000000004bfbb9 R08: 000000000004ff9c R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000056c008
R13: 00007ffff48f347f R14: 00007f18fca11300 R15: 0000000000022000


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Aug 21, 2021, 5:25:18 AM
to syzkaller...@googlegroups.com
Auto-closing this bug as obsolete.
No crashes have been observed for a while, there is no reproducer, and there has been no activity.