possible deadlock in fifo_open


syzbot

Apr 10, 2019, 8:00:19 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 666c420f FROMLIST: ANDROID: binder: Add BINDER_GET_NODE_IN..
git tree: android-4.14
console output: https://syzkaller.appspot.com/x/log.txt?x=129ae24e400000
kernel config: https://syzkaller.appspot.com/x/.config?x=89d929f317ea847c
dashboard link: https://syzkaller.appspot.com/bug?extid=2539f886ed2884843fa6
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=11c15859400000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=13500b9e400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+2539f8...@syzkaller.appspotmail.com

urandom_read: 1 callbacks suppressed
random: sshd: uninitialized urandom read (32 bytes read)
audit: type=1400 audit(1537724554.660:7): avc: denied { map } for
pid=1783 comm="syz-executor051" path="/root/syz-executor051166624"
dev="sda1" ino=16481 scontext=unconfined_u:system_r:insmod_t:s0-s0:c0.c1023
tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=1

======================================================
WARNING: possible circular locking dependency detected
4.14.71+ #8 Not tainted
------------------------------------------------------
syz-executor051/1783 is trying to acquire lock:
(&pipe->mutex/1){+.+.}, at: [<ffffffff84975776>] __pipe_lock fs/pipe.c:88 [inline]
(&pipe->mutex/1){+.+.}, at: [<ffffffff84975776>] fifo_open+0x156/0x9d0 fs/pipe.c:921

but task is already holding lock:
(&sig->cred_guard_mutex){+.+.}, at: [<ffffffff8496fbbe>] prepare_bprm_creds+0x4e/0x110 fs/exec.c:1389

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&sig->cred_guard_mutex){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
lock_trace+0x3f/0xc0 fs/proc/base.c:408
proc_pid_personality+0x17/0xc0 fs/proc/base.c:2905
proc_single_show+0xf1/0x160 fs/proc/base.c:748
seq_read+0x4e0/0x11d0 fs/seq_file.c:237
__vfs_read+0xf4/0x5b0 fs/read_write.c:411
vfs_read+0x11e/0x330 fs/read_write.c:447
SYSC_read fs/read_write.c:577 [inline]
SyS_read+0xc2/0x1a0 fs/read_write.c:570
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #1 (&p->lock){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
seq_read+0xd4/0x11d0 fs/seq_file.c:165
proc_reg_read+0xef/0x170 fs/proc/inode.c:217
do_loop_readv_writev fs/read_write.c:698 [inline]
do_iter_read+0x3cc/0x580 fs/read_write.c:922
vfs_readv+0xe6/0x150 fs/read_write.c:984
kernel_readv fs/splice.c:361 [inline]
default_file_splice_read+0x495/0x860 fs/splice.c:416
do_splice_to+0x102/0x150 fs/splice.c:880
do_splice fs/splice.c:1173 [inline]
SYSC_splice fs/splice.c:1402 [inline]
SyS_splice+0xf4d/0x12a0 fs/splice.c:1382
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #0 (&pipe->mutex/1){+.+.}:
lock_acquire+0x10f/0x380 kernel/locking/lockdep.c:3991
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
__pipe_lock fs/pipe.c:88 [inline]
fifo_open+0x156/0x9d0 fs/pipe.c:921
do_dentry_open+0x426/0xda0 fs/open.c:764
vfs_open+0x11c/0x210 fs/open.c:878
do_last fs/namei.c:3408 [inline]
path_openat+0x4eb/0x23a0 fs/namei.c:3550
do_filp_open+0x197/0x270 fs/namei.c:3584
do_open_execat+0x10d/0x5b0 fs/exec.c:849
do_execveat_common.isra.14+0x6cb/0x1d60 fs/exec.c:1740
do_execve fs/exec.c:1847 [inline]
SYSC_execve fs/exec.c:1928 [inline]
SyS_execve+0x34/0x40 fs/exec.c:1923
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

other info that might help us debug this:

Chain exists of:
&pipe->mutex/1 --> &p->lock --> &sig->cred_guard_mutex

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sig->cred_guard_mutex);
                               lock(&p->lock);
                               lock(&sig->cred_guard_mutex);
  lock(&pipe->mutex/1);

*** DEADLOCK ***

1 lock held by syz-executor051/1783:
#0: (&sig->cred_guard_mutex){+.+.}, at: [<ffffffff8496fbbe>] prepare_bprm_creds+0x4e/0x110 fs/exec.c:1389

stack backtrace:
CPU: 0 PID: 1783 Comm: syz-executor051 Not tainted 4.14.71+ #8
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0xb9/0x11b lib/dump_stack.c:53
print_circular_bug.isra.18.cold.43+0x2d3/0x40c
kernel/locking/lockdep.c:1258
check_prev_add kernel/locking/lockdep.c:1901 [inline]
check_prevs_add kernel/locking/lockdep.c:2018 [inline]
validate_chain kernel/locking/lockdep.c:2460 [inline]
__lock_acquire+0x2ff9/0x4320 kernel/locking/lockdep.c:3487
lock_acquire+0x10f/0x380 kernel/locking/lockdep.c:3991
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
__pipe_lock fs/pipe.c:88 [inline]
fifo_open+0x156/0x9d0 fs/pipe.c:921
do_dentry_open+0x426/0xda0 fs/open.c:764
vfs_open+0x11c/0x210 fs/open.c:878
do_last fs/namei.c:3408 [inline]
path_openat+0x4eb/0x23a0 fs/namei.c:3550
do_filp_open+0x197/0x270 fs/namei.c:3584
do_open_execat+0x10d/0x5b0 fs/exec.c:849
do_execveat_common.isra.14+0x6cb/0x1d60 fs/exec.c:1740
do_execve fs/exec.c:1847 [inline]
SYSC_execve fs/exec.c:1928 [inline]
SyS_execve+0x34/0x40 fs/exec.c:1923
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x4401a9
RSP: 002b:00007ffee9331d68 EFLAGS: 00000217 ORIG_RAX: 000000000000003b
RAX: ffffffffffffffda RBX: 0030656c69662f2e RCX: 00000000004401a9
RDX: 0000000020000800 RSI: 0000000020000840 RDI: 00000000200003c0
RBP: 00000000006ca018 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000217 R12: 0000000000401a90
R13: 0000000000401b20 R14: 0000000000000000 R15: 0000000000000000
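
In short, the chain above closes a three-lock cycle. Reading
/proc/<pid>/personality takes the seq_file's p->lock and then
sig->cred_guard_mutex (via lock_trace); splicing from a /proc seq_file
into a pipe takes pipe->mutex and then p->lock; and execve() of a FIFO
takes cred_guard_mutex (prepare_bprm_creds) and then pipe->mutex
(fifo_open). The minimal C sketch below shows the three user-space paths
involved. It is illustrative only, not the linked reproducer: the FIFO
path and the process layout are assumptions, and the deadlock itself
needs an unlucky interleaving across tasks.

/* Illustrative sketch of the three call paths in the lockdep chain
 * above -- not the syzbot reproducer (see the C repro link). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	char buf[64];
	int pfd[2], proc, fd;

	mkfifo("./fifo0", 0666);	/* hypothetical path */

	if (fork() == 0) {
		/* -> #0: execve() takes sig->cred_guard_mutex in
		 * prepare_bprm_creds(), then do_open_execat() reaches
		 * fifo_open(), which takes pipe->mutex. */
		execl("./fifo0", "./fifo0", (char *)NULL);
		_exit(0);
	}

	if (fork() == 0) {
		/* -> #1: splicing from a /proc seq_file into a pipe holds
		 * pipe->mutex, then seq_read() takes the seq_file's
		 * p->lock. */
		pipe(pfd);
		proc = open("/proc/self/status", O_RDONLY);
		splice(proc, NULL, pfd[1], NULL, 4096, 0);
		_exit(0);
	}

	/* -> #2: reading /proc/<pid>/personality holds p->lock inside
	 * seq_read(), then lock_trace() takes cred_guard_mutex (the read
	 * may fail without ptrace access; the lock ordering is the point). */
	fd = open("/proc/self/personality", O_RDONLY);
	if (fd >= 0)
		read(fd, buf, sizeof(buf));
	return 0;
}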


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches

syzbot

Apr 11, 2019, 8:00:52 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 62872f95 Merge 4.4.174 into android-4.4
git tree: android-4.4
console output: https://syzkaller.appspot.com/x/log.txt?x=12f33243200000
kernel config: https://syzkaller.appspot.com/x/.config?x=47bc4dd423780c4a
dashboard link: https://syzkaller.appspot.com/bug?extid=bb7d9c372e85f8619e2f
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=16a0f1df200000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=12966407200000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+bb7d9c...@syzkaller.appspotmail.com

======================================================
[ INFO: possible circular locking dependency detected ]
4.4.174+ #4 Not tainted
-------------------------------------------------------
syz-executor057/2085 is trying to acquire lock:
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff814b28fd>] __pipe_lock fs/pipe.c:86 [inline]
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff814b28fd>] fifo_open+0x15d/0xa00 fs/pipe.c:896

but task is already holding lock:
(&sig->cred_guard_mutex){+.+.+.}, at: [<ffffffff814acb45>] prepare_bprm_creds+0x55/0x120 fs/exec.c:1225

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&sig->cred_guard_mutex){+.+.+.}:
[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8270e5a2>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270e5a2>] mutex_lock_interruptible_nested+0xd2/0xce0
kernel/locking/mutex.c:650
[<ffffffff815e7f78>] proc_pid_attr_write+0x1a8/0x2a0
fs/proc/base.c:2524
[<ffffffff81496916>] __vfs_write+0x116/0x3d0 fs/read_write.c:491
[<ffffffff81496ce2>] __kernel_write+0x112/0x370 fs/read_write.c:513
[<ffffffff81532e6d>] write_pipe_buf+0x15d/0x1f0 fs/splice.c:1074
[<ffffffff81533b6e>] splice_from_pipe_feed fs/splice.c:776 [inline]
[<ffffffff81533b6e>] __splice_from_pipe+0x37e/0x7a0 fs/splice.c:901
[<ffffffff81536be8>] splice_from_pipe+0x108/0x170 fs/splice.c:936
[<ffffffff81536cdc>] default_file_splice_write+0x3c/0x80
fs/splice.c:1086
[<ffffffff81537d31>] do_splice_from fs/splice.c:1128 [inline]
[<ffffffff81537d31>] do_splice fs/splice.c:1404 [inline]
[<ffffffff81537d31>] SYSC_splice fs/splice.c:1707 [inline]
[<ffffffff81537d31>] SyS_splice+0xd71/0x13a0 fs/splice.c:1690
[<ffffffff82718ba1>] entry_SYSCALL_64_fastpath+0x1e/0x9a

-> #0 (&pipe->mutex/1){+.+.+.}:
[<ffffffff81202d86>] check_prev_add kernel/locking/lockdep.c:1853
[inline]
[<ffffffff81202d86>] check_prevs_add kernel/locking/lockdep.c:1958
[inline]
[<ffffffff81202d86>] validate_chain kernel/locking/lockdep.c:2144
[inline]
[<ffffffff81202d86>] __lock_acquire+0x37d6/0x4f50
kernel/locking/lockdep.c:3213
[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8270c191>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270c191>] mutex_lock_nested+0xc1/0xb80
kernel/locking/mutex.c:621
[<ffffffff814b28fd>] __pipe_lock fs/pipe.c:86 [inline]
[<ffffffff814b28fd>] fifo_open+0x15d/0xa00 fs/pipe.c:896
[<ffffffff8149154f>] do_dentry_open+0x38f/0xbd0 fs/open.c:749
[<ffffffff81494d3b>] vfs_open+0x10b/0x210 fs/open.c:862
[<ffffffff814c5ddf>] do_last fs/namei.c:3269 [inline]
[<ffffffff814c5ddf>] path_openat+0x136f/0x4470 fs/namei.c:3406
[<ffffffff814ccab1>] do_filp_open+0x1a1/0x270 fs/namei.c:3440
[<ffffffff814a7c8c>] do_open_execat+0x10c/0x6e0 fs/exec.c:805
[<ffffffff814ad306>] do_execveat_common.isra.0+0x6f6/0x1e90
fs/exec.c:1577
[<ffffffff814af422>] do_execve fs/exec.c:1683 [inline]
[<ffffffff814af422>] SYSC_execve fs/exec.c:1764 [inline]
[<ffffffff814af422>] SyS_execve+0x42/0x50 fs/exec.c:1759
[<ffffffff82718ef5>] return_from_execve+0x0/0x23

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sig->cred_guard_mutex);
                               lock(&pipe->mutex/1);
                               lock(&sig->cred_guard_mutex);
  lock(&pipe->mutex/1);

*** DEADLOCK ***

1 lock held by syz-executor057/2085:
#0: (&sig->cred_guard_mutex){+.+.+.}, at: [<ffffffff814acb45>] prepare_bprm_creds+0x55/0x120 fs/exec.c:1225

stack backtrace:
CPU: 0 PID: 2085 Comm: syz-executor057 Not tainted 4.4.174+ #4
0000000000000000 f4ad12c1bed6f2ea ffff8800b663f530 ffffffff81aad1a1
ffffffff84057a80 ffff8800b7178000 ffffffff83abd460 ffffffff83ab6500
ffffffff83abd460 ffff8800b663f580 ffffffff813abcda ffff8800b663f660
Call Trace:
[<ffffffff81aad1a1>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81aad1a1>] dump_stack+0xc1/0x120 lib/dump_stack.c:51
[<ffffffff813abcda>] print_circular_bug.cold+0x2f7/0x44e
kernel/locking/lockdep.c:1226
[<ffffffff81202d86>] check_prev_add kernel/locking/lockdep.c:1853 [inline]
[<ffffffff81202d86>] check_prevs_add kernel/locking/lockdep.c:1958 [inline]
[<ffffffff81202d86>] validate_chain kernel/locking/lockdep.c:2144 [inline]
[<ffffffff81202d86>] __lock_acquire+0x37d6/0x4f50
kernel/locking/lockdep.c:3213
[<ffffffff81205f6e>] lock_acquire+0x15e/0x450 kernel/locking/lockdep.c:3592
[<ffffffff8270c191>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270c191>] mutex_lock_nested+0xc1/0xb80
kernel/locking/mutex.c:621
[<ffffffff814b28fd>] __pipe_lock fs/pipe.c:86 [inline]
[<ffffffff814b28fd>] fifo_open+0x15d/0xa00 fs/pipe.c:896
[<ffffffff8149154f>] do_dentry_open+0x38f/0xbd0 fs/open.c:749
[<ffffffff81494d3b>] vfs_open+0x10b/0x210 fs/open.c:862
[<ffffffff814c5ddf>] do_last fs/namei.c:3269 [inline]
[<ffffffff814c5ddf>] path_openat+0x136f/0x4470 fs/namei.c:3406
[<ffffffff814ccab1>] do_filp_open+0x1a1/0x270 fs/namei.c:3440
[<ffffffff814a7c8c>] do_open_execat+0x10c/0x6e0 fs/exec.c:805
[<ffffffff814ad306>] do_execveat_common.isra.0+0x6f6/0x1e90 fs/exec.c:1577
[<ffffffff814af422>] do_execve fs/exec.c:1683 [inline]
[<ffffffff814af422>] SYSC_execve fs/exec.c:1764 [inline]
[<ffffffff814af422>] SyS_execve+0x42/0x50 fs/exec.c:1759
[<ffffffff82718ef5>] stub_execve+0x5/0x5 arch/x86/entry/entry_64.S:440
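
The 4.4 report closes the same cycle with only two locks: splicing from
a pipe into /proc/self/attr/current holds pipe->mutex and then takes
cred_guard_mutex inside proc_pid_attr_write(), while execve() of a FIFO
acquires the two locks in the opposite order. A minimal sketch of the
two paths, again illustrative rather than the linked reproducer (the
FIFO path is an assumption):

/* Illustrative sketch of the two call paths in the 4.4 lockdep
 * chain -- not the syzbot reproducer. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	int pfd[2], attr;

	mkfifo("./fifo0", 0666);	/* hypothetical path */

	if (fork() == 0) {
		/* splice() pipe -> /proc/self/attr/current: write_pipe_buf()
		 * runs with pipe->mutex held, and proc_pid_attr_write()
		 * then takes cred_guard_mutex. */
		pipe(pfd);
		write(pfd[1], "x", 1);
		attr = open("/proc/self/attr/current", O_WRONLY);
		splice(pfd[0], NULL, attr, NULL, 1, 0);
		_exit(0);
	}

	/* execve() of the FIFO: cred_guard_mutex first
	 * (prepare_bprm_creds), then pipe->mutex in fifo_open(). */
	execl("./fifo0", "./fifo0", (char *)NULL);
	return 0;
}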