possible deadlock in do_io_accounting

syzbot
Apr 11, 2019, 4:44:53 AM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: cb28adba FROMLIST: ANDROID: binder: Add BINDER_GET_NODE_IN..
git tree: android-4.4
console output: https://syzkaller.appspot.com/x/log.txt?x=15f5d511400000
kernel config: https://syzkaller.appspot.com/x/.config?x=3dd83bdad246650b
dashboard link: https://syzkaller.appspot.com/bug?extid=f788c5ed82d7038c2d3d
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=177f59fa400000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=128fce1a400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+f788c5...@syzkaller.appspotmail.com


======================================================
[ INFO: possible circular locking dependency detected ]
4.4.157+ #101 Not tainted
-------------------------------------------------------
syz-executor405/2092 is trying to acquire lock:
(&sig->cred_guard_mutex){+.+.+.}, at: [<ffffffff815b52ab>]
do_io_accounting+0x1fb/0x7e0 fs/proc/base.c:2647

but task is already holding lock:
(&p->lock){+.+.+.}, at: [<ffffffff814e05fd>] seq_read+0xdd/0x12b0
fs/seq_file.c:178

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #7 (&p->lock){+.+.+.}:
[<ffffffff811fad5e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8268682b>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8268682b>] mutex_lock_nested+0xbb/0x840
kernel/locking/mutex.c:621
[<ffffffff814e05fd>] seq_read+0xdd/0x12b0 fs/seq_file.c:178
[<ffffffff815b0c6d>] proc_reg_read+0xfd/0x180 fs/proc/inode.c:202
[<ffffffff81473378>] do_loop_readv_writev+0x148/0x1e0
fs/read_write.c:680
[<ffffffff814751b1>] do_readv_writev+0x581/0x6f0 fs/read_write.c:810
[<ffffffff81475398>] vfs_readv+0x78/0xb0 fs/read_write.c:834
[<ffffffff8150cd7b>] kernel_readv fs/splice.c:586 [inline]
[<ffffffff8150cd7b>] default_file_splice_read+0x4fb/0x8d0
fs/splice.c:662
[<ffffffff81508c67>] do_splice_to+0xf7/0x140 fs/splice.c:1154
[<ffffffff81508ef2>] splice_direct_to_actor+0x242/0x830
fs/splice.c:1226
[<ffffffff81509683>] do_splice_direct+0x1a3/0x270 fs/splice.c:1337
[<ffffffff814765d4>] do_sendfile+0x4e4/0xb80 fs/read_write.c:1227
[<ffffffff814785c3>] SYSC_sendfile64 fs/read_write.c:1282 [inline]
[<ffffffff814785c3>] SyS_sendfile64+0xc3/0x150 fs/read_write.c:1274
[<ffffffff82690f61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

-> #6 (sb_writers#4){.+.+.+}:
[<ffffffff811fad5e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8147acae>] percpu_down_read
include/linux/percpu-rwsem.h:26 [inline]
[<ffffffff8147acae>] __sb_start_write+0x1ae/0x310 fs/super.c:1221
[<ffffffff81690ef7>] sb_start_write include/linux/fs.h:1515 [inline]
[<ffffffff81690ef7>] ext4_run_li_request fs/ext4/super.c:2674
[inline]
[<ffffffff81690ef7>] ext4_lazyinit_thread+0x1a7/0x750
fs/ext4/super.c:2773
[<ffffffff8112f458>] kthread+0x268/0x300 kernel/kthread.c:211
[<ffffffff82691345>] ret_from_fork+0x55/0x80
arch/x86/entry/entry_64.S:510

-> #5 (&eli->li_list_mtx){+.+...}:
[<ffffffff811fad5e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8268682b>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8268682b>] mutex_lock_nested+0xbb/0x840
kernel/locking/mutex.c:621
[<ffffffff8169b0e4>] ext4_register_li_request+0x304/0x6c0
fs/ext4/super.c:2961
[<ffffffff8169c808>] ext4_remount+0x1368/0x1bb0 fs/ext4/super.c:4909
[<ffffffff8147dde8>] do_remount_sb2+0x428/0x7d0 fs/super.c:771
[<ffffffff814db4fe>] do_remount fs/namespace.c:2335 [inline]
[<ffffffff814db4fe>] do_mount+0x101e/0x28f0 fs/namespace.c:2848
[<ffffffff814dd841>] SYSC_mount fs/namespace.c:3051 [inline]
[<ffffffff814dd841>] SyS_mount+0x191/0x1c0 fs/namespace.c:3029
[<ffffffff82690f61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

-> #4 (&ext4_li_mtx){+.+.+.}:
[<ffffffff811fad5e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8268682b>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8268682b>] mutex_lock_nested+0xbb/0x840
kernel/locking/mutex.c:621
[<ffffffff8169ae67>] ext4_register_li_request+0x87/0x6c0
fs/ext4/super.c:2934
[<ffffffff8169c808>] ext4_remount+0x1368/0x1bb0 fs/ext4/super.c:4909
[<ffffffff8147dde8>] do_remount_sb2+0x428/0x7d0 fs/super.c:771
[<ffffffff814db4fe>] do_remount fs/namespace.c:2335 [inline]
[<ffffffff814db4fe>] do_mount+0x101e/0x28f0 fs/namespace.c:2848
[<ffffffff814dd841>] SYSC_mount fs/namespace.c:3051 [inline]
[<ffffffff814dd841>] SyS_mount+0x191/0x1c0 fs/namespace.c:3029
[<ffffffff82690f61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

-> #3 (&type->s_umount_key#34){++++++}:
[<ffffffff811fad5e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8268c3d2>] down_read+0x42/0x60 kernel/locking/rwsem.c:22
[<ffffffff8147d501>] iterate_supers+0xe1/0x260 fs/super.c:537
[<ffffffff8195fde3>] selinux_complete_init+0x2f/0x31
security/selinux/hooks.c:6154
[<ffffffff81951b16>] security_load_policy+0x886/0x9b0
security/selinux/ss/services.c:2060
[<ffffffff819280f1>] sel_write_load+0x191/0xfc0
security/selinux/selinuxfs.c:535
[<ffffffff81472c4c>] __vfs_write+0x11c/0x3e0 fs/read_write.c:489
[<ffffffff814748ce>] vfs_write+0x17e/0x4e0 fs/read_write.c:538
[<ffffffff81476f09>] SYSC_write fs/read_write.c:585 [inline]
[<ffffffff81476f09>] SyS_write+0xd9/0x1c0 fs/read_write.c:577
[<ffffffff82690f61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

-> #2 (sel_mutex){+.+.+.}:
[<ffffffff811fad5e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8268682b>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8268682b>] mutex_lock_nested+0xbb/0x840
kernel/locking/mutex.c:621
[<ffffffff81925bd7>] sel_commit_bools_write+0x87/0x250
security/selinux/selinuxfs.c:1142
[<ffffffff81472c4c>] __vfs_write+0x11c/0x3e0 fs/read_write.c:489
[<ffffffff81473000>] __kernel_write+0xf0/0x320 fs/read_write.c:511
[<ffffffff815098ad>] write_pipe_buf+0x15d/0x1f0 fs/splice.c:1074
[<ffffffff8150a744>] splice_from_pipe_feed fs/splice.c:776 [inline]
[<ffffffff8150a744>] __splice_from_pipe+0x364/0x790 fs/splice.c:901
[<ffffffff8150d7e9>] splice_from_pipe+0xf9/0x170 fs/splice.c:936
[<ffffffff8150d8ec>] default_file_splice_write+0x3c/0x80
fs/splice.c:1086
[<ffffffff8150e9b1>] do_splice_from fs/splice.c:1128 [inline]
[<ffffffff8150e9b1>] do_splice fs/splice.c:1404 [inline]
[<ffffffff8150e9b1>] SYSC_splice fs/splice.c:1707 [inline]
[<ffffffff8150e9b1>] SyS_splice+0xde1/0x1430 fs/splice.c:1690
[<ffffffff82690f61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

-> #1 (&pipe->mutex/1){+.+.+.}:
[<ffffffff811fad5e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8268682b>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8268682b>] mutex_lock_nested+0xbb/0x840
kernel/locking/mutex.c:621
[<ffffffff8148e2ec>] __pipe_lock fs/pipe.c:86 [inline]
[<ffffffff8148e2ec>] fifo_open+0x15c/0x9e0 fs/pipe.c:896
[<ffffffff8146da3d>] do_dentry_open+0x38d/0xbd0 fs/open.c:749
[<ffffffff814711aa>] vfs_open+0x12a/0x210 fs/open.c:862
[<ffffffff814a063c>] do_last fs/namei.c:3222 [inline]
[<ffffffff814a063c>] path_openat+0x50c/0x39a0 fs/namei.c:3359
[<ffffffff814a7767>] do_filp_open+0x197/0x270 fs/namei.c:3393
[<ffffffff8148375f>] do_open_execat+0x10f/0x6f0 fs/exec.c:800
[<ffffffff81488d81>] do_execveat_common.isra.15+0x6a1/0x1f00
fs/exec.c:1573
[<ffffffff8148af52>] do_execve fs/exec.c:1679 [inline]
[<ffffffff8148af52>] SYSC_execve fs/exec.c:1760 [inline]
[<ffffffff8148af52>] SyS_execve+0x42/0x50 fs/exec.c:1755
[<ffffffff82691275>] return_from_execve+0x0/0x23

-> #0 (&sig->cred_guard_mutex){+.+.+.}:
[<ffffffff811f740e>] check_prev_add kernel/locking/lockdep.c:1853
[inline]
[<ffffffff811f740e>] check_prevs_add kernel/locking/lockdep.c:1958
[inline]
[<ffffffff811f740e>] validate_chain kernel/locking/lockdep.c:2144
[inline]
[<ffffffff811f740e>] __lock_acquire+0x3b6e/0x5ba0
kernel/locking/lockdep.c:3213
[<ffffffff811fad5e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff826878bc>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff826878bc>] mutex_lock_killable_nested+0xcc/0x980
kernel/locking/mutex.c:641
[<ffffffff815b52ab>] do_io_accounting+0x1fb/0x7e0 fs/proc/base.c:2647
[<ffffffff815b58b2>] proc_tgid_io_accounting+0x22/0x30
fs/proc/base.c:2696
[<ffffffff815b348d>] proc_single_show+0xfd/0x170 fs/proc/base.c:791
[<ffffffff814e09d6>] seq_read+0x4b6/0x12b0 fs/seq_file.c:240
[<ffffffff8147287c>] __vfs_read+0x11c/0x3d0 fs/read_write.c:432
[<ffffffff81474520>] vfs_read+0x130/0x360 fs/read_write.c:454
[<ffffffff81477135>] SYSC_pread64 fs/read_write.c:607 [inline]
[<ffffffff81477135>] SyS_pread64+0x145/0x170 fs/read_write.c:594
[<ffffffff82690f61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

other info that might help us debug this:

Chain exists of:
&sig->cred_guard_mutex --> sb_writers#4 --> &p->lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&p->lock);
                               lock(sb_writers#4);
                               lock(&p->lock);
  lock(&sig->cred_guard_mutex);

*** DEADLOCK ***

1 lock held by syz-executor405/2092:
#0: (&p->lock){+.+.+.}, at: [<ffffffff814e05fd>] seq_read+0xdd/0x12b0
fs/seq_file.c:178

stack backtrace:
CPU: 1 PID: 2092 Comm: syz-executor405 Not tainted 4.4.157+ #101
0000000000000000 412908aa696cbf82 ffff8800b9c576b8 ffffffff81a559fd
ffffffff83ab26a0 ffffffff83aae320 ffffffff83aac9d0 ffff8800b7ecb868
ffff8800b7ecaf80 ffff8800b9c57700 ffffffff813924cf 0000000000000001
Call Trace:
[<ffffffff81a559fd>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81a559fd>] dump_stack+0xc1/0x124 lib/dump_stack.c:51
[<ffffffff813924cf>] print_circular_bug.cold.34+0x2f7/0x432
kernel/locking/lockdep.c:1226
[<ffffffff811f740e>] check_prev_add kernel/locking/lockdep.c:1853 [inline]
[<ffffffff811f740e>] check_prevs_add kernel/locking/lockdep.c:1958 [inline]
[<ffffffff811f740e>] validate_chain kernel/locking/lockdep.c:2144 [inline]
[<ffffffff811f740e>] __lock_acquire+0x3b6e/0x5ba0
kernel/locking/lockdep.c:3213
[<ffffffff811fad5e>] lock_acquire+0x15e/0x450 kernel/locking/lockdep.c:3592
[<ffffffff826878bc>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff826878bc>] mutex_lock_killable_nested+0xcc/0x980
kernel/locking/mutex.c:641
[<ffffffff815b52ab>] do_io_accounting+0x1fb/0x7e0 fs/proc/base.c:2647
[<ffffffff815b58b2>] proc_tgid_io_accounting+0x22/0x30 fs/proc/base.c:2696
[<ffffffff815b348d>] proc_single_show+0xfd/0x170 fs/proc/base.c:791
[<ffffffff814e09d6>] seq_read+0x4b6/0x12b0 fs/seq_file.c:240
[<ffffffff8147287c>] __vfs_read+0x11c/0x3d0 fs/read_write.c:432
[<ffffffff81474520>] vfs_read+0x130/0x360 fs/read_write.c:454
[<ffffffff81477135>] SYSC_pread64 fs/read_write.c:607 [inline]
[<ffffffff81477135>] SyS_pread64+0x145/0x170 fs/read_write.c:594
[<ffffffff82690f61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

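Editor's note: the android-4.4 chain above threads through five subsystems (ext4 lazy init, remount, SELinux policy load, pipes, exec), but its two endpoint edges come from ordinary userspace I/O. Below is a minimal illustrative sketch of those two syscalls. It is NOT the syz reproducer linked above; "/tmp/out" is an arbitrary scratch path chosen for illustration.

/*
 * Illustrative sketch only -- not the syzbot reproducer.
 * Issues the two syscalls whose lock orders bound the chain above.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/sendfile.h>
#include <unistd.h>

int main(void)
{
        char buf[4096];
        char path[64];

        /* The -> #0 edge: seq_read() takes &p->lock, then
         * do_io_accounting() takes the target task's ->cred_guard_mutex. */
        snprintf(path, sizeof(path), "/proc/%d/io", getpid());
        int io_fd = open(path, O_RDONLY);
        if (io_fd >= 0)
                pread(io_fd, buf, sizeof(buf), 0);

        /* The sb_writers -> &p->lock edge: sendfile() holds the output
         * file's sb_writers across do_splice_direct(), and reading the
         * seq_file-backed /proc input via default_file_splice_read()
         * enters seq_read(), which takes &p->lock. */
        int in_fd = open("/proc/self/stat", O_RDONLY);
        int out_fd = open("/tmp/out", O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (in_fd >= 0 && out_fd >= 0)
                sendfile(out_fd, in_fd, NULL, sizeof(buf));
        return 0;
}

Neither call deadlocks by itself; lockdep flags the combination because the ext4 and SELinux edges in the middle of the chain let the two lock orders interleave across CPUs as in the scenario table above.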

---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches

syzbot
Apr 11, 2019, 8:00:42 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 1c57ba4f FROMLIST: ANDROID: binder: Add BINDER_GET_NODE_IN..
git tree: android-4.9
console output: https://syzkaller.appspot.com/x/log.txt?x=1198a281400000
kernel config: https://syzkaller.appspot.com/x/.config?x=ce644b18d115ba72
dashboard link: https://syzkaller.appspot.com/bug?extid=6239eb16338efe02b7eb
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
userspace arch: i386
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15aa424e400000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=130cbbae400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+6239eb...@syzkaller.appspotmail.com


======================================================
[ INFO: possible circular locking dependency detected ]
4.9.128+ #45 Not tainted
-------------------------------------------------------
syz-executor276/4173 is trying to acquire lock:
(&sig->cred_guard_mutex){+.+.+.}, at: [<ffffffff816384db>]
do_io_accounting+0x1fb/0x7e0 fs/proc/base.c:2676

but task is already holding lock:
(&p->lock){+.+.+.}, at: [<ffffffff8155eafd>] seq_read+0xdd/0x12d0
fs/seq_file.c:178

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&p->lock){+.+.+.}:
lock_acquire+0x130/0x3e0 kernel/locking/lockdep.c:3756
__mutex_lock_common kernel/locking/mutex.c:521 [inline]
mutex_lock_nested+0xc0/0x870 kernel/locking/mutex.c:621
seq_read+0xdd/0x12d0 fs/seq_file.c:178
proc_reg_read+0xfd/0x180 fs/proc/inode.c:203
do_loop_readv_writev.part.1+0xd5/0x280 fs/read_write.c:718
do_loop_readv_writev fs/read_write.c:707 [inline]
do_readv_writev+0x56e/0x7b0 fs/read_write.c:873
vfs_readv+0x84/0xc0 fs/read_write.c:897
kernel_readv fs/splice.c:363 [inline]
default_file_splice_read+0x44b/0x7e0 fs/splice.c:435
do_splice_to+0x10c/0x170 fs/splice.c:899
do_splice fs/splice.c:1192 [inline]
SYSC_splice fs/splice.c:1416 [inline]
SyS_splice+0x10d2/0x14d0 fs/splice.c:1399
do_syscall_32_irqs_on arch/x86/entry/common.c:325 [inline]
do_fast_syscall_32+0x2f1/0x860 arch/x86/entry/common.c:387
entry_SYSENTER_compat+0x90/0xa2 arch/x86/entry/entry_64_compat.S:137

-> #1 (&pipe->mutex/1){+.+.+.}:
lock_acquire+0x130/0x3e0 kernel/locking/lockdep.c:3756
__mutex_lock_common kernel/locking/mutex.c:521 [inline]
mutex_lock_nested+0xc0/0x870 kernel/locking/mutex.c:621
__pipe_lock fs/pipe.c:87 [inline]
fifo_open+0x15c/0x9e0 fs/pipe.c:921
do_dentry_open+0x3ef/0xc90 fs/open.c:766
vfs_open+0x11c/0x210 fs/open.c:879
do_last fs/namei.c:3410 [inline]
path_openat+0x542/0x2790 fs/namei.c:3534
do_filp_open+0x197/0x270 fs/namei.c:3568
do_open_execat+0x10f/0x640 fs/exec.c:844
do_execveat_common.isra.15+0x687/0x1f80 fs/exec.c:1723
compat_do_execve fs/exec.c:1856 [inline]
C_SYSC_execve fs/exec.c:1931 [inline]
compat_SyS_execve+0x48/0x60 fs/exec.c:1927
do_syscall_32_irqs_on arch/x86/entry/common.c:325 [inline]
do_fast_syscall_32+0x2f1/0x860 arch/x86/entry/common.c:387
entry_SYSENTER_compat+0x90/0xa2 arch/x86/entry/entry_64_compat.S:137

-> #0 (&sig->cred_guard_mutex){+.+.+.}:
check_prev_add kernel/locking/lockdep.c:1828 [inline]
check_prevs_add kernel/locking/lockdep.c:1938 [inline]
validate_chain kernel/locking/lockdep.c:2265 [inline]
__lock_acquire+0x3189/0x4a10 kernel/locking/lockdep.c:3345
lock_acquire+0x130/0x3e0 kernel/locking/lockdep.c:3756
__mutex_lock_common kernel/locking/mutex.c:521 [inline]
mutex_lock_killable_nested+0xcc/0x960 kernel/locking/mutex.c:641
do_io_accounting+0x1fb/0x7e0 fs/proc/base.c:2676
proc_tgid_io_accounting+0x22/0x30 fs/proc/base.c:2725
proc_single_show+0xfd/0x170 fs/proc/base.c:771
traverse+0x363/0x920 fs/seq_file.c:124
seq_read+0xd1b/0x12d0 fs/seq_file.c:195
__vfs_read+0x115/0x560 fs/read_write.c:449
vfs_read+0x124/0x390 fs/read_write.c:472
SYSC_pread64 fs/read_write.c:626 [inline]
SyS_pread64+0x145/0x170 fs/read_write.c:613
sys32_pread+0x39/0x50 arch/x86/ia32/sys_ia32.c:179
do_syscall_32_irqs_on arch/x86/entry/common.c:325 [inline]
do_fast_syscall_32+0x2f1/0x860 arch/x86/entry/common.c:387
entry_SYSENTER_compat+0x90/0xa2 arch/x86/entry/entry_64_compat.S:137

other info that might help us debug this:

Chain exists of:
&sig->cred_guard_mutex --> &pipe->mutex/1 --> &p->lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&p->lock);
                               lock(&pipe->mutex/1);
                               lock(&p->lock);
  lock(&sig->cred_guard_mutex);

*** DEADLOCK ***

1 lock held by syz-executor276/4173:
#0: (&p->lock){+.+.+.}, at: [<ffffffff8155eafd>] seq_read+0xdd/0x12d0
fs/seq_file.c:178

stack backtrace:
CPU: 1 PID: 4173 Comm: syz-executor276 Not tainted 4.9.128+ #45
ffff8801ccce7518 ffffffff81af2469 ffffffff83aa85f0 ffffffff83aa2c80
ffffffff83aa1330 ffff8801cdb8b850 ffff8801cdb8af80 ffff8801ccce7560
ffffffff813e79ed 0000000000000001 00000000cdb8b830 0000000000000001
Call Trace:
[<ffffffff81af2469>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81af2469>] dump_stack+0xc1/0x128 lib/dump_stack.c:51
[<ffffffff813e79ed>] print_circular_bug.cold.36+0x2f7/0x432
kernel/locking/lockdep.c:1202
[<ffffffff81202779>] check_prev_add kernel/locking/lockdep.c:1828 [inline]
[<ffffffff81202779>] check_prevs_add kernel/locking/lockdep.c:1938 [inline]
[<ffffffff81202779>] validate_chain kernel/locking/lockdep.c:2265 [inline]
[<ffffffff81202779>] __lock_acquire+0x3189/0x4a10
kernel/locking/lockdep.c:3345
[<ffffffff81204b10>] lock_acquire+0x130/0x3e0 kernel/locking/lockdep.c:3756
[<ffffffff827837bc>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff827837bc>] mutex_lock_killable_nested+0xcc/0x960
kernel/locking/mutex.c:641
[<ffffffff816384db>] do_io_accounting+0x1fb/0x7e0 fs/proc/base.c:2676
[<ffffffff81638ae2>] proc_tgid_io_accounting+0x22/0x30 fs/proc/base.c:2725
[<ffffffff8163683d>] proc_single_show+0xfd/0x170 fs/proc/base.c:771
[<ffffffff8155e0a3>] traverse+0x363/0x920 fs/seq_file.c:124
[<ffffffff8155f73b>] seq_read+0xd1b/0x12d0 fs/seq_file.c:195
[<ffffffff814e8535>] __vfs_read+0x115/0x560 fs/read_write.c:449
[<ffffffff814eb1b4>] vfs_read+0x124/0x390 fs/read_write.c:472
[<ffffffff814ef605>] SYSC_pread64 fs/read_write.c:626 [inline]
[<ffffffff814ef605>] SyS_pread64+0x145/0x170 fs/read_write.c:613
[<ffffffff810c4299>] sys32_pread+0x39/0x50 arch/x86/ia32/sys_ia32.c:179
[<ffffffff81005fd1>] do_syscall_32_irqs_on arch/x86/entry/common.c:325
[inline]
[<ffffffff81005fd1>] do_fast_syscall_32+0x2f1/0x860
arch/x86/entry/common.c:387
[<ffffffff8278f460>] entry_SYSENTER_compat+0x90/0xa2
arch/x86/entry/entry_64_compat.S:137

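Editor's note: on android-4.9 (and android-4.14 in the next report) the chain is shorter: &sig->cred_guard_mutex --> &pipe->mutex/1 --> &p->lock, and every edge maps to a single syscall. A minimal illustrative sketch of the three calls follows; again, this is NOT the linked reproducer, and "/tmp/fifo" is an arbitrary path chosen for illustration.

/*
 * Illustrative sketch only -- not the syzbot reproducer.
 * One syscall per edge of the 4.9/4.14 chain.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        char buf[4096];
        char path[64];
        int pipefd[2];

        /* cred_guard_mutex -> &pipe->mutex/1: execve() holds
         * ->cred_guard_mutex across do_open_execat(); opening a FIFO
         * there runs fifo_open(), which takes __pipe_lock().  The exec
         * then fails (a FIFO is not a regular file), but the dependency
         * has been recorded.  Mode 0700 lets the MAY_EXEC permission
         * check pass; the O_RDWR fd keeps a peer on the FIFO so the
         * exec-time open does not block. */
        mkfifo("/tmp/fifo", 0700);
        int fifo = open("/tmp/fifo", O_RDWR);
        if (fork() == 0) {
                execl("/tmp/fifo", "fifo", (char *)NULL);
                _exit(0);
        }
        wait(NULL);

        /* &pipe->mutex/1 -> &p->lock: splice() from a seq_file-backed
         * /proc file into a pipe locks the pipe, then the read side
         * enters seq_read() and takes &p->lock. */
        int in_fd = open("/proc/self/stat", O_RDONLY);
        if (pipe(pipefd) == 0 && in_fd >= 0)
                splice(in_fd, NULL, pipefd[1], NULL, sizeof(buf), 0);

        /* &p->lock -> cred_guard_mutex: reading /proc/<pid>/io enters
         * seq_read() (&p->lock) and then do_io_accounting(), which
         * takes ->cred_guard_mutex. */
        snprintf(path, sizeof(path), "/proc/%d/io", getpid());
        int io_fd = open(path, O_RDONLY);
        if (io_fd >= 0)
                pread(io_fd, buf, sizeof(buf), 0);

        if (fifo >= 0)
                close(fifo);
        return 0;
}
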
syzbot
Apr 11, 2019, 8:01:07 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 666c420f FROMLIST: ANDROID: binder: Add BINDER_GET_NODE_IN..
git tree: android-4.14
console output: https://syzkaller.appspot.com/x/log.txt?x=12b84059400000
kernel config: https://syzkaller.appspot.com/x/.config?x=89d929f317ea847c
dashboard link: https://syzkaller.appspot.com/bug?extid=5fb1a5a226b752b23fdc
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=106a424e400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+5fb1a5...@syzkaller.appspotmail.com


======================================================
WARNING: possible circular locking dependency detected
4.14.71+ #8 Not tainted
------------------------------------------------------
syz-executor0/15734 is trying to acquire lock:
(&sig->cred_guard_mutex){+.+.}, at: [<ffffffffa2e9e337>]
do_io_accounting+0x1d7/0x770 fs/proc/base.c:2717

but task is already holding lock:
(&p->lock){+.+.}, at: [<ffffffffa2dd06c4>] seq_read+0xd4/0x11d0
fs/seq_file.c:165

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&p->lock){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
seq_read+0xd4/0x11d0 fs/seq_file.c:165
proc_reg_read+0xef/0x170 fs/proc/inode.c:217
do_loop_readv_writev fs/read_write.c:698 [inline]
do_iter_read+0x3cc/0x580 fs/read_write.c:922
vfs_readv+0xe6/0x150 fs/read_write.c:984
kernel_readv fs/splice.c:361 [inline]
default_file_splice_read+0x495/0x860 fs/splice.c:416
do_splice_to+0x102/0x150 fs/splice.c:880
do_splice fs/splice.c:1173 [inline]
SYSC_splice fs/splice.c:1402 [inline]
SyS_splice+0xf4d/0x12a0 fs/splice.c:1382
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #1 (&pipe->mutex/1){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
__pipe_lock fs/pipe.c:88 [inline]
fifo_open+0x156/0x9d0 fs/pipe.c:921
do_dentry_open+0x426/0xda0 fs/open.c:764
vfs_open+0x11c/0x210 fs/open.c:878
do_last fs/namei.c:3408 [inline]
path_openat+0x4eb/0x23a0 fs/namei.c:3550
do_filp_open+0x197/0x270 fs/namei.c:3584
do_open_execat+0x10d/0x5b0 fs/exec.c:849
do_execveat_common.isra.14+0x6cb/0x1d60 fs/exec.c:1740
do_execve fs/exec.c:1847 [inline]
SYSC_execve fs/exec.c:1928 [inline]
SyS_execve+0x34/0x40 fs/exec.c:1923
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #0 (&sig->cred_guard_mutex){+.+.}:
lock_acquire+0x10f/0x380 kernel/locking/lockdep.c:3991
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
do_io_accounting+0x1d7/0x770 fs/proc/base.c:2717
proc_single_show+0xf1/0x160 fs/proc/base.c:748
seq_read+0x4e0/0x11d0 fs/seq_file.c:237
__vfs_read+0xf4/0x5b0 fs/read_write.c:411
vfs_read+0x11e/0x330 fs/read_write.c:447
SYSC_pread64 fs/read_write.c:615 [inline]
SyS_pread64+0x136/0x160 fs/read_write.c:602
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

other info that might help us debug this:

Chain exists of:
&sig->cred_guard_mutex --> &pipe->mutex/1 --> &p->lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&p->lock);
                               lock(&pipe->mutex/1);
                               lock(&p->lock);
  lock(&sig->cred_guard_mutex);

*** DEADLOCK ***

1 lock held by syz-executor0/15734:
#0: (&p->lock){+.+.}, at: [<ffffffffa2dd06c4>] seq_read+0xd4/0x11d0
fs/seq_file.c:165

stack backtrace:
CPU: 0 PID: 15734 Comm: syz-executor0 Not tainted 4.14.71+ #8
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0xb9/0x11b lib/dump_stack.c:53
print_circular_bug.isra.18.cold.43+0x2d3/0x40c
kernel/locking/lockdep.c:1258
check_prev_add kernel/locking/lockdep.c:1901 [inline]
check_prevs_add kernel/locking/lockdep.c:2018 [inline]
validate_chain kernel/locking/lockdep.c:2460 [inline]
__lock_acquire+0x2ff9/0x4320 kernel/locking/lockdep.c:3487
lock_acquire+0x10f/0x380 kernel/locking/lockdep.c:3991
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
do_io_accounting+0x1d7/0x770 fs/proc/base.c:2717
proc_single_show+0xf1/0x160 fs/proc/base.c:748
seq_read+0x4e0/0x11d0 fs/seq_file.c:237
__vfs_read+0xf4/0x5b0 fs/read_write.c:411
vfs_read+0x11e/0x330 fs/read_write.c:447
SYSC_pread64 fs/read_write.c:615 [inline]
SyS_pread64+0x136/0x160 fs/read_write.c:602
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x457679
RSP: 002b:00007f827e155c78 EFLAGS: 00000246 ORIG_RAX: 0000000000000011
RAX: ffffffffffffffda RBX: 00007f827e1566d4 RCX: 0000000000457679
RDX: 0000000000000592 RSI: 00000000200000c0 RDI: 0000000000000006
RBP: 000000000072bfa0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 00000000004d4860 R14: 00000000004c30c2 R15: 0000000000000001