possible deadlock in lock_trace


syzbot

Apr 11, 2019, 4:44:52 AM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 85b352c4 Merge remote-tracking branch 'origin/upstream-f2f..
git tree: android-4.4
console output: https://syzkaller.appspot.com/x/log.txt?x=168fd031400000
kernel config: https://syzkaller.appspot.com/x/.config?x=22427be3cc83c9e4
dashboard link: https://syzkaller.appspot.com/bug?extid=47ba7bbad461b2008730
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=101f6b56400000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=159bac3a400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+47ba7b...@syzkaller.appspotmail.com


======================================================
[ INFO: possible circular locking dependency detected ]
4.4.158+ #105 Not tainted
-------------------------------------------------------
syz-executor392/4816 is trying to acquire lock:
(&sig->cred_guard_mutex){+.+.+.}, at: [<ffffffff815e2cb4>]
lock_trace+0x44/0xc0 fs/proc/base.c:448

but task is already holding lock:
(&p->lock){+.+.+.}, at: [<ffffffff815013dd>] seq_read+0xdd/0x12b0
fs/seq_file.c:178

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

[<ffffffff81202efe>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff826fa4fb>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff826fa4fb>] mutex_lock_nested+0xbb/0x8d0
kernel/locking/mutex.c:621
[<ffffffff815013dd>] seq_read+0xdd/0x12b0 fs/seq_file.c:178
[<ffffffff814914d8>] do_loop_readv_writev+0x148/0x1e0
fs/read_write.c:680
[<ffffffff81493311>] do_readv_writev+0x581/0x6f0 fs/read_write.c:810
[<ffffffff814934f8>] vfs_readv+0x78/0xb0 fs/read_write.c:834
[<ffffffff8152f69f>] kernel_readv fs/splice.c:586 [inline]
[<ffffffff8152f69f>] default_file_splice_read+0x50f/0x8f0
fs/splice.c:662
[<ffffffff8152b577>] do_splice_to+0xf7/0x140 fs/splice.c:1154
[<ffffffff8152b802>] splice_direct_to_actor+0x242/0x830
fs/splice.c:1226
[<ffffffff8152bf93>] do_splice_direct+0x1a3/0x270 fs/splice.c:1337
[<ffffffff81494734>] do_sendfile+0x4e4/0xb80 fs/read_write.c:1227
[<ffffffff81496723>] SYSC_sendfile64 fs/read_write.c:1282 [inline]
[<ffffffff81496723>] SyS_sendfile64+0xc3/0x150 fs/read_write.c:1274
[<ffffffff82705a61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

[<ffffffff81202efe>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff81498e0e>] percpu_down_read
include/linux/percpu-rwsem.h:26 [inline]
[<ffffffff81498e0e>] __sb_start_write+0x1ae/0x310 fs/super.c:1221
[<ffffffff816bb0c7>] sb_start_write include/linux/fs.h:1515 [inline]
[<ffffffff816bb0c7>] ext4_run_li_request fs/ext4/super.c:2674
[inline]
[<ffffffff816bb0c7>] ext4_lazyinit_thread+0x1a7/0x750
fs/ext4/super.c:2773
[<ffffffff81134038>] kthread+0x268/0x300 kernel/kthread.c:211
[<ffffffff82705e45>] ret_from_fork+0x55/0x80
arch/x86/entry/entry_64.S:510

[<ffffffff81202efe>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff826fa4fb>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff826fa4fb>] mutex_lock_nested+0xbb/0x8d0
kernel/locking/mutex.c:621
[<ffffffff816c51c4>] ext4_register_li_request+0x304/0x7a0
fs/ext4/super.c:2961
[<ffffffff816c69c8>] ext4_remount+0x1368/0x1bb0 fs/ext4/super.c:4909
[<ffffffff8149c0b8>] do_remount_sb2+0x428/0x7d0 fs/super.c:771
[<ffffffff814fbffe>] do_remount fs/namespace.c:2335 [inline]
[<ffffffff814fbffe>] do_mount+0x101e/0x2a10 fs/namespace.c:2848
[<ffffffff814fe541>] SYSC_mount fs/namespace.c:3051 [inline]
[<ffffffff814fe541>] SyS_mount+0x191/0x1c0 fs/namespace.c:3029
[<ffffffff82705a61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

[<ffffffff81202efe>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff826fa4fb>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff826fa4fb>] mutex_lock_nested+0xbb/0x8d0
kernel/locking/mutex.c:621
[<ffffffff816c4f47>] ext4_register_li_request+0x87/0x7a0
fs/ext4/super.c:2934
[<ffffffff816c69c8>] ext4_remount+0x1368/0x1bb0 fs/ext4/super.c:4909
[<ffffffff8149c0b8>] do_remount_sb2+0x428/0x7d0 fs/super.c:771
[<ffffffff814fbffe>] do_remount fs/namespace.c:2335 [inline]
[<ffffffff814fbffe>] do_mount+0x101e/0x2a10 fs/namespace.c:2848
[<ffffffff814fe541>] SYSC_mount fs/namespace.c:3051 [inline]
[<ffffffff814fe541>] SyS_mount+0x191/0x1c0 fs/namespace.c:3029
[<ffffffff82705a61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

[<ffffffff81202efe>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff82700a72>] down_read+0x42/0x60 kernel/locking/rwsem.c:22
[<ffffffff8149b7d1>] iterate_supers+0xe1/0x260 fs/super.c:537
[<ffffffff8199de54>] selinux_complete_init+0x2f/0x31
security/selinux/hooks.c:6154
[<ffffffff8198faa6>] security_load_policy+0x886/0x9b0
security/selinux/ss/services.c:2060
[<ffffffff819659d1>] sel_write_load+0x191/0xfc0
security/selinux/selinuxfs.c:535
[<ffffffff81490d7c>] __vfs_write+0x11c/0x3e0 fs/read_write.c:489
[<ffffffff81492a2e>] vfs_write+0x17e/0x4e0 fs/read_write.c:538
[<ffffffff81495069>] SYSC_write fs/read_write.c:585 [inline]
[<ffffffff81495069>] SyS_write+0xd9/0x1c0 fs/read_write.c:577
[<ffffffff82705a61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

[<ffffffff81202efe>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff826fa4fb>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff826fa4fb>] mutex_lock_nested+0xbb/0x8d0
kernel/locking/mutex.c:621
[<ffffffff819634b7>] sel_commit_bools_write+0x87/0x250
security/selinux/selinuxfs.c:1142
[<ffffffff81490d7c>] __vfs_write+0x11c/0x3e0 fs/read_write.c:489
[<ffffffff8149114a>] __kernel_write+0x10a/0x350 fs/read_write.c:511
[<ffffffff8152c1bd>] write_pipe_buf+0x15d/0x1f0 fs/splice.c:1074
[<ffffffff8152d054>] splice_from_pipe_feed fs/splice.c:776 [inline]
[<ffffffff8152d054>] __splice_from_pipe+0x364/0x790 fs/splice.c:901
[<ffffffff81530119>] splice_from_pipe+0xf9/0x170 fs/splice.c:936
[<ffffffff8153021c>] default_file_splice_write+0x3c/0x80
fs/splice.c:1086
[<ffffffff815312e1>] do_splice_from fs/splice.c:1128 [inline]
[<ffffffff815312e1>] do_splice fs/splice.c:1404 [inline]
[<ffffffff815312e1>] SYSC_splice fs/splice.c:1707 [inline]
[<ffffffff815312e1>] SyS_splice+0xde1/0x1430 fs/splice.c:1690
[<ffffffff82705a61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

[<ffffffff81202efe>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff826fa4fb>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff826fa4fb>] mutex_lock_nested+0xbb/0x8d0
kernel/locking/mutex.c:621
[<ffffffff814acb6c>] __pipe_lock fs/pipe.c:86 [inline]
[<ffffffff814acb6c>] fifo_open+0x15c/0x9e0 fs/pipe.c:896
[<ffffffff8148bb6d>] do_dentry_open+0x38d/0xbd0 fs/open.c:749
[<ffffffff8148f2da>] vfs_open+0x12a/0x210 fs/open.c:862
[<ffffffff814beebc>] do_last fs/namei.c:3222 [inline]
[<ffffffff814beebc>] path_openat+0x50c/0x39a0 fs/namei.c:3359
[<ffffffff814c5fe7>] do_filp_open+0x197/0x270 fs/namei.c:3393
[<ffffffff814a204f>] do_open_execat+0x10f/0x6f0 fs/exec.c:800
[<ffffffff814a7601>] do_execveat_common.isra.14+0x6a1/0x1f00
fs/exec.c:1573
[<ffffffff814a97d2>] do_execve fs/exec.c:1679 [inline]
[<ffffffff814a97d2>] SYSC_execve fs/exec.c:1760 [inline]
[<ffffffff814a97d2>] SyS_execve+0x42/0x50 fs/exec.c:1755
[<ffffffff82705d75>] return_from_execve+0x0/0x23

[<ffffffff811ff31c>] check_prev_add kernel/locking/lockdep.c:1853
[inline]
[<ffffffff811ff31c>] check_prevs_add kernel/locking/lockdep.c:1958
[inline]
[<ffffffff811ff31c>] validate_chain kernel/locking/lockdep.c:2144
[inline]
[<ffffffff811ff31c>] __lock_acquire+0x3e6c/0x5f10
kernel/locking/lockdep.c:3213
[<ffffffff81202efe>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff826fb6ac>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff826fb6ac>] mutex_lock_killable_nested+0xcc/0xa10
kernel/locking/mutex.c:641
[<ffffffff815e2cb4>] lock_trace+0x44/0xc0 fs/proc/base.c:448
[<ffffffff815e3499>] proc_pid_syscall+0xa9/0x260 fs/proc/base.c:669
[<ffffffff815db15d>] proc_single_show+0xfd/0x170 fs/proc/base.c:791
[<ffffffff815017b6>] seq_read+0x4b6/0x12b0 fs/seq_file.c:240
[<ffffffff814914d8>] do_loop_readv_writev+0x148/0x1e0
fs/read_write.c:680
[<ffffffff81493311>] do_readv_writev+0x581/0x6f0 fs/read_write.c:810
[<ffffffff814934f8>] vfs_readv+0x78/0xb0 fs/read_write.c:834
[<ffffffff8152f69f>] kernel_readv fs/splice.c:586 [inline]
[<ffffffff8152f69f>] default_file_splice_read+0x50f/0x8f0
fs/splice.c:662
[<ffffffff8152b577>] do_splice_to+0xf7/0x140 fs/splice.c:1154
[<ffffffff8152b802>] splice_direct_to_actor+0x242/0x830
fs/splice.c:1226
[<ffffffff8152bf93>] do_splice_direct+0x1a3/0x270 fs/splice.c:1337
[<ffffffff81494734>] do_sendfile+0x4e4/0xb80 fs/read_write.c:1227
[<ffffffff81496723>] SYSC_sendfile64 fs/read_write.c:1282 [inline]
[<ffffffff81496723>] SyS_sendfile64+0xc3/0x150 fs/read_write.c:1274
[<ffffffff82705a61>] entry_SYSCALL_64_fastpath+0x1e/0x9a

other info that might help us debug this:

Chain exists of:
  &sig->cred_guard_mutex --> sb_writers#4 --> &p->lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&p->lock);
                               lock(sb_writers#4);
                               lock(&p->lock);
  lock(&sig->cred_guard_mutex);

*** DEADLOCK ***

2 locks held by syz-executor392/4816:
#0: (sb_writers#4){.+.+.+}, at: [<ffffffff81494aea>] file_start_write
include/linux/fs.h:2541 [inline]
#0: (sb_writers#4){.+.+.+}, at: [<ffffffff81494aea>]
do_sendfile+0x89a/0xb80 fs/read_write.c:1226
#1: (&p->lock){+.+.+.}, at: [<ffffffff815013dd>] seq_read+0xdd/0x12b0
fs/seq_file.c:178

stack backtrace:
CPU: 1 PID: 4816 Comm: syz-executor392 Not tainted 4.4.158+ #105
0000000000000000 883dd2029a9f1e36 ffff8800b33e6d98 ffffffff81a991dd
ffffffff83ab46a0 ffffffff83ab04d0 ffffffff83aaeb80 ffff8800b318b890
ffff8800b318af80 ffff8800b33e6de0 ffffffff813a84da 0000000000000002
Call Trace:
[<ffffffff81a991dd>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81a991dd>] dump_stack+0xc1/0x124 lib/dump_stack.c:51
[<ffffffff813a84da>] print_circular_bug.cold.34+0x2f7/0x432
kernel/locking/lockdep.c:1226
[<ffffffff811ff31c>] check_prev_add kernel/locking/lockdep.c:1853 [inline]
[<ffffffff811ff31c>] check_prevs_add kernel/locking/lockdep.c:1958 [inline]
[<ffffffff811ff31c>] validate_chain kernel/locking/lockdep.c:2144 [inline]
[<ffffffff811ff31c>] __lock_acquire+0x3e6c/0x5f10
kernel/locking/lockdep.c:3213
[<ffffffff81202efe>] lock_acquire+0x15e/0x450 kernel/locking/lockdep.c:3592
[<ffffffff826fb6ac>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff826fb6ac>] mutex_lock_killable_nested+0xcc/0xa10
kernel/locking/mutex.c:641
[<ffffffff815e2cb4>] lock_trace+0x44/0xc0 fs/proc/base.c:448
[<ffffffff815e3499>] proc_pid_syscall+0xa9/0x260 fs/proc/base.c:669
[<ffffffff815db15d>] proc_single_show+0xfd/0x170 fs/proc/base.c:791
[<ffffffff815017b6>] seq_read+0x4b6/0x12b0 fs/seq_file.c:240
[<ffffffff814914d8>] do_loop_readv_writev+0x148/0x1e0 fs/read_write.c:680
[<ffffffff81493311>] do_readv_writev+0x581/0x6f0 fs/read_write.c:810
[<ffffffff814934f8>] vfs_readv+0x78/0xb0 fs/read_write.c:834
[<ffffffff8152f69f>] kernel_readv fs/splice.c:586 [inline]
[<ffffffff8152f69f>] default_file_splice_read+0x50f/0x8f0 fs/splice.c:662
[<ffffffff8152b577>] do_splice_to+0xf7/0x140 fs/splice.c:1154
[<ffffffff8152b802>] splice_direct_to_actor+0x242/0x830 fs/splice.c:1226
[<ffffffff8152bf93>] do_splice_direct+0x1a3/0x270 fs/splice.c:1337
[<ffffffff81494734>] do_sendfile+0x4e4/0xb80 fs/read_write.c:1227
[<ffffffff81496723>] SYSC_sendfile64 fs/read_write.c:1282 [inline]
[<ffffffff81496723>] SyS_sendfile64+0xc3/0x150 fs/read_write.c:1274
[<ffffffff82705a61>] entry_SYSCALL_64_fastpath+0x1e/0x9a


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches

syzbot

Apr 11, 2019, 8:00:39 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 1c57ba4f FROMLIST: ANDROID: binder: Add BINDER_GET_NODE_IN..
git tree: android-4.9
console output: https://syzkaller.appspot.com/x/log.txt?x=17c7b369400000
kernel config: https://syzkaller.appspot.com/x/.config?x=ce644b18d115ba72
dashboard link: https://syzkaller.appspot.com/bug?extid=d5193c8ff411466c6053
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
userspace arch: i386
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1169992a400000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=12bb6511400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+d5193c...@syzkaller.appspotmail.com

audit: type=1400 audit(1537689852.649:8): avc: denied { dac_override }
for pid=5747 comm="syz-executor766" capability=1
scontext=unconfined_u:system_r:insmod_t:s0-s0:c0.c1023
tcontext=unconfined_u:system_r:insmod_t:s0-s0:c0.c1023 tclass=cap_userns
permissive=1

======================================================
[ INFO: possible circular locking dependency detected ]
4.9.128+ #45 Not tainted
-------------------------------------------------------
syz-executor766/5748 is trying to acquire lock:
(&sig->cred_guard_mutex){+.+.+.}, at: [<ffffffff8163e0f4>]
lock_trace+0x44/0xc0 fs/proc/base.c:431
but task is already holding lock:
(&p->lock){+.+.+.}, at: [<ffffffff8155eafd>] seq_read+0xdd/0x12d0
fs/seq_file.c:178
which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

lock_acquire+0x130/0x3e0 kernel/locking/lockdep.c:3756
__mutex_lock_common kernel/locking/mutex.c:521 [inline]
mutex_lock_nested+0xc0/0x870 kernel/locking/mutex.c:621
seq_read+0xdd/0x12d0 fs/seq_file.c:178
proc_reg_read+0xfd/0x180 fs/proc/inode.c:203
do_loop_readv_writev.part.1+0xd5/0x280 fs/read_write.c:718
do_loop_readv_writev fs/read_write.c:707 [inline]
do_readv_writev+0x56e/0x7b0 fs/read_write.c:873
vfs_readv+0x84/0xc0 fs/read_write.c:897
kernel_readv fs/splice.c:363 [inline]
default_file_splice_read+0x44b/0x7e0 fs/splice.c:435
do_splice_to+0x10c/0x170 fs/splice.c:899
do_splice fs/splice.c:1192 [inline]
SYSC_splice fs/splice.c:1416 [inline]
SyS_splice+0x10d2/0x14d0 fs/splice.c:1399
do_syscall_32_irqs_on arch/x86/entry/common.c:325 [inline]
do_fast_syscall_32+0x2f1/0x860 arch/x86/entry/common.c:387
entry_SYSENTER_compat+0x90/0xa2 arch/x86/entry/entry_64_compat.S:137

lock_acquire+0x130/0x3e0 kernel/locking/lockdep.c:3756
__mutex_lock_common kernel/locking/mutex.c:521 [inline]
mutex_lock_nested+0xc0/0x870 kernel/locking/mutex.c:621
__pipe_lock fs/pipe.c:87 [inline]
fifo_open+0x15c/0x9e0 fs/pipe.c:921
do_dentry_open+0x3ef/0xc90 fs/open.c:766
vfs_open+0x11c/0x210 fs/open.c:879
do_last fs/namei.c:3410 [inline]
path_openat+0x542/0x2790 fs/namei.c:3534
do_filp_open+0x197/0x270 fs/namei.c:3568
do_open_execat+0x10f/0x640 fs/exec.c:844
do_execveat_common.isra.15+0x687/0x1f80 fs/exec.c:1723
compat_do_execve fs/exec.c:1856 [inline]
C_SYSC_execve fs/exec.c:1931 [inline]
compat_SyS_execve+0x48/0x60 fs/exec.c:1927
do_syscall_32_irqs_on arch/x86/entry/common.c:325 [inline]
do_fast_syscall_32+0x2f1/0x860 arch/x86/entry/common.c:387
entry_SYSENTER_compat+0x90/0xa2 arch/x86/entry/entry_64_compat.S:137

check_prev_add kernel/locking/lockdep.c:1828 [inline]
check_prevs_add kernel/locking/lockdep.c:1938 [inline]
validate_chain kernel/locking/lockdep.c:2265 [inline]
__lock_acquire+0x3189/0x4a10 kernel/locking/lockdep.c:3345
lock_acquire+0x130/0x3e0 kernel/locking/lockdep.c:3756
__mutex_lock_common kernel/locking/mutex.c:521 [inline]
mutex_lock_killable_nested+0xcc/0x960 kernel/locking/mutex.c:641
lock_trace+0x44/0xc0 fs/proc/base.c:431
proc_pid_stack+0xdc/0x220 fs/proc/base.c:467
proc_single_show+0xfd/0x170 fs/proc/base.c:771
traverse+0x363/0x920 fs/seq_file.c:124
seq_read+0xd1b/0x12d0 fs/seq_file.c:195
do_loop_readv_writev.part.1+0xd5/0x280 fs/read_write.c:718
do_loop_readv_writev fs/read_write.c:707 [inline]
compat_do_readv_writev+0x570/0x7b0 fs/read_write.c:1091
compat_readv+0xe2/0x150 fs/read_write.c:1120
do_compat_preadv64+0x152/0x180 fs/read_write.c:1169
C_SYSC_preadv fs/read_write.c:1189 [inline]
compat_SyS_preadv+0x3b/0x50 fs/read_write.c:1183
do_syscall_32_irqs_on arch/x86/entry/common.c:325 [inline]
do_fast_syscall_32+0x2f1/0x860 arch/x86/entry/common.c:387
entry_SYSENTER_compat+0x90/0xa2 arch/x86/entry/entry_64_compat.S:137

other info that might help us debug this:

Chain exists of:
  &sig->cred_guard_mutex --> &pipe->mutex/1 --> &p->lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&p->lock);
                               lock(&pipe->mutex/1);
                               lock(&p->lock);
  lock(&sig->cred_guard_mutex);

*** DEADLOCK ***

1 lock held by syz-executor766/5748:
#0: (&p->lock){+.+.+.}, at: [<ffffffff8155eafd>] seq_read+0xdd/0x12d0
fs/seq_file.c:178

stack backtrace:
CPU: 0 PID: 5748 Comm: syz-executor766 Not tainted 4.9.128+ #45
ffff8801c5b37468 ffffffff81af2469 ffffffff83aa8440 ffffffff83aa2ad0
ffffffff83aa1180 ffff8801c54fd010 ffff8801c54fc740 ffff8801c5b374b0
ffffffff813e79ed 0000000000000001 00000000c54fcff0 0000000000000001
Call Trace:
[<ffffffff81af2469>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81af2469>] dump_stack+0xc1/0x128 lib/dump_stack.c:51
[<ffffffff813e79ed>] print_circular_bug.cold.36+0x2f7/0x432
kernel/locking/lockdep.c:1202
[<ffffffff81202779>] check_prev_add kernel/locking/lockdep.c:1828 [inline]
[<ffffffff81202779>] check_prevs_add kernel/locking/lockdep.c:1938 [inline]
[<ffffffff81202779>] validate_chain kernel/locking/lockdep.c:2265 [inline]
[<ffffffff81202779>] __lock_acquire+0x3189/0x4a10
kernel/locking/lockdep.c:3345
[<ffffffff81204b10>] lock_acquire+0x130/0x3e0 kernel/locking/lockdep.c:3756
[<ffffffff827837bc>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff827837bc>] mutex_lock_killable_nested+0xcc/0x960
kernel/locking/mutex.c:641
[<ffffffff8163e0f4>] lock_trace+0x44/0xc0 fs/proc/base.c:431
[<ffffffff8163e24c>] proc_pid_stack+0xdc/0x220 fs/proc/base.c:467
[<ffffffff8163683d>] proc_single_show+0xfd/0x170 fs/proc/base.c:771
[<ffffffff8155e0a3>] traverse+0x363/0x920 fs/seq_file.c:124
[<ffffffff8155f73b>] seq_read+0xd1b/0x12d0 fs/seq_file.c:195
[<ffffffff814ea805>] do_loop_readv_writev.part.1+0xd5/0x280
fs/read_write.c:718
[<ffffffff814ed120>] do_loop_readv_writev fs/read_write.c:707 [inline]
[<ffffffff814ed120>] compat_do_readv_writev+0x570/0x7b0
fs/read_write.c:1091
[<ffffffff814ed442>] compat_readv+0xe2/0x150 fs/read_write.c:1120
[<ffffffff814ed7d2>] do_compat_preadv64+0x152/0x180 fs/read_write.c:1169
[<ffffffff814efd5b>] C_SYSC_preadv fs/read_write.c:1189 [inline]
[<ffffffff814efd5b>] compat_SyS_preadv+0x3b/0x50 fs/read_write.c:1183
[<ffffffff81005fd1>] do_syscall_32_irqs_on arch/x86/entry/common.c:325
[inline]
[<ffffffff81005fd1>] do_fast_syscall_32+0x2f1/0x860
arch/x86/entry/common.c:387
[<ffffffff8278f460>] entry_SYSENTER_compat+0x90/0xa2
arch/x86/entry/entry_64_compat.S:137

syzbot

Apr 11, 2019, 8:01:05 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 666c420f FROMLIST: ANDROID: binder: Add BINDER_GET_NODE_IN..
git tree: android-4.14
console output: https://syzkaller.appspot.com/x/log.txt?x=10f61f21400000
kernel config: https://syzkaller.appspot.com/x/.config?x=89d929f317ea847c
dashboard link: https://syzkaller.appspot.com/bug?extid=93cba293055103e38a9f
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=12cc4059400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+93cba2...@syzkaller.appspotmail.com

random: cc1: uninitialized urandom read (8 bytes read)
audit: type=1400 audit(1537664561.371:9): avc: denied { map } for
pid=1832 comm="syz-execprog" path="/root/syzkaller-shm629852769" dev="sda1"
ino=16482 scontext=unconfined_u:system_r:insmod_t:s0-s0:c0.c1023
tcontext=unconfined_u:object_r:file_t:s0 tclass=file permissive=1

======================================================
WARNING: possible circular locking dependency detected
4.14.71+ #8 Not tainted
------------------------------------------------------
syz-executor3/4760 is trying to acquire lock:
(&sig->cred_guard_mutex){+.+.}, at: [<ffffffffabea2c4f>]
lock_trace+0x3f/0xc0 fs/proc/base.c:408

but task is already holding lock:
(&p->lock){+.+.}, at: [<ffffffffabdd06c4>] seq_read+0xd4/0x11d0
fs/seq_file.c:165

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&p->lock){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
seq_read+0xd4/0x11d0 fs/seq_file.c:165
proc_reg_read+0xef/0x170 fs/proc/inode.c:217
do_loop_readv_writev fs/read_write.c:698 [inline]
do_iter_read+0x3cc/0x580 fs/read_write.c:922
vfs_readv+0xe6/0x150 fs/read_write.c:984
kernel_readv fs/splice.c:361 [inline]
default_file_splice_read+0x495/0x860 fs/splice.c:416
do_splice_to+0x102/0x150 fs/splice.c:880
do_splice fs/splice.c:1173 [inline]
SYSC_splice fs/splice.c:1402 [inline]
SyS_splice+0xf4d/0x12a0 fs/splice.c:1382
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #1 (&pipe->mutex/1){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
__pipe_lock fs/pipe.c:88 [inline]
fifo_open+0x156/0x9d0 fs/pipe.c:921
do_dentry_open+0x426/0xda0 fs/open.c:764
vfs_open+0x11c/0x210 fs/open.c:878
do_last fs/namei.c:3408 [inline]
path_openat+0x4eb/0x23a0 fs/namei.c:3550
do_filp_open+0x197/0x270 fs/namei.c:3584
do_open_execat+0x10d/0x5b0 fs/exec.c:849
do_execveat_common.isra.14+0x6cb/0x1d60 fs/exec.c:1740
do_execve fs/exec.c:1847 [inline]
SYSC_execve fs/exec.c:1928 [inline]
SyS_execve+0x34/0x40 fs/exec.c:1923
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #0 (&sig->cred_guard_mutex){+.+.}:
lock_acquire+0x10f/0x380 kernel/locking/lockdep.c:3991
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
lock_trace+0x3f/0xc0 fs/proc/base.c:408
proc_pid_stack+0xcd/0x200 fs/proc/base.c:444
proc_single_show+0xf1/0x160 fs/proc/base.c:748
seq_read+0x4e0/0x11d0 fs/seq_file.c:237
do_loop_readv_writev fs/read_write.c:698 [inline]
do_iter_read+0x3cc/0x580 fs/read_write.c:922
vfs_readv+0xe6/0x150 fs/read_write.c:984
do_preadv+0x187/0x230 fs/read_write.c:1068
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

other info that might help us debug this:

Chain exists of:
&sig->cred_guard_mutex --> &pipe->mutex/1 --> &p->lock

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&p->lock);
                               lock(&pipe->mutex/1);
                               lock(&p->lock);
  lock(&sig->cred_guard_mutex);

*** DEADLOCK ***

1 lock held by syz-executor3/4760:
#0: (&p->lock){+.+.}, at: [<ffffffffabdd06c4>] seq_read+0xd4/0x11d0
fs/seq_file.c:165

stack backtrace:
CPU: 1 PID: 4760 Comm: syz-executor3 Not tainted 4.14.71+ #8
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0xb9/0x11b lib/dump_stack.c:53
print_circular_bug.isra.18.cold.43+0x2d3/0x40c
kernel/locking/lockdep.c:1258
check_prev_add kernel/locking/lockdep.c:1901 [inline]
check_prevs_add kernel/locking/lockdep.c:2018 [inline]
validate_chain kernel/locking/lockdep.c:2460 [inline]
__lock_acquire+0x2ff9/0x4320 kernel/locking/lockdep.c:3487
lock_acquire+0x10f/0x380 kernel/locking/lockdep.c:3991
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
lock_trace+0x3f/0xc0 fs/proc/base.c:408
proc_pid_stack+0xcd/0x200 fs/proc/base.c:444
proc_single_show+0xf1/0x160 fs/proc/base.c:748
seq_read+0x4e0/0x11d0 fs/seq_file.c:237
do_loop_readv_writev fs/read_write.c:698 [inline]
do_iter_read+0x3cc/0x580 fs/read_write.c:922
vfs_readv+0xe6/0x150 fs/read_write.c:984
do_preadv+0x187/0x230 fs/read_write.c:1068
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x457679
RSP: 002b:00007ffbe3684c78 EFLAGS: 00000246 ORIG_RAX: 0000000000000127
RAX: ffffffffffffffda RBX: 00007ffbe36856d4 RCX: 0000000000457679
RDX: 0000000000000001 RSI: 00000000200023c0 RDI: 0000000000000006
RBP: 000000000072bfa0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 00000000004d4878 R14: 00000000004c30ca R15: 0000000000000001