possible deadlock in sel_write_load


syzbot

Apr 14, 2019, 5:28:26 AM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 62872f95 Merge 4.4.174 into android-4.4
git tree: android-4.4
console output: https://syzkaller.appspot.com/x/log.txt?x=13416acf200000
kernel config: https://syzkaller.appspot.com/x/.config?x=47bc4dd423780c4a
dashboard link: https://syzkaller.appspot.com/bug?extid=4c8ca95bbf3da1e3d20e
compiler: gcc (GCC) 9.0.0 20181231 (experimental)

Unfortunately, I don't have any reproducer for this crash yet.

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+4c8ca9...@syzkaller.appspotmail.com

======================================================
[ INFO: possible circular locking dependency detected ]
4.4.174+ #4 Not tainted
-------------------------------------------------------
syz-executor.1/29478 is trying to acquire lock:
(sel_mutex){+.+.+.}, at: [<ffffffff81979d7e>] sel_write_load+0x9e/0xf90
security/selinux/selinuxfs.c:511

but task is already holding lock:
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>] pipe_lock_nested
fs/pipe.c:65 [inline]
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>] pipe_lock+0x63/0x80
fs/pipe.c:73

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8270c191>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270c191>] mutex_lock_nested+0xc1/0xb80
kernel/locking/mutex.c:621
[<ffffffff814b28fd>] __pipe_lock fs/pipe.c:86 [inline]
[<ffffffff814b28fd>] fifo_open+0x15d/0xa00 fs/pipe.c:896
[<ffffffff8149154f>] do_dentry_open+0x38f/0xbd0 fs/open.c:749
[<ffffffff81494d3b>] vfs_open+0x10b/0x210 fs/open.c:862
[<ffffffff814c5ddf>] do_last fs/namei.c:3269 [inline]
[<ffffffff814c5ddf>] path_openat+0x136f/0x4470 fs/namei.c:3406
[<ffffffff814ccab1>] do_filp_open+0x1a1/0x270 fs/namei.c:3440
[<ffffffff814a7c8c>] do_open_execat+0x10c/0x6e0 fs/exec.c:805
[<ffffffff814ad306>] do_execveat_common.isra.0+0x6f6/0x1e90
fs/exec.c:1577
[<ffffffff814af422>] do_execve fs/exec.c:1683 [inline]
[<ffffffff814af422>] SYSC_execve fs/exec.c:1764 [inline]
[<ffffffff814af422>] SyS_execve+0x42/0x50 fs/exec.c:1759
[<ffffffff82718ef5>] return_from_execve+0x0/0x23

[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8270d8a2>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270d8a2>] mutex_lock_killable_nested+0xd2/0xd00
kernel/locking/mutex.c:641
[<ffffffff815eaa24>] lock_trace+0x44/0xc0 fs/proc/base.c:448
[<ffffffff815eb28b>] proc_pid_syscall+0x9b/0x250 fs/proc/base.c:683
[<ffffffff815e2e66>] proc_single_show+0xf6/0x160 fs/proc/base.c:805
[<ffffffff8150805d>] seq_read+0x4cd/0x1240 fs/seq_file.c:240
[<ffffffff81497088>] do_loop_readv_writev+0x148/0x1e0
fs/read_write.c:682
[<ffffffff81498ee3>] do_readv_writev+0x573/0x6e0 fs/read_write.c:812
[<ffffffff814990ca>] vfs_readv+0x7a/0xb0 fs/read_write.c:836
[<ffffffff8153601c>] kernel_readv fs/splice.c:586 [inline]
[<ffffffff8153601c>] default_file_splice_read+0x3ac/0x8b0
fs/splice.c:662
[<ffffffff815321ff>] do_splice_to+0xff/0x160 fs/splice.c:1154
[<ffffffff815324a9>] splice_direct_to_actor+0x249/0x850
fs/splice.c:1226
[<ffffffff81532c55>] do_splice_direct+0x1a5/0x260 fs/splice.c:1337
[<ffffffff8149a2fd>] do_sendfile+0x4ed/0xba0 fs/read_write.c:1229
[<ffffffff8149c317>] SYSC_sendfile64 fs/read_write.c:1290 [inline]
[<ffffffff8149c317>] SyS_sendfile64+0x137/0x150 fs/read_write.c:1276
[<ffffffff82718ba1>] entry_SYSCALL_64_fastpath+0x1e/0x9a

[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8270c191>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270c191>] mutex_lock_nested+0xc1/0xb80
kernel/locking/mutex.c:621
[<ffffffff81507c66>] seq_read+0xd6/0x1240 fs/seq_file.c:178
[<ffffffff815e03bd>] proc_reg_read+0xfd/0x180 fs/proc/inode.c:202
[<ffffffff81497088>] do_loop_readv_writev+0x148/0x1e0
fs/read_write.c:682
[<ffffffff81498ee3>] do_readv_writev+0x573/0x6e0 fs/read_write.c:812
[<ffffffff814990ca>] vfs_readv+0x7a/0xb0 fs/read_write.c:836
[<ffffffff8153601c>] kernel_readv fs/splice.c:586 [inline]
[<ffffffff8153601c>] default_file_splice_read+0x3ac/0x8b0
fs/splice.c:662
[<ffffffff815321ff>] do_splice_to+0xff/0x160 fs/splice.c:1154
[<ffffffff815324a9>] splice_direct_to_actor+0x249/0x850
fs/splice.c:1226
[<ffffffff81532c55>] do_splice_direct+0x1a5/0x260 fs/splice.c:1337
[<ffffffff8149a2fd>] do_sendfile+0x4ed/0xba0 fs/read_write.c:1229
[<ffffffff8149c317>] SYSC_sendfile64 fs/read_write.c:1290 [inline]
[<ffffffff8149c317>] SyS_sendfile64+0x137/0x150 fs/read_write.c:1276
[<ffffffff82718ba1>] entry_SYSCALL_64_fastpath+0x1e/0x9a

[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8149ea2f>] percpu_down_read
include/linux/percpu-rwsem.h:26 [inline]
[<ffffffff8149ea2f>] __sb_start_write+0x1af/0x310 fs/super.c:1239
[<ffffffff816c3af4>] sb_start_write include/linux/fs.h:1517 [inline]
[<ffffffff816c3af4>] ext4_run_li_request fs/ext4/super.c:2685
[inline]
[<ffffffff816c3af4>] ext4_lazyinit_thread fs/ext4/super.c:2784
[inline]
[<ffffffff816c3af4>] ext4_lazyinit_thread+0x1e4/0x7b0
fs/ext4/super.c:2760
[<ffffffff811342c3>] kthread+0x273/0x310 kernel/kthread.c:211
[<ffffffff82718fc5>] ret_from_fork+0x55/0x80
arch/x86/entry/entry_64.S:537

[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8270c191>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270c191>] mutex_lock_nested+0xc1/0xb80
kernel/locking/mutex.c:621
[<ffffffff816cdacd>] ext4_register_li_request+0x2fd/0x7d0
fs/ext4/super.c:2972
[<ffffffff816cf306>] ext4_remount+0x1366/0x1b90 fs/ext4/super.c:4922
[<ffffffff814a1ccb>] do_remount_sb2+0x41b/0x7a0 fs/super.c:781
[<ffffffff815028ab>] do_remount fs/namespace.c:2347 [inline]
[<ffffffff815028ab>] do_mount+0xfdb/0x2a40 fs/namespace.c:2860
[<ffffffff81504d00>] SYSC_mount fs/namespace.c:3063 [inline]
[<ffffffff81504d00>] SyS_mount+0x130/0x1d0 fs/namespace.c:3041
[<ffffffff82718ba1>] entry_SYSCALL_64_fastpath+0x1e/0x9a

[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8270c191>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270c191>] mutex_lock_nested+0xc1/0xb80
kernel/locking/mutex.c:621
[<ffffffff816cd859>] ext4_register_li_request+0x89/0x7d0
fs/ext4/super.c:2945
[<ffffffff816cf306>] ext4_remount+0x1366/0x1b90 fs/ext4/super.c:4922
[<ffffffff814a1ccb>] do_remount_sb2+0x41b/0x7a0 fs/super.c:781
[<ffffffff815028ab>] do_remount fs/namespace.c:2347 [inline]
[<ffffffff815028ab>] do_mount+0xfdb/0x2a40 fs/namespace.c:2860
[<ffffffff81504d00>] SYSC_mount fs/namespace.c:3063 [inline]
[<ffffffff81504d00>] SyS_mount+0x130/0x1d0 fs/namespace.c:3041
[<ffffffff82718ba1>] entry_SYSCALL_64_fastpath+0x1e/0x9a

[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff827139a2>] down_read+0x42/0x60 kernel/locking/rwsem.c:22
[<ffffffff814a1411>] iterate_supers+0xe1/0x250 fs/super.c:547
[<ffffffff819b19f7>] selinux_complete_init+0x2f/0x31
security/selinux/hooks.c:6154
[<ffffffff819a353d>] security_load_policy+0x69d/0x9c0
security/selinux/ss/services.c:2060
[<ffffffff81979e55>] sel_write_load+0x175/0xf90
security/selinux/selinuxfs.c:535
[<ffffffff81496916>] __vfs_write+0x116/0x3d0 fs/read_write.c:491
[<ffffffff81498612>] vfs_write+0x182/0x4e0 fs/read_write.c:540
[<ffffffff8149ac4c>] SYSC_write fs/read_write.c:587 [inline]
[<ffffffff8149ac4c>] SyS_write+0xdc/0x1c0 fs/read_write.c:579
[<ffffffff82718ba1>] entry_SYSCALL_64_fastpath+0x1e/0x9a

[<ffffffff81202d86>] check_prev_add kernel/locking/lockdep.c:1853
[inline]
[<ffffffff81202d86>] check_prevs_add kernel/locking/lockdep.c:1958
[inline]
[<ffffffff81202d86>] validate_chain kernel/locking/lockdep.c:2144
[inline]
[<ffffffff81202d86>] __lock_acquire+0x37d6/0x4f50
kernel/locking/lockdep.c:3213
[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8270c191>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270c191>] mutex_lock_nested+0xc1/0xb80
kernel/locking/mutex.c:621
[<ffffffff81979d7e>] sel_write_load+0x9e/0xf90
security/selinux/selinuxfs.c:511
[<ffffffff81496916>] __vfs_write+0x116/0x3d0 fs/read_write.c:491
[<ffffffff81496ce2>] __kernel_write+0x112/0x370 fs/read_write.c:513
[<ffffffff81532e6d>] write_pipe_buf+0x15d/0x1f0 fs/splice.c:1074
[<ffffffff81533b6e>] splice_from_pipe_feed fs/splice.c:776 [inline]
[<ffffffff81533b6e>] __splice_from_pipe+0x37e/0x7a0 fs/splice.c:901
[<ffffffff81536be8>] splice_from_pipe+0x108/0x170 fs/splice.c:936
[<ffffffff81536cdc>] default_file_splice_write+0x3c/0x80
fs/splice.c:1086
[<ffffffff81537d31>] do_splice_from fs/splice.c:1128 [inline]
[<ffffffff81537d31>] do_splice fs/splice.c:1404 [inline]
[<ffffffff81537d31>] SYSC_splice fs/splice.c:1707 [inline]
[<ffffffff81537d31>] SyS_splice+0xd71/0x13a0 fs/splice.c:1690
[<ffffffff82718ba1>] entry_SYSCALL_64_fastpath+0x1e/0x9a

other info that might help us debug this:

Chain exists of:
  sel_mutex --> &sig->cred_guard_mutex --> &pipe->mutex/1

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pipe->mutex/1);
                               lock(&sig->cred_guard_mutex);
                               lock(&pipe->mutex/1);
  lock(sel_mutex);

*** DEADLOCK ***

2 locks held by syz-executor.1/29478:
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>] file_start_write
include/linux/fs.h:2543 [inline]
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>] do_splice
fs/splice.c:1403 [inline]
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>] SYSC_splice
fs/splice.c:1707 [inline]
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>]
SyS_splice+0xf2d/0x13a0 fs/splice.c:1690
#1: (&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>] pipe_lock_nested
fs/pipe.c:65 [inline]
#1: (&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>]
pipe_lock+0x63/0x80 fs/pipe.c:73

stack backtrace:
CPU: 0 PID: 29478 Comm: syz-executor.1 Not tainted 4.4.174+ #4
0000000000000000 2f83a3ad0523502f ffff8801c05b7530 ffffffff81aad1a1
ffffffff84057a80 ffff8800a020c740 ffffffff83ab8d80 ffffffff83abd2b0
ffffffff83ab6860 ffff8801c05b7580 ffffffff813abcda ffffffff83e39f00
Call Trace:
[<ffffffff81aad1a1>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff81aad1a1>] dump_stack+0xc1/0x120 lib/dump_stack.c:51
[<ffffffff813abcda>] print_circular_bug.cold+0x2f7/0x44e
kernel/locking/lockdep.c:1226
[<ffffffff81202d86>] check_prev_add kernel/locking/lockdep.c:1853 [inline]
[<ffffffff81202d86>] check_prevs_add kernel/locking/lockdep.c:1958 [inline]
[<ffffffff81202d86>] validate_chain kernel/locking/lockdep.c:2144 [inline]
[<ffffffff81202d86>] __lock_acquire+0x37d6/0x4f50
kernel/locking/lockdep.c:3213
[<ffffffff81205f6e>] lock_acquire+0x15e/0x450 kernel/locking/lockdep.c:3592
[<ffffffff8270c191>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270c191>] mutex_lock_nested+0xc1/0xb80
kernel/locking/mutex.c:621
[<ffffffff81979d7e>] sel_write_load+0x9e/0xf90
security/selinux/selinuxfs.c:511
[<ffffffff81496916>] __vfs_write+0x116/0x3d0 fs/read_write.c:491
[<ffffffff81496ce2>] __kernel_write+0x112/0x370 fs/read_write.c:513
[<ffffffff81532e6d>] write_pipe_buf+0x15d/0x1f0 fs/splice.c:1074
[<ffffffff81533b6e>] splice_from_pipe_feed fs/splice.c:776 [inline]
[<ffffffff81533b6e>] __splice_from_pipe+0x37e/0x7a0 fs/splice.c:901
[<ffffffff81536be8>] splice_from_pipe+0x108/0x170 fs/splice.c:936
[<ffffffff81536cdc>] default_file_splice_write+0x3c/0x80 fs/splice.c:1086
[<ffffffff81537d31>] do_splice_from fs/splice.c:1128 [inline]
[<ffffffff81537d31>] do_splice fs/splice.c:1404 [inline]
[<ffffffff81537d31>] SYSC_splice fs/splice.c:1707 [inline]
[<ffffffff81537d31>] SyS_splice+0xd71/0x13a0 fs/splice.c:1690
[<ffffffff82718ba1>] entry_SYSCALL_64_fastpath+0x1e/0x9a
SELinux: policydb magic number 0x37373130 does not match expected magic
number 0xf97cff8c
SELinux: policydb magic number 0x37373130 does not match expected magic
number 0xf97cff8c
SELinux: policydb magic number 0x37373130 does not match expected magic
number 0xf97cff8c
SELinux: policydb magic number 0x37373130 does not match expected magic
number 0xf97cff8c
SELinux: policydb magic number 0x37373130 does not match expected magic
number 0xf97cff8c
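
[Editor's note: the final call stack corresponds to splicing from a pipe into the selinuxfs policy-load file. Below is a minimal sketch of that path, assuming the usual /sys/fs/selinux mount point and a caller permitted to load policy; it is not the actual syz reproducer, since none was available for this report.]

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	int pipefd[2];
	int load_fd;
	char junk[64];

	/* Arbitrary bytes, not a valid policy image. */
	memset(junk, '0', sizeof(junk));

	if (pipe(pipefd))
		return 1;

	/* sel_write_load() is the write handler for this node. */
	load_fd = open("/sys/fs/selinux/load", O_WRONLY);
	if (load_fd < 0)
		return 1;

	if (write(pipefd[1], junk, sizeof(junk)) < 0)
		return 1;

	/*
	 * splice() holds pipe->mutex (splice_from_pipe) when
	 * write_pipe_buf() -> __kernel_write() reaches sel_write_load(),
	 * which then takes sel_mutex: the &pipe->mutex/1 -> sel_mutex
	 * edge lockdep reports above.
	 */
	splice(pipefd[0], NULL, load_fd, NULL, sizeof(junk), 0);
	return 0;
}

[The junk bytes would also explain the repeated policydb messages: sel_write_load() hands the spliced pipe contents to security_load_policy(), which rejects the bogus magic number, but only after sel_mutex has already nested inside pipe->mutex.]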


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

syzbot

Aug 16, 2019, 4:13:07 PM
to syzkaller-a...@googlegroups.com
syzbot has found a reproducer for the following crash on:

HEAD commit: 62872f95 Merge 4.4.174 into android-4.4
git tree: android-4.4
console output: https://syzkaller.appspot.com/x/log.txt?x=17623c96600000
kernel config: https://syzkaller.appspot.com/x/.config?x=47bc4dd423780c4a
dashboard link: https://syzkaller.appspot.com/bug?extid=4c8ca95bbf3da1e3d20e
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
userspace arch: i386
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=11dcb1e2600000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+4c8ca9...@syzkaller.appspotmail.com

======================================================
[ INFO: possible circular locking dependency detected ]
4.4.174+ #17 Not tainted
-------------------------------------------------------
syz-executor.1/3030 is trying to acquire lock:
(sel_mutex){+.+.+.}, at: [<ffffffff81979d7e>] sel_write_load+0x9e/0xf90
security/selinux/selinuxfs.c:511

but task is already holding lock:
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>] pipe_lock_nested
fs/pipe.c:65 [inline]
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>] pipe_lock+0x63/0x80
fs/pipe.c:73

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8270c191>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270c191>] mutex_lock_nested+0xc1/0xb80
kernel/locking/mutex.c:621
[<ffffffff814af8b3>] pipe_lock_nested fs/pipe.c:65 [inline]
[<ffffffff814af8b3>] pipe_lock+0x63/0x80 fs/pipe.c:73
[<ffffffff815342e9>] iter_file_splice_write+0x179/0xb30
fs/splice.c:974
[<ffffffff81537d31>] do_splice_from fs/splice.c:1128 [inline]
[<ffffffff81537d31>] do_splice fs/splice.c:1404 [inline]
[<ffffffff81537d31>] SYSC_splice fs/splice.c:1707 [inline]
[<ffffffff81537d31>] SyS_splice+0xd71/0x13a0 fs/splice.c:1690
[<ffffffff8100603d>] do_syscall_32_irqs_on
arch/x86/entry/common.c:330 [inline]
[<ffffffff8100603d>] do_fast_syscall_32+0x32d/0xa90
arch/x86/entry/common.c:397
[<ffffffff8271a350>] sysenter_flags_fixed+0xd/0x1a
SELinux: policydb magic number 0x30307830 does not match expected magic
number 0xf97cff8c
[<ffffffff8100603d>] do_syscall_32_irqs_on
arch/x86/entry/common.c:330 [inline]
[<ffffffff8100603d>] do_fast_syscall_32+0x32d/0xa90
arch/x86/entry/common.c:397
[<ffffffff8271a350>] sysenter_flags_fixed+0xd/0x1a

other info that might help us debug this:

Chain exists of:
  sel_mutex --> sb_writers#4 --> &pipe->mutex/1

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pipe->mutex/1);
                               lock(sb_writers#4);
                               lock(&pipe->mutex/1);
  lock(sel_mutex);

*** DEADLOCK ***

2 locks held by syz-executor.1/3030:
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>] file_start_write
include/linux/fs.h:2543 [inline]
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>] do_splice
fs/splice.c:1403 [inline]
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>] SYSC_splice
fs/splice.c:1707 [inline]
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>]
SyS_splice+0xf2d/0x13a0 fs/splice.c:1690
#1: (&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>] pipe_lock_nested
fs/pipe.c:65 [inline]
#1: (&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>]
pipe_lock+0x63/0x80 fs/pipe.c:73

stack backtrace:
CPU: 0 PID: 3030 Comm: syz-executor.1 Not tainted 4.4.174+ #17
0000000000000000 1d6736d521569112 ffff8801d8c574b0 ffffffff81aad1a1
ffffffff84057a80 ffff8801d8c5c740 ffffffff83ab8a20 ffffffff83abd2b0
ffffffff83abc380 ffff8801d8c57500 ffffffff813abcda ffffffff83e24300
[<ffffffff8100603d>] do_syscall_32_irqs_on arch/x86/entry/common.c:330
[inline]
[<ffffffff8100603d>] do_fast_syscall_32+0x32d/0xa90
arch/x86/entry/common.c:397
[<ffffffff8271a350>] sysenter_flags_fixed+0xd/0x1a
SELinux: policydb magic number 0x30307830 does not match expected magic
number 0xf97cff8c
SELinux: policydb magic number 0x30307830 does not match expected magic
number 0xf97cff8c
SELinux: policydb magic number 0x30307830 does not match expected magic
number 0xf97cff8c
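
[Editor's note: the surviving chain segment in this report supplies the reverse edge: when the splice target is an ordinary file rather than selinuxfs, do_splice() takes sb_writers via file_start_write() before iter_file_splice_write() takes pipe->mutex. A minimal sketch of that ordering, using a hypothetical /tmp/out path not taken from the report:]

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int pipefd[2];
	int out_fd;

	if (pipe(pipefd))
		return 1;

	/* Any writable file on ext4 (an iter_file_splice_write user). */
	out_fd = open("/tmp/out", O_WRONLY | O_CREAT | O_TRUNC, 0600);
	if (out_fd < 0)
		return 1;

	if (write(pipefd[1], "x", 1) < 0)
		return 1;

	/*
	 * do_splice() calls file_start_write() (sb_writers) before
	 * iter_file_splice_write() takes pipe->mutex, recording the
	 * sb_writers#4 -> &pipe->mutex/1 edge shown above.
	 */
	splice(pipefd[0], NULL, out_fd, NULL, 1, 0);
	return 0;
}

[Combined with the &pipe->mutex/1 -> sel_mutex edge from sel_write_load() and the sel_mutex -> ... -> sb_writers chain through iterate_supers() and the ext4 lazyinit thread in the first report, this closes the circular dependency.]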

syzbot

Aug 16, 2019, 5:41:07 PM
to syzkaller-a...@googlegroups.com
syzbot has found a reproducer for the following crash on:

HEAD commit: 62872f95 Merge 4.4.174 into android-4.4
git tree: android-4.4
console output: https://syzkaller.appspot.com/x/log.txt?x=17d0b25a600000
kernel config: https://syzkaller.appspot.com/x/.config?x=47bc4dd423780c4a
dashboard link: https://syzkaller.appspot.com/bug?extid=4c8ca95bbf3da1e3d20e
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15ba11e2600000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=16e2c25a600000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+4c8ca9...@syzkaller.appspotmail.com

======================================================
[ INFO: possible circular locking dependency detected ]
4.4.174+ #4 Not tainted
-------------------------------------------------------
syz-executor150/5022 is trying to acquire lock:
(sel_mutex){+.+.+.}, at: [<ffffffff81979d7e>] sel_write_load+0x9e/0xf90
security/selinux/selinuxfs.c:511

but task is already holding lock:
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>] pipe_lock_nested
fs/pipe.c:65 [inline]
(&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>] pipe_lock+0x63/0x80
fs/pipe.c:73

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

[<ffffffff81205f6e>] lock_acquire+0x15e/0x450
kernel/locking/lockdep.c:3592
[<ffffffff8270c191>] __mutex_lock_common kernel/locking/mutex.c:521
[inline]
[<ffffffff8270c191>] mutex_lock_nested+0xc1/0xb80
kernel/locking/mutex.c:621
[<ffffffff814af8b3>] pipe_lock_nested fs/pipe.c:65 [inline]
[<ffffffff814af8b3>] pipe_lock+0x63/0x80 fs/pipe.c:73
[<ffffffff815342e9>] iter_file_splice_write+0x179/0xb30
fs/splice.c:974
 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pipe->mutex/1);
                               lock(sb_writers#4);
                               lock(&pipe->mutex/1);
  lock(sel_mutex);

*** DEADLOCK ***

2 locks held by syz-executor150/5022:
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>] file_start_write
include/linux/fs.h:2543 [inline]
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>] do_splice
fs/splice.c:1403 [inline]
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>] SYSC_splice
fs/splice.c:1707 [inline]
#0: (sb_writers#3){.+.+.+}, at: [<ffffffff81537eed>]
SyS_splice+0xf2d/0x13a0 fs/splice.c:1690
#1: (&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>] pipe_lock_nested
fs/pipe.c:65 [inline]
#1: (&pipe->mutex/1){+.+.+.}, at: [<ffffffff814af8b3>]
pipe_lock+0x63/0x80 fs/pipe.c:73

stack backtrace:
CPU: 0 PID: 5022 Comm: syz-executor150 Not tainted 4.4.174+ #4
0000000000000000 68e9cd1d25b51a27 ffff8800b8307530 ffffffff81aad1a1
ffffffff84057a80 ffff8800b0c997c0 ffffffff83ab8a20 ffffffff83abd610
ffffffff83abc380 ffff8800b8307580 ffffffff813abcda ffffffff83e26380
SELinux: policydb magic number 0x30307830 does not match expected magic
number 0xf97cff8c
