possible deadlock in pipe_lock (2)


syzbot

Dec 26, 2019, 2:17:10 AM
to syzkaller...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: 672481c2 Linux 4.19.91
git tree: linux-4.19.y
console output: https://syzkaller.appspot.com/x/log.txt?x=140d0ac1e00000
kernel config: https://syzkaller.appspot.com/x/.config?x=445a712574d168a6
dashboard link: https://syzkaller.appspot.com/bug?extid=53f68fd7f9d60fc0f4b3
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=10cc3b99e00000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=147f6cc6e00000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+53f68f...@syzkaller.appspotmail.com

IPv6: ADDRCONF(NETDEV_UP): veth0_to_hsr: link is not ready
IPv6: ADDRCONF(NETDEV_CHANGE): veth0_to_hsr: link becomes ready
IPv6: ADDRCONF(NETDEV_CHANGE): hsr_slave_0: link becomes ready
======================================================
WARNING: possible circular locking dependency detected
4.19.91-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor541/7877 is trying to acquire lock:
00000000764e9f11 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:62 [inline]
00000000764e9f11 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x6e/0x80 fs/pipe.c:70

but task is already holding lock:
00000000ac26da27 (sb_writers#4){.+.+}, at: file_start_write include/linux/fs.h:2775 [inline]
00000000ac26da27 (sb_writers#4){.+.+}, at: do_splice+0xd44/0x1340 fs/splice.c:1153
overlayfs: failed to resolve './file0': -2

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (sb_writers#4){.+.+}:
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
__sb_start_write+0x208/0x360 fs/super.c:1387
file_start_write include/linux/fs.h:2775 [inline]
ovl_write_iter+0x91b/0xc20 fs/overlayfs/file.c:280
call_write_iter include/linux/fs.h:1820 [inline]
new_sync_write fs/read_write.c:474 [inline]
__vfs_write+0x587/0x810 fs/read_write.c:487
__kernel_write+0x110/0x390 fs/read_write.c:506
write_pipe_buf+0x15d/0x1f0 fs/splice.c:798
splice_from_pipe_feed fs/splice.c:503 [inline]
__splice_from_pipe+0x391/0x7d0 fs/splice.c:627
splice_from_pipe+0x108/0x170 fs/splice.c:662
default_file_splice_write+0x3c/0x90 fs/splice.c:810
do_splice_from fs/splice.c:852 [inline]
do_splice+0x642/0x1340 fs/splice.c:1154
__do_sys_splice fs/splice.c:1428 [inline]
__se_sys_splice fs/splice.c:1408 [inline]
__x64_sys_splice+0x2c6/0x330 fs/splice.c:1408
do_syscall_64+0xfd/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #1 (&ovl_i_mutex_key[depth]){+.+.}:
down_write+0x38/0x90 kernel/locking/rwsem.c:70
inode_lock include/linux/fs.h:747 [inline]
ovl_write_iter+0x148/0xc20 fs/overlayfs/file.c:268
call_write_iter include/linux/fs.h:1820 [inline]
new_sync_write fs/read_write.c:474 [inline]
__vfs_write+0x587/0x810 fs/read_write.c:487
__kernel_write+0x110/0x390 fs/read_write.c:506
write_pipe_buf+0x15d/0x1f0 fs/splice.c:798
splice_from_pipe_feed fs/splice.c:503 [inline]
__splice_from_pipe+0x391/0x7d0 fs/splice.c:627
splice_from_pipe+0x108/0x170 fs/splice.c:662
default_file_splice_write+0x3c/0x90 fs/splice.c:810
do_splice_from fs/splice.c:852 [inline]
do_splice+0x642/0x1340 fs/splice.c:1154
__do_sys_splice fs/splice.c:1428 [inline]
__se_sys_splice fs/splice.c:1408 [inline]
__x64_sys_splice+0x2c6/0x330 fs/splice.c:1408
do_syscall_64+0xfd/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&pipe->mutex/1){+.+.}:
lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:3903
__mutex_lock_common kernel/locking/mutex.c:925 [inline]
__mutex_lock+0xf7/0x1300 kernel/locking/mutex.c:1072
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
pipe_lock_nested fs/pipe.c:62 [inline]
pipe_lock+0x6e/0x80 fs/pipe.c:70
iter_file_splice_write+0x18b/0xbd0 fs/splice.c:700
do_splice_from fs/splice.c:852 [inline]
do_splice+0x642/0x1340 fs/splice.c:1154
__do_sys_splice fs/splice.c:1428 [inline]
__se_sys_splice fs/splice.c:1408 [inline]
__x64_sys_splice+0x2c6/0x330 fs/splice.c:1408
do_syscall_64+0xfd/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
&pipe->mutex/1 --> &ovl_i_mutex_key[depth] --> sb_writers#4

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(sb_writers#4);
                               lock(&ovl_i_mutex_key[depth]);
                               lock(sb_writers#4);
  lock(&pipe->mutex/1);

*** DEADLOCK ***

1 lock held by syz-executor541/7877:
#0: 00000000ac26da27 (sb_writers#4){.+.+}, at: file_start_write include/linux/fs.h:2775 [inline]
#0: 00000000ac26da27 (sb_writers#4){.+.+}, at: do_splice+0xd44/0x1340 fs/splice.c:1153

stack backtrace:
CPU: 1 PID: 7877 Comm: syz-executor541 Not tainted 4.19.91-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x197/0x210 lib/dump_stack.c:118
print_circular_bug.isra.0.cold+0x1cc/0x28f kernel/locking/lockdep.c:1221
check_prev_add kernel/locking/lockdep.c:1861 [inline]
check_prevs_add kernel/locking/lockdep.c:1974 [inline]
validate_chain kernel/locking/lockdep.c:2415 [inline]
__lock_acquire+0x2e19/0x49c0 kernel/locking/lockdep.c:3411
lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:3903
__mutex_lock_common kernel/locking/mutex.c:925 [inline]
__mutex_lock+0xf7/0x1300 kernel/locking/mutex.c:1072
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1087
pipe_lock_nested fs/pipe.c:62 [inline]
pipe_lock+0x6e/0x80 fs/pipe.c:70
iter_file_splice_write+0x18b/0xbd0 fs/splice.c:700
do_splice_from fs/splice.c:852 [inline]
do_splice+0x642/0x1340 fs/splice.c:1154
__do_sys_splice fs/splice.c:1428 [inline]
__se_sys_splice fs/splice.c:1408 [inline]
__x64_sys_splice+0x2c6/0x330 fs/splice.c:1408
do_syscall_64+0xfd/0x620 arch/x86/entry/common.c:293
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x4492b9
Code: e8 7c e6 ff ff 48 83 c4 18 c3 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 1b 05 fc ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fde6711dcd8 EFLAGS: 00000246 ORIG_RAX: 0000000000000113
RAX: ffffffffffffffda RBX: 00000000006dfc58 RCX: 00000000004492b9
RDX: 0000000000000004 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 00000000006dfc50 R08: 0000000100000002 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006dfc5c
R13: 00007ffef745f05f R14: 00007fde6711e9c0 R15: 20c49ba5e353f7cf
IPv6: ADDRCONF(NETDEV_UP): veth1_to_hsr: link is not ready
hsr0: Slave B (hsr_slave_1) is not up; please bring it up to get a fully working HSR network
IPv6: ADDRCONF(NETDEV_UP): hsr0: link is not ready
IPv6: ADDRCONF(NETDEV_CHANGE): hsr0: link becomes ready
IPv6: ADDRCONF(NETDEV_CHANGE): veth1_to_hsr: link becomes ready
IPv6: ADDRCONF(NETDEV_CHANGE): hsr_slave_1: link becomes ready
IPv6: ADDRCONF(NETDEV_UP): vxcan0: link is not ready
IPv6: ADDRCONF(NETDEV_UP): vxcan1: link is not ready
IPv6: ADDRCONF(NETDEV_CHANGE): vxcan1: link becomes ready
IPv6: ADDRCONF(NETDEV_CHANGE): vxcan0: link becomes ready
8021q: adding VLAN 0 to HW filter on device batadv0
kobject: 'vlan0' (00000000f893f910): kobject_add_internal: parent: 'mesh', set: '<NULL>'


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzk...@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches