possible deadlock in lru_add_drain_all

syzbot

Apr 10, 2019, 8:00:23 PM
to syzkaller-a...@googlegroups.com
Hello,

syzbot found the following crash on:

HEAD commit: e6fa8a20 Merge remote-tracking branch 'origin/upstream-f2f..
git tree: android-4.14
console output: https://syzkaller.appspot.com/x/log.txt?x=175ac3c6400000
kernel config: https://syzkaller.appspot.com/x/.config?x=9b3b342f97278cde
dashboard link: https://syzkaller.appspot.com/bug?extid=6d1c2cc3e56d9909c9dc
compiler: gcc (GCC) 8.0.1 20180413 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=14a0e891400000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=103c13c6400000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+6d1c2cc3e56d9909c9dc@syzkaller.appspotmail.com


======================================================
WARNING: possible circular locking dependency detected
4.14.73+ #13 Not tainted
------------------------------------------------------
syz-executor428/12635 is trying to acquire lock:
(cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffffa96601fa>] get_online_cpus include/linux/cpu.h:138 [inline]
(cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffffa96601fa>] lru_add_drain_all+0xa/0x20 mm/swap.c:729

but task is already holding lock:
(&sb->s_type->i_mutex_key#10){+.+.}, at: [<ffffffffa9680e42>] inode_lock include/linux/fs.h:713 [inline]
(&sb->s_type->i_mutex_key#10){+.+.}, at: [<ffffffffa9680e42>] shmem_add_seals+0x132/0x1230 mm/shmem.c:2779

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #5 (&sb->s_type->i_mutex_key#10){+.+.}:
down_write+0x34/0x90 kernel/locking/rwsem.c:54
inode_lock include/linux/fs.h:713 [inline]
shmem_fallocate+0x149/0xb20 mm/shmem.c:2852
ashmem_shrink_scan+0x1b6/0x4e0 drivers/staging/android/ashmem.c:447
ashmem_ioctl+0x2cc/0xe20 drivers/staging/android/ashmem.c:789
vfs_ioctl fs/ioctl.c:46 [inline]
file_ioctl fs/ioctl.c:500 [inline]
do_vfs_ioctl+0x1a0/0x1030 fs/ioctl.c:684
SYSC_ioctl fs/ioctl.c:701 [inline]
SyS_ioctl+0x7e/0xb0 fs/ioctl.c:692
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #4 (ashmem_mutex){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
ashmem_mmap+0x4c/0x3b0 drivers/staging/android/ashmem.c:369
call_mmap include/linux/fs.h:1787 [inline]
mmap_region+0x836/0xfb0 mm/mmap.c:1731
do_mmap+0x551/0xb80 mm/mmap.c:1509
do_mmap_pgoff include/linux/mm.h:2167 [inline]
vm_mmap_pgoff+0x180/0x1d0 mm/util.c:333
SYSC_mmap_pgoff mm/mmap.c:1559 [inline]
SyS_mmap_pgoff+0xf8/0x1a0 mm/mmap.c:1517
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #3 (&mm->mmap_sem){++++}:
__might_fault+0x137/0x1b0 mm/memory.c:4529
_copy_from_user+0x27/0x100 lib/usercopy.c:10
copy_from_user include/linux/uaccess.h:147 [inline]
perf_event_period kernel/events/core.c:4747 [inline]
_perf_ioctl kernel/events/core.c:4802 [inline]
perf_ioctl+0x6ef/0x1bb0 kernel/events/core.c:4869
vfs_ioctl fs/ioctl.c:46 [inline]
file_ioctl fs/ioctl.c:500 [inline]
do_vfs_ioctl+0x1a0/0x1030 fs/ioctl.c:684
SYSC_ioctl fs/ioctl.c:701 [inline]
SyS_ioctl+0x7e/0xb0 fs/ioctl.c:692
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #2 (&cpuctx_mutex){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
perf_event_init_cpu+0xab/0x150 kernel/events/core.c:11210
perf_event_init+0x295/0x2d4 kernel/events/core.c:11257
start_kernel+0x441/0x739 init/main.c:621
secondary_startup_64+0xa5/0xb0 arch/x86/kernel/head_64.S:239

-> #1 (pmus_lock){+.+.}:
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0xf5/0x1480 kernel/locking/mutex.c:893
perf_event_init_cpu+0x2c/0x150 kernel/events/core.c:11204
cpuhp_invoke_callback+0x1b5/0x1960 kernel/cpu.c:183
cpuhp_up_callbacks kernel/cpu.c:567 [inline]
_cpu_up+0x22c/0x520 kernel/cpu.c:1126
do_cpu_up+0x13f/0x180 kernel/cpu.c:1160
smp_init+0x137/0x14b kernel/smp.c:578
kernel_init_freeable+0x186/0x39f init/main.c:1068
kernel_init+0xc/0x157 init/main.c:1000
ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:402

-> #0 (cpu_hotplug_lock.rw_sem){++++}:
lock_acquire+0x10f/0x380 kernel/locking/lockdep.c:3991
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
cpus_read_lock+0x39/0xb0 kernel/cpu.c:294
get_online_cpus include/linux/cpu.h:138 [inline]
lru_add_drain_all+0xa/0x20 mm/swap.c:729
shmem_wait_for_pins mm/shmem.c:2683 [inline]
shmem_add_seals+0x4db/0x1230 mm/shmem.c:2791
shmem_fcntl+0xea/0x120 mm/shmem.c:2826
do_fcntl+0x966/0xea0 fs/fcntl.c:421
SYSC_fcntl fs/fcntl.c:463 [inline]
SyS_fcntl+0xc7/0x100 fs/fcntl.c:448
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7

other info that might help us debug this:

Chain exists of:
cpu_hotplug_lock.rw_sem --> ashmem_mutex --> &sb->s_type->i_mutex_key#10

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sb->s_type->i_mutex_key#10);
                               lock(ashmem_mutex);
                               lock(&sb->s_type->i_mutex_key#10);
  lock(cpu_hotplug_lock.rw_sem);

*** DEADLOCK ***
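The scenario above is a plain ABBA inversion once the perf/mmap legs of the chain are collapsed: CPU0 holds the inode lock and wants cpu_hotplug_lock, while CPU1 already sits behind cpu_hotplug_lock (via the #1..#4 edges into ashmem_mutex) and wants the inode lock. A minimal userspace model of the same pattern, using two pthread mutexes as stand-ins (purely illustrative; the names are mine and none of the kernel locks are involved):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t inode = PTHREAD_MUTEX_INITIALIZER;   /* i_mutex_key#10 */
static pthread_mutex_t hotplug = PTHREAD_MUTEX_INITIALIZER; /* cpu_hotplug_lock side */

static void *cpu0(void *arg)
{
        pthread_mutex_lock(&inode);   /* shmem_add_seals(): inode_lock() */
        sleep(1);                     /* widen the race window */
        pthread_mutex_lock(&hotplug); /* lru_add_drain_all(): blocks here */
        pthread_mutex_unlock(&hotplug);
        pthread_mutex_unlock(&inode);
        return NULL;
}

static void *cpu1(void *arg)
{
        pthread_mutex_lock(&hotplug); /* the cpu_hotplug_lock -> ashmem_mutex prefix */
        sleep(1);
        pthread_mutex_lock(&inode);   /* shmem_fallocate(): blocks here */
        pthread_mutex_unlock(&inode);
        pthread_mutex_unlock(&hotplug);
        return NULL;
}

int main(void)
{
        pthread_t a, b;
        pthread_create(&a, NULL, cpu0, NULL);
        pthread_create(&b, NULL, cpu1, NULL);
        pthread_join(a, NULL); /* never returns: each thread waits on the other's lock */
        pthread_join(b, NULL);
        puts("unreachable");
        return 0;
}

Build with gcc -pthread: both threads block on their second lock and the joins hang, which is exactly the hang lockdep is predicting for the kernel locks.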

1 lock held by syz-executor428/12635:
#0: (&sb->s_type->i_mutex_key#10){+.+.}, at: [<ffffffffa9680e42>] inode_lock include/linux/fs.h:713 [inline]
#0: (&sb->s_type->i_mutex_key#10){+.+.}, at: [<ffffffffa9680e42>] shmem_add_seals+0x132/0x1230 mm/shmem.c:2779

stack backtrace:
CPU: 0 PID: 12635 Comm: syz-executor428 Not tainted 4.14.73+ #13
Call Trace:
__dump_stack lib/dump_stack.c:17 [inline]
dump_stack+0xb9/0x11b lib/dump_stack.c:53
print_circular_bug.isra.18.cold.43+0x2d3/0x40c kernel/locking/lockdep.c:1258
check_prev_add kernel/locking/lockdep.c:1901 [inline]
check_prevs_add kernel/locking/lockdep.c:2018 [inline]
validate_chain kernel/locking/lockdep.c:2460 [inline]
__lock_acquire+0x2ff9/0x4320 kernel/locking/lockdep.c:3487
lock_acquire+0x10f/0x380 kernel/locking/lockdep.c:3991
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
cpus_read_lock+0x39/0xb0 kernel/cpu.c:294
get_online_cpus include/linux/cpu.h:138 [inline]
lru_add_drain_all+0xa/0x20 mm/swap.c:729
shmem_wait_for_pins mm/shmem.c:2683 [inline]
shmem_add_seals+0x4db/0x1230 mm/shmem.c:2791
shmem_fcntl+0xea/0x120 mm/shmem.c:2826
do_fcntl+0x966/0xea0 fs/fcntl.c:421
SYSC_fcntl fs/fcntl.c:463 [inline]
SyS_fcntl+0xc7/0x100 fs/fcntl.c:448
do_syscall_64+0x19b/0x4b0 arch/x86/entry/common.c:289
entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x446679
RSP: 002b:00007f8b0bfadda8 EFLAGS: 00000246 ORIG_RAX: 0000000000000048
RAX: ffffffffffffffda RBX: 00000000006dbc48 RCX: 0000000000446679
RDX: 0000000000000008 RSI: 0000000000000409 RDI: 0000000000000005
RBP: 00000000006dbc40 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006dbc4c
R13: 3ed592df9946286b R14: dfdd4f11168a8b2b R15: 0000000000000000
random: crng init done
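
For reference, the register dump above decodes to the fcntl() that opens frame #0: ORIG_RAX 0x48 is __NR_fcntl on x86_64, RDI 5 is the target fd, RSI 0x409 is F_ADD_SEALS (1033), and RDX 8 is F_SEAL_WRITE. A minimal sketch of that call against a sealing-capable shmem file follows; memfd_create() is my stand-in for however the reproducer obtained fd 5, and this fragment exercises only the CPU0 side of the scenario, so on its own it should not deadlock:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        /* A memfd is shmem-backed and accepts sealing. */
        int fd = memfd_create("seal-test", MFD_ALLOW_SEALING);
        if (fd < 0)
                return 1;
        /* shmem_add_seals(): takes inode_lock(), then for F_SEAL_WRITE
         * runs shmem_wait_for_pins() -> lru_add_drain_all()
         * -> get_online_cpus(), per the #0 trace above. */
        if (fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE) < 0)
                return 1;
        close(fd);
        return 0;
}

The full reproducers linked at the top additionally drive ashmem mmap/ioctl traffic in parallel, which supplies the CPU1 side of the inversion.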


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches