possible deadlock in lru_add_drain_all


syzbot

Oct 27, 2017, 5:22:44 AM
to ak...@linux-foundation.org, dan.j.w...@intel.com, han...@cmpxchg.org, ja...@suse.cz, jgl...@redhat.com, linux-...@vger.kernel.org, linu...@kvack.org, mho...@suse.com, sh...@fb.com, syzkall...@googlegroups.com, tg...@linutronix.de, vba...@suse.cz, ying....@intel.com
Hello,

syzkaller hit the following crash on
a31cc455c512f3f1dd5f79cac8e29a7c8a617af8
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/master
compiler: gcc (GCC) 7.1.1 20170620
.config is attached
Raw console output is attached.





======================================================
WARNING: possible circular locking dependency detected
4.13.0-next-20170911+ #19 Not tainted
------------------------------------------------------
syz-executor5/6914 is trying to acquire lock:
(cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff818c1b3e>] get_online_cpus include/linux/cpu.h:126 [inline]
(cpu_hotplug_lock.rw_sem){++++}, at: [<ffffffff818c1b3e>] lru_add_drain_all+0xe/0x20 mm/swap.c:729

but task is already holding lock:
(&sb->s_type->i_mutex_key#9){++++}, at: [<ffffffff818fbef7>] inode_lock include/linux/fs.h:712 [inline]
(&sb->s_type->i_mutex_key#9){++++}, at: [<ffffffff818fbef7>] shmem_add_seals+0x197/0x1060 mm/shmem.c:2768

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #6 (&sb->s_type->i_mutex_key#9){++++}:
check_prevs_add kernel/locking/lockdep.c:2020 [inline]
validate_chain kernel/locking/lockdep.c:2469 [inline]
__lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
down_write+0x87/0x120 kernel/locking/rwsem.c:53
inode_lock include/linux/fs.h:712 [inline]
generic_file_write_iter+0xdc/0x7a0 mm/filemap.c:3141
call_write_iter include/linux/fs.h:1770 [inline]
do_iter_readv_writev+0x531/0x7f0 fs/read_write.c:673
do_iter_write+0x15a/0x540 fs/read_write.c:952
vfs_iter_write+0x77/0xb0 fs/read_write.c:965
iter_file_splice_write+0x7e9/0xf50 fs/splice.c:749
do_splice_from fs/splice.c:851 [inline]
do_splice fs/splice.c:1147 [inline]
SYSC_splice fs/splice.c:1402 [inline]
SyS_splice+0x7d5/0x1630 fs/splice.c:1382
entry_SYSCALL_64_fastpath+0x1f/0xbe

-> #5 (&pipe->mutex/1){+.+.}:
check_prevs_add kernel/locking/lockdep.c:2020 [inline]
validate_chain kernel/locking/lockdep.c:2469 [inline]
__lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0x16f/0x1870 kernel/locking/mutex.c:893
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
pipe_lock_nested fs/pipe.c:66 [inline]
pipe_lock+0x56/0x70 fs/pipe.c:74
iter_file_splice_write+0x264/0xf50 fs/splice.c:699
do_splice_from fs/splice.c:851 [inline]
do_splice fs/splice.c:1147 [inline]
SYSC_splice fs/splice.c:1402 [inline]
SyS_splice+0x7d5/0x1630 fs/splice.c:1382
entry_SYSCALL_64_fastpath+0x1f/0xbe

-> #4 (sb_writers){.+.+}:
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:35 [inline]
percpu_down_read include/linux/percpu-rwsem.h:58 [inline]
__sb_start_write+0x18f/0x290 fs/super.c:1341
sb_start_write include/linux/fs.h:1541 [inline]
mnt_want_write+0x3f/0xb0 fs/namespace.c:387
filename_create+0x12b/0x520 fs/namei.c:3628
kern_path_create+0x33/0x40 fs/namei.c:3674
handle_create+0xc0/0x760 drivers/base/devtmpfs.c:203

-> #3 ((complete)&req.done){+.+.}:
check_prevs_add kernel/locking/lockdep.c:2020 [inline]
validate_chain kernel/locking/lockdep.c:2469 [inline]
__lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
complete_acquire include/linux/completion.h:39 [inline]
__wait_for_common kernel/sched/completion.c:108 [inline]
wait_for_common kernel/sched/completion.c:122 [inline]
wait_for_completion+0xc8/0x770 kernel/sched/completion.c:143
devtmpfs_create_node+0x32b/0x4a0 drivers/base/devtmpfs.c:115
device_add+0x120f/0x1640 drivers/base/core.c:1824
device_create_groups_vargs+0x1f3/0x250 drivers/base/core.c:2430
device_create_vargs drivers/base/core.c:2470 [inline]
device_create+0xda/0x110 drivers/base/core.c:2506
msr_device_create+0x26/0x40 arch/x86/kernel/msr.c:188
cpuhp_invoke_callback+0x256/0x14d0 kernel/cpu.c:145
cpuhp_thread_fun+0x265/0x520 kernel/cpu.c:434
smpboot_thread_fn+0x489/0x850 kernel/smpboot.c:164
kthread+0x39c/0x470 kernel/kthread.c:231
ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431

-> #2 (cpuhp_state){+.+.}:
check_prevs_add kernel/locking/lockdep.c:2020 [inline]
validate_chain kernel/locking/lockdep.c:2469 [inline]
__lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
cpuhp_invoke_ap_callback kernel/cpu.c:467 [inline]
cpuhp_issue_call+0x1a2/0x3e0 kernel/cpu.c:1308
__cpuhp_setup_state_cpuslocked+0x2d6/0x5f0 kernel/cpu.c:1455
__cpuhp_setup_state+0xb0/0x140 kernel/cpu.c:1484
cpuhp_setup_state include/linux/cpuhotplug.h:177 [inline]
page_writeback_init+0x4d/0x71 mm/page-writeback.c:2082
pagecache_init+0x48/0x4f mm/filemap.c:871
start_kernel+0x6c1/0x754 init/main.c:690
x86_64_start_reservations+0x2a/0x2c arch/x86/kernel/head64.c:377
x86_64_start_kernel+0x87/0x8a arch/x86/kernel/head64.c:358
verify_cpu+0x0/0xfb

-> #1 (cpuhp_state_mutex){+.+.}:
check_prevs_add kernel/locking/lockdep.c:2020 [inline]
validate_chain kernel/locking/lockdep.c:2469 [inline]
__lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
__mutex_lock_common kernel/locking/mutex.c:756 [inline]
__mutex_lock+0x16f/0x1870 kernel/locking/mutex.c:893
mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
__cpuhp_setup_state_cpuslocked+0x5b/0x5f0 kernel/cpu.c:1430
__cpuhp_setup_state+0xb0/0x140 kernel/cpu.c:1484
cpuhp_setup_state_nocalls include/linux/cpuhotplug.h:205 [inline]
kvm_guest_init+0x1f3/0x20f arch/x86/kernel/kvm.c:488
setup_arch+0x1899/0x1ab3 arch/x86/kernel/setup.c:1295
start_kernel+0xa5/0x754 init/main.c:530
x86_64_start_reservations+0x2a/0x2c arch/x86/kernel/head64.c:377
x86_64_start_kernel+0x87/0x8a arch/x86/kernel/head64.c:358
verify_cpu+0x0/0xfb

-> #0 (cpu_hotplug_lock.rw_sem){++++}:
check_prev_add+0x865/0x1520 kernel/locking/lockdep.c:1894
check_prevs_add kernel/locking/lockdep.c:2020 [inline]
validate_chain kernel/locking/lockdep.c:2469 [inline]
__lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:35 [inline]
percpu_down_read include/linux/percpu-rwsem.h:58 [inline]
cpus_read_lock+0x42/0x90 kernel/cpu.c:218
get_online_cpus include/linux/cpu.h:126 [inline]
lru_add_drain_all+0xe/0x20 mm/swap.c:729
shmem_wait_for_pins mm/shmem.c:2672 [inline]
shmem_add_seals+0x3e1/0x1060 mm/shmem.c:2780
shmem_fcntl+0xfe/0x130 mm/shmem.c:2815
do_fcntl+0x7d0/0x1060 fs/fcntl.c:420
SYSC_fcntl fs/fcntl.c:462 [inline]
SyS_fcntl+0xdc/0x120 fs/fcntl.c:447
entry_SYSCALL_64_fastpath+0x1f/0xbe

other info that might help us debug this:

Chain exists of:
cpu_hotplug_lock.rw_sem --> &pipe->mutex/1 --> &sb->s_type->i_mutex_key#9

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sb->s_type->i_mutex_key#9);
                               lock(&pipe->mutex/1);
                               lock(&sb->s_type->i_mutex_key#9);
  lock(cpu_hotplug_lock.rw_sem);

*** DEADLOCK ***

1 lock held by syz-executor5/6914:
#0: (&sb->s_type->i_mutex_key#9){++++}, at: [<ffffffff818fbef7>] inode_lock include/linux/fs.h:712 [inline]
#0: (&sb->s_type->i_mutex_key#9){++++}, at: [<ffffffff818fbef7>] shmem_add_seals+0x197/0x1060 mm/shmem.c:2768

stack backtrace:
CPU: 0 PID: 6914 Comm: syz-executor5 Not tainted 4.13.0-next-20170911+ #19
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
print_circular_bug+0x503/0x710 kernel/locking/lockdep.c:1259
check_prev_add+0x865/0x1520 kernel/locking/lockdep.c:1894
check_prevs_add kernel/locking/lockdep.c:2020 [inline]
validate_chain kernel/locking/lockdep.c:2469 [inline]
__lock_acquire+0x328f/0x4620 kernel/locking/lockdep.c:3498
lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:35 [inline]
percpu_down_read include/linux/percpu-rwsem.h:58 [inline]
cpus_read_lock+0x42/0x90 kernel/cpu.c:218
get_online_cpus include/linux/cpu.h:126 [inline]
lru_add_drain_all+0xe/0x20 mm/swap.c:729
shmem_wait_for_pins mm/shmem.c:2672 [inline]
shmem_add_seals+0x3e1/0x1060 mm/shmem.c:2780
shmem_fcntl+0xfe/0x130 mm/shmem.c:2815
do_fcntl+0x7d0/0x1060 fs/fcntl.c:420
SYSC_fcntl fs/fcntl.c:462 [inline]
SyS_fcntl+0xdc/0x120 fs/fcntl.c:447
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x451e59
RSP: 002b:00007f4d15f12c08 EFLAGS: 00000216 ORIG_RAX: 0000000000000048
RAX: ffffffffffffffda RBX: 00000000007180b0 RCX: 0000000000451e59
RDX: 000000000000000a RSI: 0000000000000409 RDI: 0000000000000017
RBP: 0000000000000082 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000216 R12: 0000000000000000
R13: 0000000000a6f7ef R14: 00007f4d15f139c0 R15: 0000000000000002
SELinux: unrecognized netlink message: protocol=6 nlmsg_type=32768
sclass=netlink_xfrm_socket pig=6942 comm=syz-executor2
SELinux: unrecognized netlink message: protocol=6 nlmsg_type=32768
sclass=netlink_xfrm_socket pig=6958 comm=syz-executor2
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=1817
sclass=netlink_route_socket pig=7056 comm=syz-executor0
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=1817
sclass=netlink_route_socket pig=7071 comm=syz-executor0
device gre0 entered promiscuous mode
audit: type=1326 audit(1505170621.197:24): auid=4294967295 uid=0 gid=0
ses=4294967295 subj=kernel pid=7144 comm="syz-executor7"
exe="/root/syz-executor7" sig=9 arch=c000003e syscall=202 compat=0
ip=0x451e59 code=0x0
audit: type=1326 audit(1505170621.314:25): auid=4294967295 uid=0 gid=0
ses=4294967295 subj=kernel pid=7144 comm="syz-executor7"
exe="/root/syz-executor7" sig=9 arch=c000003e syscall=202 compat=0
ip=0x451e59 code=0x0
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
*** Guest State ***
CR0: actual=0x0000000080000031, shadow=0x00000000e0000031,
gh_mask=fffffffffffffff7
CR4: actual=0x0000000000212061, shadow=0x0000000000212021,
gh_mask=ffffffffffffe871
CR3 = 0x0000000000002000
RSP = 0x0000000000000f80 RIP = 0x000000000000000b
RFLAGS=0x00000082 DR7 = 0x0000000000000400
Sysenter RSP=0000000000000f80 CS:RIP=0050:0000000000002810
CS: sel=0x0030, attr=0x0409b, limit=0x000fffff, base=0x0000000000000000
DS: sel=0x0038, attr=0x04093, limit=0x000fffff, base=0x0000000000000000
SS: sel=0x0038, attr=0x04093, limit=0x000fffff, base=0x0000000000000000
ES: sel=0x0038, attr=0x04093, limit=0x000fffff, base=0x0000000000000000
FS: sel=0x0038, attr=0x04093, limit=0x000fffff, base=0x0000000000000000
GS: sel=0x0038, attr=0x04093, limit=0x000fffff, base=0x0000000000000000
GDTR: limit=0x000007ff, base=0x0000000000001000
LDTR: sel=0x0008, attr=0x04082, limit=0x000007ff, base=0x0000000000001800
IDTR: limit=0x000001ff, base=0x0000000000003800
TR: sel=0x0000, attr=0x0008b, limit=0x0000ffff, base=0x0000000000000000
EFER = 0x0000000000000501 PAT = 0x0007040600070406
DebugCtl = 0x0000000000000000 DebugExceptions = 0x0000000000000000
Interruptibility = 00000000 ActivityState = 00000000
*** Host State ***
RIP = 0xffffffff811b8bff RSP = 0xffff8801c57c74c8
CS=0010 SS=0018 DS=0000 ES=0000 FS=0000 GS=0000 TR=0040
FSBase=00007f6116fe3700 GSBase=ffff8801db200000 TRBase=ffff8801db223100
GDTBase=ffffffffff577000 IDTBase=ffffffffff57b000
CR0=0000000080050033 CR3=00000001c3448000 CR4=00000000001426f0
Sysenter RSP=0000000000000000 CS:RIP=0010:ffffffff84d3ebf0
EFER = 0x0000000000000d01 PAT = 0x0000000000000000
*** Control State ***
PinBased=0000003f CPUBased=b6986dfa SecondaryExec=0000004a
EntryControls=0000d3ff ExitControls=0023efff
ExceptionBitmap=00060042 PFECmask=00000000 PFECmatch=00000000
VMEntry: intr_info=00000000 errcode=00000000 ilen=00000000
VMExit: intr_info=00000000 errcode=00000000 ilen=00000003
reason=80000021 qualification=0000000000000000
IDTVectoring: info=00000000 errcode=00000000
TSC Offset = 0xffffffd52f4ba64f
EPT pointer = 0x00000001d9a8c01e
device syz4 entered promiscuous mode
device syz4 left promiscuous mode
device syz4 entered promiscuous mode
QAT: Invalid ioctl
nla_parse: 3 callbacks suppressed
netlink: 1 bytes leftover after parsing attributes in process
`syz-executor4'.
netlink: 1 bytes leftover after parsing attributes in process
`syz-executor4'.
QAT: Invalid ioctl
audit: type=1326 audit(1505170623.164:26): auid=4294967295 uid=0 gid=0
ses=4294967295 subj=kernel pid=7523 comm="syz-executor4"
exe="/root/syz-executor4" sig=9 arch=c000003e syscall=202 compat=0
ip=0x451e59 code=0x0
QAT: Invalid ioctl
QAT: Invalid ioctl
audit: type=1326 audit(1505170623.271:27): auid=4294967295 uid=0 gid=0
ses=4294967295 subj=kernel pid=7523 comm="syz-executor4"
exe="/root/syz-executor4" sig=9 arch=c000003e syscall=202 compat=0
ip=0x451e59 code=0x0
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
sctp: [Deprecated]: syz-executor7 (pid 7652) Use of int in max_burst socket
option.
Use struct sctp_assoc_value instead
QAT: Invalid ioctl
sctp: [Deprecated]: syz-executor7 (pid 7661) Use of int in max_burst socket
option.
Use struct sctp_assoc_value instead
FAULT_INJECTION: forcing a failure.
name failslab, interval 1, probability 0, space 0, times 1
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=4355
sclass=netlink_route_socket pig=7703 comm=syz-executor4
QAT: Invalid ioctl
QAT: Invalid ioctl
SELinux: unrecognized netlink message: protocol=0 nlmsg_type=4355
sclass=netlink_route_socket pig=7707 comm=syz-executor4
QAT: Invalid ioctl
QAT: Invalid ioctl
netlink: 1 bytes leftover after parsing attributes in process
`syz-executor7'.
netlink: 1 bytes leftover after parsing attributes in process
`syz-executor7'.
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
CPU: 1 PID: 7690 Comm: syz-executor2 Not tainted 4.13.0-next-20170911+ #19
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
fail_dump lib/fault-inject.c:51 [inline]
should_fail+0x8c0/0xa40 lib/fault-inject.c:149
should_failslab+0xec/0x120 mm/failslab.c:31
slab_pre_alloc_hook mm/slab.h:422 [inline]
slab_alloc mm/slab.c:3383 [inline]
kmem_cache_alloc+0x47/0x760 mm/slab.c:3559
mpol_new+0x144/0x2e0 mm/mempolicy.c:275
do_mbind+0x1d0/0xce0 mm/mempolicy.c:1189
SYSC_mbind mm/mempolicy.c:1341 [inline]
SyS_mbind+0x13b/0x150 mm/mempolicy.c:1323
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x451e59
RSP: 002b:00007f659d8e2c08 EFLAGS: 00000216 ORIG_RAX: 00000000000000ed
RAX: ffffffffffffffda RBX: 0000000000718000 RCX: 0000000000451e59
RDX: 0000000000004003 RSI: 0000000000003000 RDI: 0000000020007000
RBP: 00007f659d8e2a10 R08: 0000000000000020 R09: 0000000000000000
R10: 000000002000cff8 R11: 0000000000000216 R12: 00000000004b69f7
R13: 00007f659d8e2b48 R14: 00000000004b6a07 R15: 0000000000000000
QAT: Invalid ioctl
sctp: [Deprecated]: syz-executor7 (pid 7835) Use of struct sctp_assoc_value
in delayed_ack socket option.
Use struct sctp_sack_info instead
QAT: Invalid ioctl
QAT: Invalid ioctl
QAT: Invalid ioctl
sctp: [Deprecated]: syz-executor7 (pid 7852) Use of struct sctp_assoc_value
in delayed_ack socket option.
Use struct sctp_sack_info instead
QAT: Invalid ioctl
FAULT_INJECTION: forcing a failure.
name fail_page_alloc, interval 1, probability 0, space 0, times 1
CPU: 0 PID: 7966 Comm: syz-executor5 Not tainted 4.13.0-next-20170911+ #19
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
fail_dump lib/fault-inject.c:51 [inline]
should_fail+0x8c0/0xa40 lib/fault-inject.c:149
should_fail_alloc_page mm/page_alloc.c:2915 [inline]
prepare_alloc_pages mm/page_alloc.c:4151 [inline]
__alloc_pages_nodemask+0x338/0xd80 mm/page_alloc.c:4187
alloc_pages_current+0xb6/0x1e0 mm/mempolicy.c:2035
alloc_pages include/linux/gfp.h:505 [inline]
skb_page_frag_refill+0x358/0x5f0 net/core/sock.c:2196
tun_build_skb.isra.42+0x2a2/0x1690 drivers/net/tun.c:1289
tun_get_user+0x1dad/0x2150 drivers/net/tun.c:1455
tun_chr_write_iter+0xde/0x190 drivers/net/tun.c:1579
call_write_iter include/linux/fs.h:1770 [inline]
new_sync_write fs/read_write.c:468 [inline]
__vfs_write+0x68a/0x970 fs/read_write.c:481
vfs_write+0x18f/0x510 fs/read_write.c:543
SYSC_write fs/read_write.c:588 [inline]
SyS_write+0xef/0x220 fs/read_write.c:580
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x40c2c1
RSP: 002b:00007f4d15f33c10 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000718000 RCX: 000000000040c2c1
RDX: 000000000000004e RSI: 000000002000cfbc RDI: 0000000000000015
RBP: 00007f4d15f33a10 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000000f4240 R11: 0000000000000293 R12: 00000000004b69f7
R13: 00007f4d15f33b48 R14: 00000000004b6a07 R15: 0000000000000000
FAULT_INJECTION: forcing a failure.
name failslab, interval 1, probability 0, space 0, times 0
CPU: 1 PID: 7979 Comm: syz-executor5 Not tainted 4.13.0-next-20170911+ #19
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
fail_dump lib/fault-inject.c:51 [inline]
should_fail+0x8c0/0xa40 lib/fault-inject.c:149
should_failslab+0xec/0x120 mm/failslab.c:31
slab_pre_alloc_hook mm/slab.h:422 [inline]
slab_alloc mm/slab.c:3383 [inline]
kmem_cache_alloc+0x47/0x760 mm/slab.c:3559
__build_skb+0x9d/0x450 net/core/skbuff.c:284
build_skb+0x6f/0x260 net/core/skbuff.c:316
tun_build_skb.isra.42+0x92f/0x1690 drivers/net/tun.c:1346
tun_get_user+0x1dad/0x2150 drivers/net/tun.c:1455
tun_chr_write_iter+0xde/0x190 drivers/net/tun.c:1579
call_write_iter include/linux/fs.h:1770 [inline]
new_sync_write fs/read_write.c:468 [inline]
__vfs_write+0x68a/0x970 fs/read_write.c:481
vfs_write+0x18f/0x510 fs/read_write.c:543
SYSC_write fs/read_write.c:588 [inline]
SyS_write+0xef/0x220 fs/read_write.c:580
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x40c2c1
RSP: 002b:00007f4d15f33c10 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000718000 RCX: 000000000040c2c1
RDX: 000000000000004e RSI: 000000002000cfbc RDI: 0000000000000015
RBP: 00007f4d15f33a10 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000000f4240 R11: 0000000000000293 R12: 00000000004b69f7
R13: 00007f4d15f33b48 R14: 00000000004b6a07 R15: 0000000000000000
FAULT_INJECTION: forcing a failure.
name failslab, interval 1, probability 0, space 0, times 0
CPU: 1 PID: 7984 Comm: syz-executor5 Not tainted 4.13.0-next-20170911+ #19
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
fail_dump lib/fault-inject.c:51 [inline]
should_fail+0x8c0/0xa40 lib/fault-inject.c:149
should_failslab+0xec/0x120 mm/failslab.c:31
slab_pre_alloc_hook mm/slab.h:422 [inline]
slab_alloc mm/slab.c:3383 [inline]
kmem_cache_alloc+0x47/0x760 mm/slab.c:3559
dst_alloc+0x11f/0x1a0 net/core/dst.c:107
__ip6_dst_alloc+0x34/0x60 net/ipv6/route.c:355
ip6_rt_pcpu_alloc net/ipv6/route.c:1031 [inline]
rt6_make_pcpu_route net/ipv6/route.c:1061 [inline]
ip6_pol_route+0x1500/0x2c90 net/ipv6/route.c:1186
ip6_pol_route_input+0x5a/0x70 net/ipv6/route.c:1200
fib6_rule_lookup+0x9e/0x2a0 net/ipv6/ip6_fib.c:299
ip6_route_input_lookup+0x8a/0xa0 net/ipv6/route.c:1210
ip6_route_input+0x656/0xa20 net/ipv6/route.c:1283
ip6_rcv_finish+0x14b/0x7a0 net/ipv6/ip6_input.c:69
NF_HOOK include/linux/netfilter.h:249 [inline]
ipv6_rcv+0xff0/0x2190 net/ipv6/ip6_input.c:208
__netif_receive_skb_core+0x19af/0x33d0 net/core/dev.c:4423
__netif_receive_skb+0x2c/0x1b0 net/core/dev.c:4461
netif_receive_skb_internal+0x10b/0x670 net/core/dev.c:4534
netif_receive_skb+0xae/0x390 net/core/dev.c:4558
tun_rx_batched.isra.43+0x5ed/0x860 drivers/net/tun.c:1218
tun_get_user+0x11dd/0x2150 drivers/net/tun.c:1553
tun_chr_write_iter+0xde/0x190 drivers/net/tun.c:1579
call_write_iter include/linux/fs.h:1770 [inline]
new_sync_write fs/read_write.c:468 [inline]
__vfs_write+0x68a/0x970 fs/read_write.c:481
vfs_write+0x18f/0x510 fs/read_write.c:543
SYSC_write fs/read_write.c:588 [inline]
SyS_write+0xef/0x220 fs/read_write.c:580
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x40c2c1
RSP: 002b:00007f4d15f33c10 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000718000 RCX: 000000000040c2c1
RDX: 000000000000004e RSI: 000000002000cfbc RDI: 0000000000000015
RBP: 00007f4d15f33a10 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000000f4240 R11: 0000000000000293 R12: 00000000004b69f7
R13: 00007f4d15f33b48 R14: 00000000004b6a07 R15: 0000000000000000
FAULT_INJECTION: forcing a failure.
name failslab, interval 1, probability 0, space 0, times 0
CPU: 1 PID: 7996 Comm: syz-executor5 Not tainted 4.13.0-next-20170911+ #19
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
fail_dump lib/fault-inject.c:51 [inline]
should_fail+0x8c0/0xa40 lib/fault-inject.c:149
should_failslab+0xec/0x120 mm/failslab.c:31
slab_pre_alloc_hook mm/slab.h:422 [inline]
slab_alloc_node mm/slab.c:3304 [inline]
kmem_cache_alloc_node+0x56/0x760 mm/slab.c:3649
__alloc_skb+0xf1/0x740 net/core/skbuff.c:194
alloc_skb include/linux/skbuff.h:976 [inline]
alloc_skb_with_frags+0x10d/0x710 net/core/skbuff.c:5137
sock_alloc_send_pskb+0x7b4/0x9d0 net/core/sock.c:2073
sock_alloc_send_skb+0x32/0x40 net/core/sock.c:2090
__ip6_append_data.isra.42+0x1bd9/0x32b0 net/ipv6/ip6_output.c:1390
ip6_append_data+0x189/0x290 net/ipv6/ip6_output.c:1552
icmpv6_echo_reply+0x11ac/0x1d60 net/ipv6/icmp.c:740
icmpv6_rcv+0x1160/0x18d0 net/ipv6/icmp.c:858
ip6_input_finish+0x36f/0x16d0 net/ipv6/ip6_input.c:284
NF_HOOK include/linux/netfilter.h:249 [inline]
ip6_input+0xe9/0x560 net/ipv6/ip6_input.c:327
dst_input include/net/dst.h:478 [inline]
ip6_rcv_finish+0x1a9/0x7a0 net/ipv6/ip6_input.c:71
NF_HOOK include/linux/netfilter.h:249 [inline]
ipv6_rcv+0xff0/0x2190 net/ipv6/ip6_input.c:208
__netif_receive_skb_core+0x19af/0x33d0 net/core/dev.c:4423
__netif_receive_skb+0x2c/0x1b0 net/core/dev.c:4461
netif_receive_skb_internal+0x10b/0x670 net/core/dev.c:4534
netif_receive_skb+0xae/0x390 net/core/dev.c:4558
tun_rx_batched.isra.43+0x5ed/0x860 drivers/net/tun.c:1218
tun_get_user+0x11dd/0x2150 drivers/net/tun.c:1553
tun_chr_write_iter+0xde/0x190 drivers/net/tun.c:1579
call_write_iter include/linux/fs.h:1770 [inline]
new_sync_write fs/read_write.c:468 [inline]
__vfs_write+0x68a/0x970 fs/read_write.c:481
vfs_write+0x18f/0x510 fs/read_write.c:543
SYSC_write fs/read_write.c:588 [inline]
SyS_write+0xef/0x220 fs/read_write.c:580
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x40c2c1
RSP: 002b:00007f4d15f33c10 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000718000 RCX: 000000000040c2c1
RDX: 000000000000004e RSI: 000000002000cfbc RDI: 0000000000000015
RBP: 00007f4d15f33a10 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000000f4240 R11: 0000000000000293 R12: 00000000004b69f7
R13: 00007f4d15f33b48 R14: 00000000004b6a07 R15: 0000000000000000
capability: warning: `syz-executor2' uses deprecated v2 capabilities in a
way that may be insecure
QAT: Invalid ioctl
FAULT_INJECTION: forcing a failure.
name failslab, interval 1, probability 0, space 0, times 0
CPU: 1 PID: 8034 Comm: syz-executor5 Not tainted 4.13.0-next-20170911+ #19
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:16 [inline]
dump_stack+0x194/0x257 lib/dump_stack.c:52
fail_dump lib/fault-inject.c:51 [inline]
should_fail+0x8c0/0xa40 lib/fault-inject.c:149
should_failslab+0xec/0x120 mm/failslab.c:31
slab_pre_alloc_hook mm/slab.h:422 [inline]
slab_alloc_node mm/slab.c:3304 [inline]
kmem_cache_alloc_node_trace+0x5a/0x760 mm/slab.c:3668
__do_kmalloc_node mm/slab.c:3688 [inline]
__kmalloc_node_track_caller+0x33/0x70 mm/slab.c:3703
__kmalloc_reserve.isra.40+0x41/0xd0 net/core/skbuff.c:138
__alloc_skb+0x13b/0x740 net/core/skbuff.c:206
alloc_skb include/linux/skbuff.h:976 [inline]
alloc_skb_with_frags+0x10d/0x710 net/core/skbuff.c:5137
sock_alloc_send_pskb+0x7b4/0x9d0 net/core/sock.c:2073
sock_alloc_send_skb+0x32/0x40 net/core/sock.c:2090
__ip6_append_data.isra.42+0x1bd9/0x32b0 net/ipv6/ip6_output.c:1390
ip6_append_data+0x189/0x290 net/ipv6/ip6_output.c:1552
icmpv6_echo_reply+0x11ac/0x1d60 net/ipv6/icmp.c:740
icmpv6_rcv+0x1160/0x18d0 net/ipv6/icmp.c:858
ip6_input_finish+0x36f/0x16d0 net/ipv6/ip6_input.c:284
NF_HOOK include/linux/netfilter.h:249 [inline]
ip6_input+0xe9/0x560 net/ipv6/ip6_input.c:327
dst_input include/net/dst.h:478 [inline]
ip6_rcv_finish+0x1a9/0x7a0 net/ipv6/ip6_input.c:71
NF_HOOK include/linux/netfilter.h:249 [inline]
ipv6_rcv+0xff0/0x2190 net/ipv6/ip6_input.c:208
__netif_receive_skb_core+0x19af/0x33d0 net/core/dev.c:4423
__netif_receive_skb+0x2c/0x1b0 net/core/dev.c:4461
netif_receive_skb_internal+0x10b/0x670 net/core/dev.c:4534
netif_receive_skb+0xae/0x390 net/core/dev.c:4558
tun_rx_batched.isra.43+0x5ed/0x860 drivers/net/tun.c:1218
tun_get_user+0x11dd/0x2150 drivers/net/tun.c:1553
tun_chr_write_iter+0xde/0x190 drivers/net/tun.c:1579
call_write_iter include/linux/fs.h:1770 [inline]
new_sync_write fs/read_write.c:468 [inline]
__vfs_write+0x68a/0x970 fs/read_write.c:481
vfs_write+0x18f/0x510 fs/read_write.c:543
SYSC_write fs/read_write.c:588 [inline]
SyS_write+0xef/0x220 fs/read_write.c:580
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x40c2c1
RSP: 002b:00007f4d15f33c10 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000718000 RCX: 000000000040c2c1
RDX: 000000000000004e RSI: 000000002000cfbc RDI: 0000000000000015
RBP: 00007f4d15f33a10 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000000f4240 R11: 0000000000000293 R12: 00000000004b69f7
R13: 00007f4d15f33b48 R14: 00000000004b6a07 R15: 0000000000000000
QAT: Invalid ioctl


---
This bug is generated by a dumb bot. It may contain errors.
See https://goo.gl/tpsmEJ for details.
Direct all questions to syzk...@googlegroups.com.

syzbot will keep track of this bug report.
Once a fix for this bug is committed, please reply to this email with:
#syz fix: exact-commit-title
To mark this as a duplicate of another syzbot report, please reply with:
#syz dup: exact-subject-of-another-report
If it's a one-off invalid bug report, please reply with:
#syz invalid
Note: if the crash happens again, it will cause creation of a new bug
report.
Attachments: config.txt, raw.log

Michal Hocko

Oct 27, 2017, 5:34:23 AM
to syzbot, ak...@linux-foundation.org, dan.j.w...@intel.com, han...@cmpxchg.org, ja...@suse.cz, jgl...@redhat.com, linux-...@vger.kernel.org, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, tg...@linutronix.de, vba...@suse.cz, ying....@intel.com
On Fri 27-10-17 02:22:40, syzbot wrote:
> Hello,
>
> syzkaller hit the following crash on
> a31cc455c512f3f1dd5f79cac8e29a7c8a617af8
> git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/master
> compiler: gcc (GCC) 7.1.1 20170620
> .config is attached
> Raw console output is attached.

I do not see such a commit. My linux-next top is next-20171018

[...]
> Chain exists of:
> cpu_hotplug_lock.rw_sem --> &pipe->mutex/1 --> &sb->s_type->i_mutex_key#9
>
> Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&sb->s_type->i_mutex_key#9);
>                                lock(&pipe->mutex/1);
>                                lock(&sb->s_type->i_mutex_key#9);
>   lock(cpu_hotplug_lock.rw_sem);

I am quite confused about this report. Where exactly is the deadlock?
I do not see where we would take the pipe mutex from inside the hotplug
lock. Is it possible this is just a false positive due to the crossrelease
feature?
--
Michal Hocko
SUSE Labs

Dmitry Vyukov

Oct 27, 2017, 5:45:19 AM
to Michal Hocko, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com
As far as I understand, this CPU0/CPU1 scheme works only for simple
cases with 2 mutexes. This one seems to have a larger cycle, as denoted
by the "the existing dependency chain (in reverse order) is:" section.

Dmitry Vyukov

Oct 27, 2017, 5:48:02 AM
to Michal Hocko, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com
On Fri, Oct 27, 2017 at 11:44 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> On Fri, Oct 27, 2017 at 11:34 AM, Michal Hocko <mho...@kernel.org> wrote:
>> On Fri 27-10-17 02:22:40, syzbot wrote:
>>> Hello,
>>>
>>> syzkaller hit the following crash on
>>> a31cc455c512f3f1dd5f79cac8e29a7c8a617af8
>>> git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/master
>>> compiler: gcc (GCC) 7.1.1 20170620
>>> .config is attached
>>> Raw console output is attached.
>>
>> I do not see such a commit. My linux-next top is next-20171018

As far as I understand, linux-next constantly recreates its tree, so
all commit hashes are destroyed.
Somebody mentioned some time ago a linux-next-something tree which
keeps all of the history (but I don't remember it off the top of my
head).

Vlastimil Babka

Oct 27, 2017, 7:28:05 AM
to Michal Hocko, syzbot, ak...@linux-foundation.org, dan.j.w...@intel.com, han...@cmpxchg.org, ja...@suse.cz, jgl...@redhat.com, linux-...@vger.kernel.org, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, tg...@linutronix.de, ying....@intel.com
On 10/27/2017 11:34 AM, Michal Hocko wrote:
> On Fri 27-10-17 02:22:40, syzbot wrote:
>> Hello,
>>
>> syzkaller hit the following crash on
>> a31cc455c512f3f1dd5f79cac8e29a7c8a617af8
>> git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/master
>> compiler: gcc (GCC) 7.1.1 20170620
>> .config is attached
>> Raw console output is attached.
>
> I do not see such a commit. My linux-next top is next-20171018

It's the next-20170911 tag. Try git fetch --tags, but I'm not sure how
many are archived...

Michal Hocko

Oct 27, 2017, 9:42:36 AM
to Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com
My point was that lru_add_drain_all doesn't take any external locks
other than lru_lock and that one is not anywhere in the chain AFAICS.

Michal Hocko

Oct 30, 2017, 4:22:15 AM
to Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, Byungchul Park
[Cc Byungchul. The original full report is
http://lkml.kernel.org/r/089e0825eec895...@google.com]

Could you have a look please? This smells like a false positive to me.

Byungchul Park

Oct 30, 2017, 6:09:30 AM
to Michal Hocko, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com, pet...@infradead.org
On Mon, Oct 30, 2017 at 09:22:03AM +0100, Michal Hocko wrote:
> [Cc Byungchul. The original full report is
> http://lkml.kernel.org/r/089e0825eec895...@google.com]
>
> Could you have a look please? This smells like a false positive to me.

+cc pet...@infradead.org

Hello,

IMHO, the false positive was caused by the lockdep_map of 'cpuhp_state'
which couldn't distinguish between cpu-up and cpu-down.

And it was solved with the following commit by Peter and Thomas:

5f4b55e10645b7371322c800a5ec745cab487a6c
smp/hotplug: Differentiate the AP-work lockdep class between up and down

Therefore, we can avoid the false positive on kernels later than that commit.

Peter and Thomas, could you confirm it?

Thanks,
Byungchul

Byungchul Park

Oct 30, 2017, 6:26:28 AM
to Michal Hocko, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Mon, Oct 30, 2017 at 09:22:03AM +0100, Michal Hocko wrote:
I think lru_add_drain_all() takes cpu_hotplug_lock.rw_sem implicitly in
get_online_cpus(), which appears in the chain.
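
For reference, the shape of lru_add_drain_all() that the splat points at
(a sketch reconstructed from the mm/swap.c:729 frames, not the exact
source):

  void lru_add_drain_all(void)
  {
          get_online_cpus();      /* cpus_read_lock(), i.e. lock #0 */
          /* queue lru_add_drain_per_cpu work on each cpu and flush it */
          put_online_cpus();
  }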

Thanks,
Byungchul

Michal Hocko

Oct 30, 2017, 7:48:49 AM
to Byungchul Park, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
Yes, but it doesn't take any _other_ externally visible locks except
for lru_lock, and that itself doesn't provide any further dependency
AFAICS. So what exactly is the deadlock scenario?

Peter Zijlstra

Oct 30, 2017, 11:10:23 AM
to Byungchul Park, Michal Hocko, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Mon, Oct 30, 2017 at 07:09:21PM +0900, Byungchul Park wrote:
> On Mon, Oct 30, 2017 at 09:22:03AM +0100, Michal Hocko wrote:
> > [Cc Byungchul. The original full report is
> > http://lkml.kernel.org/r/089e0825eec895...@google.com]
> >
> > Could you have a look please? This smells like a false positive to me.
>
> +cc pet...@infradead.org
>
> Hello,
>
> IMHO, the false positive was caused by the lockdep_map of 'cpuhp_state'
> which couldn't distinguish between cpu-up and cpu-down.
>
> And it was solved with the following commit by Peter and Thomas:
>
> 5f4b55e10645b7371322c800a5ec745cab487a6c
> smp/hotplug: Differentiate the AP-work lockdep class between up and down
>
> Therefore, we can avoid the false positive on kernels later than that commit.
>
> Peter and Thomas, could you confirm it?

I can indeed confirm it's running old code; cpuhp_state is no more.

However, that splat translates like:

__cpuhp_setup_state()
#0 cpus_read_lock()
__cpuhp_setup_state_cpuslocked()
#1 mutex_lock(&cpuhp_state_mutex)



__cpuhp_state_add_instance()
#2 mutex_lock(&cpuhp_state_mutex)
cpuhp_issue_call()
cpuhp_invoke_ap_callback()
#3 wait_for_completion()

msr_device_create()
...
#4 filename_create()
#3 complete()



do_splice()
#4 file_start_write()
do_splice_from()
iter_file_splice_write()
#5 pipe_lock()
vfs_iter_write()
...
#6 inode_lock()



sys_fcntl()
do_fcntl()
shmem_fcntl()
#5 inode_lock()
shmem_wait_for_pins()
if (!scan)
lru_add_drain_all()
#0 cpus_read_lock()



Which is an actual real deadlock, there is no mixing of up and down.

Peter Zijlstra

Oct 30, 2017, 11:22:19 AM
to Byungchul Park, Michal Hocko, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Mon, Oct 30, 2017 at 04:10:09PM +0100, Peter Zijlstra wrote:
> I can indeed confirm it's running old code; cpuhp_state is no more.
>
> However, that splat translates like:
>
> __cpuhp_setup_state()
> #0 cpus_read_lock()
> __cpuhp_setup_state_cpuslocked()
> #1 mutex_lock(&cpuhp_state_mutex)
>
>
>
> __cpuhp_state_add_instance()
> #2 mutex_lock(&cpuhp_state_mutex)
> cpuhp_issue_call()
> cpuhp_invoke_ap_callback()
> #3 wait_for_completion()
>
> msr_device_create()
> ...
> #4 filename_create()
> #3 complete()
>


So all this you can get in a single callchain when you do something
shiny like:

modprobe msr


> do_splice()
> #4 file_start_write()
> do_splice_from()
> iter_file_splice_write()
> #5 pipe_lock()
> vfs_iter_write()
> ...
> #6 inode_lock()
>
>

This is a splice into a devtmpfs file


> sys_fcntl()
> do_fcntl()
> shmem_fcntl()
> #5 inode_lock()

#6 (obviously)

> shmem_wait_for_pins()
> if (!scan)
> lru_add_drain_all()
> #0 cpus_read_lock()
>

This is the right fcntl()


So 3 different callchains, and *splat*..
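
The register dump in the report (RSI=0x409 is F_ADD_SEALS, RDX=0xa is
F_SEAL_SHRINK|F_SEAL_WRITE) suggests that last chain is reachable from
userspace with something as small as this sketch (illustrative only, not
the syzkaller reproducer; assumes a libc that exposes memfd_create()):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          int fd = memfd_create("repro", MFD_ALLOW_SEALING);

          if (fd < 0)
                  return 1;
          /* F_SEAL_WRITE is what makes shmem_add_seals() call
           * shmem_wait_for_pins() -> lru_add_drain_all() */
          fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_WRITE);
          close(fd);
          return 0;
  }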


Michal Hocko

Oct 31, 2017, 9:13:37 AM
to Peter Zijlstra, Byungchul Park, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Mon 30-10-17 16:10:09, Peter Zijlstra wrote:
> On Mon, Oct 30, 2017 at 07:09:21PM +0900, Byungchul Park wrote:
> > On Mon, Oct 30, 2017 at 09:22:03AM +0100, Michal Hocko wrote:
> > > [Cc Byungchul. The original full report is
> > > http://lkml.kernel.org/r/089e0825eec895...@google.com]
> > >
> > > Could you have a look please? This smells like a false positive to me.
> >
> > +cc pet...@infradead.org
> >
> > Hello,
> >
> > IMHO, the false positive was caused by the lockdep_map of 'cpuhp_state'
> > which couldn't distinguish between cpu-up and cpu-down.
> >
> > And it was solved with the following commit by Peter and Thomas:
> >
> > 5f4b55e10645b7371322c800a5ec745cab487a6c
> > smp/hotplug: Differentiate the AP-work lockdep class between up and down
> >
> > Therefore, we can avoid the false positive on kernels later than that commit.
> >
> > Peter and Thomas, could you confirm it?
>
> I can indeed confirm it's running old code; cpuhp_state is no more.

Does this mean the below chain is no longer possible with the current
linux-next (tip)?

> However, that splat translates like:
>
> __cpuhp_setup_state()
> #0 cpus_read_lock()
> __cpuhp_setup_state_cpuslocked()
> #1 mutex_lock(&cpuhp_state_mutex)
>
>
>
> __cpuhp_state_add_instance()
> #2 mutex_lock(&cpuhp_state_mutex)

this should be #1 right?

> cpuhp_issue_call()
> cpuhp_invoke_ap_callback()
> #3 wait_for_completion()
>
> msr_device_create()
> ...
> #4 filename_create()
> #3 complete()
>
>
>
> do_splice()
> #4 file_start_write()
> do_splice_from()
> iter_file_splice_write()
> #5 pipe_lock()
> vfs_iter_write()
> ...
> #6 inode_lock()
>
>
>
> sys_fcntl()
> do_fcntl()
> shmem_fcntl()
> #5 inode_lock()
> shmem_wait_for_pins()
> if (!scan)
> lru_add_drain_all()
> #0 cpus_read_lock()
>
>
>
> Which is an actual real deadlock, there is no mixing of up and down.

thanks a lot, this made it clearer to me. It took a while to
actually see the 0 -> 1 -> 3 -> 4 -> 5 -> 0 cycle. I had only focused
on lru_add_drain_all while it was holding the cpus lock.

Peter Zijlstra

Oct 31, 2017, 9:51:15 AM
to Michal Hocko, Byungchul Park, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Tue, Oct 31, 2017 at 02:13:33PM +0100, Michal Hocko wrote:
> On Mon 30-10-17 16:10:09, Peter Zijlstra wrote:

> > However, that splat translates like:
> >
> > __cpuhp_setup_state()
> > #0 cpus_read_lock()
> > __cpuhp_setup_state_cpuslocked()
> > #1 mutex_lock(&cpuhp_state_mutex)
> >
> >
> >
> > __cpuhp_state_add_instance()
> > #2 mutex_lock(&cpuhp_state_mutex)
>
> this should be #1 right?

Yes

> > cpuhp_issue_call()
> > cpuhp_invoke_ap_callback()
> > #3 wait_for_completion()
> >
> > msr_device_create()
> > ...
> > #4 filename_create()
> > #3 complete()
> >
> >
> >
> > do_splice()
> > #4 file_start_write()
> > do_splice_from()
> > iter_file_splice_write()
> > #5 pipe_lock()
> > vfs_iter_write()
> > ...
> > #6 inode_lock()
> >
> >
> >
> > sys_fcntl()
> > do_fcntl()
> > shmem_fcntl()
> > #5 inode_lock()

And that's #6

> > shmem_wait_for_pins()
> > if (!scan)
> > lru_add_drain_all()
> > #0 cpus_read_lock()
> >
> >
> >
> > Which is an actual real deadlock, there is no mixing of up and down.
>
> thanks a lot, this made it clearer to me. It took a while to
> actually see the 0 -> 1 -> 3 -> 4 -> 5 -> 0 cycle. I had only focused
> on lru_add_drain_all while it was holding the cpus lock.

Yeah, these things are a pain to read, which is why I always construct
something like the above first.

Dmitry Vyukov

Oct 31, 2017, 9:55:53 AM
to Peter Zijlstra, Michal Hocko, Byungchul Park, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
I noticed that for a simple 2-lock deadlock lockdep prints only 2
stacks. FWIW in user-space TSAN we print 4 stacks for such deadlocks,
namely where A was locked, where B was locked under A, where B was
locked, and where A was locked under B. It makes it easier to figure out
what happens. However, for this report it would be 8 stacks this
way. So it's probably hard either way.

Peter Zijlstra

Oct 31, 2017, 10:52:59 AM
to Dmitry Vyukov, Michal Hocko, Byungchul Park, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Tue, Oct 31, 2017 at 04:55:32PM +0300, Dmitry Vyukov wrote:

> I noticed that for a simple 2 lock deadlock lockdep prints only 2
> stacks.

3, one of which is useless :-)

For the AB-BA we print where we acquire A (#0), where we acquire B while
holding A (#1), and then where we acquire A while holding B: the unwind at
the point of splat.

The #0 trace is useless.

> FWIW in user-space TSAN we print 4 stacks for such deadlocks,
> namely where A was locked, where B was locked under A, where B was
> locked, where A was locked under B. It makes it easier to figure out
> what happens. However, for this report it seems to be 8 stacks this
> way. So it's probably hard either way.

Right, it's a question of performance and overhead I suppose. Lockdep
typically only saves a stack trace when it finds a new link.

So only when we find the AB relation do we save the stacktrace; which
reflects the location where we acquire B. But by that time we've lost
where it was we acquired A.

If we want to save those stacks; we have to save a stacktrace on _every_
lock acquire, simply because we never know ahead of time if there will
be a new link. Doing this is _expensive_.

Furthermore, the space into which we store stacktraces is limited;
since memory allocators use locks we can't very well use dynamic memory
for lockdep -- that would give recursive and robustness issues.

Also, it's usually not too hard to find the site where we took A if we
know the site of AB.
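
In other words, the trade-off looks roughly like this (a sketch with
made-up helper names; edge_exists(), save_trace_here() and record_edge()
are hypothetical, this is not the actual lockdep code):

  /* called on every acquire of 'b', for each lock 'a' already held */
  static void check_edge(struct lock_class *a, struct lock_class *b)
  {
          if (edge_exists(a, b))
                  return;                 /* hot path: no unwind at all */

          /* only a brand-new a -> b link pays for an unwind, so the
           * saved trace shows where b was taken; where a was taken
           * is already gone */
          save_trace_here();
          record_edge(a, b);
  }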

Michal Hocko

Oct 31, 2017, 10:58:06 AM
to Peter Zijlstra, Dmitry Vyukov, Byungchul Park, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Tue 31-10-17 15:52:47, Peter Zijlstra wrote:
[...]
> If we want to save those stacks; we have to save a stacktrace on _every_
> lock acquire, simply because we never know ahead of time if there will
> be a new link. Doing this is _expensive_.
>
> Furthermore, the space into which we store stacktraces is limited;
> since memory allocators use locks we can't very well use dynamic memory
> for lockdep -- that would give recursive and robustness issues.

Wouldn't stackdepot help here? Sure, the first stack unwind will be
costly but then you amortize that over time. It is quite likely that
locks are held from the same addresses.

Peter Zijlstra

Oct 31, 2017, 11:10:39 AM
to Michal Hocko, Dmitry Vyukov, Byungchul Park, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
I'm not familiar with that; but looking at it, no. It uses alloc_pages(),
which takes locks internally, and it has a lock of its own.

Also, it seems to index the stack based on the entire stacktrace; which
means you actually have to have the stacktrace first. And doing
stacktraces on every single acquire is horrendously expensive.

The idea just saves on storage, it doesn't help with having to do a
gazillion unwinds in the first place.

Peter Zijlstra

Oct 31, 2017, 11:25:53 AM
to Michal Hocko, Byungchul Park, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Tue, Oct 31, 2017 at 02:13:33PM +0100, Michal Hocko wrote:

> > I can indeed confirm it's running old code; cpuhp_state is no more.
>
> Does this mean the below chain is no longer possible with the current
> linux-next (tip)?

I see I failed to answer this; no, it will still happen, but now reads like:

s/cpuhp_state/&_up/

Where we used to have a single lock protecting the hotplug stuff, we now
have 2, one for bringing stuff up and one for tearing it down.

This got rid of lock cycles that included cpu-up and cpu-down parts;
those are false positives because we cannot do cpu-up and cpu-down
concurrently.

But this report only includes a single (cpu-up) part and therefore is
not affected by that change other than a lock name changing.

Michal Hocko

Oct 31, 2017, 11:45:51 AM
to Peter Zijlstra, Byungchul Park, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com, David Herrmann
[CC David Herrmann for shmem_wait_for_pins. The thread starts
http://lkml.kernel.org/r/089e0825eec895...@google.com
with the callchains explained http://lkml.kernel.org/r/20171030151009....@hirez.programming.kicks-ass.net
for shmem_wait_for_pins involvement see below]
Hmm, OK. I have quickly glanced through shmem_wait_for_pins and I fail
to see why it needs lru_add_drain_all at all. All we should care about
is the radix tree, and the lru cache only cares about the proper
placement on the LRU list, which is not checked here. I might be missing
something subtle though. David?

We've had some MM vs. hotplug issues. See e.g. a459eeb7b852 ("mm,
page_alloc: do not depend on cpu hotplug locks inside the allocator"),
so I suspect we might want/need to do similar for lru_add_drain_all.
It feels like I've already worked on that, but for the life of me I
cannot remember.

Anyway, this lock dependency is subtle as hell and I am worried that we
might have way too many of those. We have so many callers of
get_online_cpus that dependencies like this are just waiting to blow up.

Peter Zijlstra

Oct 31, 2017, 12:31:08 PM
to Michal Hocko, Byungchul Park, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com, David Herrmann
On Tue, Oct 31, 2017 at 04:45:46PM +0100, Michal Hocko wrote:
> Anyway, this lock dependency is subtle as hell and I am worried that we
> might have way too many of those. We have so many callers of
> get_online_cpus that dependencies like this are just waiting to blow up.

Yes, the filesystem-locks-inside-hotplug thing is totally annoying. I've
got a few other splats that contain a similar theme and I've no real
clue what to do about them.

See for instance this one:

https://lkml.kernel.org/r/2017102715...@worktop.lehotels.local

splice from devtmpfs is another common theme and links it to the
pipe->mutex, which then makes any other splice op invert against
that hotplug crap :/


I'm sure I've suggested simply creating possible_cpus devtmpfs files up
front to get around this... maybe we should just do that.

Byungchul Park

Nov 1, 2017, 4:31:27 AM
to Peter Zijlstra, Michal Hocko, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Tue, Oct 31, 2017 at 04:25:32PM +0100, Peter Zijlstra wrote:
> But this report only includes a single (cpu-up) part and therefore is

Thanks for correcting me, Peter. I thought '#1 -> #2' and '#2 -> #3', where
#2 is 'cpuhp_state', would have been built with two different classes
of #2 in the latest code. Sorry for confusing Michal.

Byungchul Park

Nov 1, 2017, 4:59:33 AM
to Peter Zijlstra, Michal Hocko, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Tue, Oct 31, 2017 at 04:10:24PM +0100, Peter Zijlstra wrote:
> On Tue, Oct 31, 2017 at 03:58:04PM +0100, Michal Hocko wrote:
> > On Tue 31-10-17 15:52:47, Peter Zijlstra wrote:
> > [...]
> > > If we want to save those stacks; we have to save a stacktrace on _every_
> > > lock acquire, simply because we never know ahead of time if there will
> > > be a new link. Doing this is _expensive_.
> > >
> > > Furthermore, the space into which we store stacktraces is limited;
> > > since memory allocators use locks we can't very well use dynamic memory
> > > for lockdep -- that would give recursive and robustness issues.

I agree with all you said.

But I have a better idea: what about saving only the caller's ip of each
acquisition as additional information? Of course, it's not enough in
some cases, but it's cheap and better than doing nothing.

For example, when building A->B, let's save not only the full stack of B
but also the caller's ip of A, then use them in the warning like:

-> #3 aa_mutex:
a()
b()
c()
d()
---
while holding bb_mutex at $IP <- additional information I said

-> #2 bb_mutex:
e()
f()
g()
h()
---
while holding cc_mutex at $IP <- additional information I said

-> #1 cc_mutex:
i()
j()
k()
l()
---
while holding xxx at $IP <- additional information I said

and so on.

Don't you think this is worth working on?

Peter Zijlstra

Nov 1, 2017, 8:01:11 AM
to Byungchul Park, Michal Hocko, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Wed, Nov 01, 2017 at 05:59:27PM +0900, Byungchul Park wrote:
> On Tue, Oct 31, 2017 at 04:10:24PM +0100, Peter Zijlstra wrote:
> > On Tue, Oct 31, 2017 at 03:58:04PM +0100, Michal Hocko wrote:
> > > On Tue 31-10-17 15:52:47, Peter Zijlstra wrote:
> > > [...]
> > > > If we want to save those stacks; we have to save a stacktrace on _every_
> > > > lock acquire, simply because we never know ahead of time if there will
> > > > be a new link. Doing this is _expensive_.
> > > >
> > > > Furthermore, the space into which we store stacktraces is limited;
> > > > since memory allocators use locks we can't very well use dynamic memory
> > > > for lockdep -- that would give recursive and robustness issues.
>
> I agree with all you said.
>
> But, I have a better idea, that is, to save only the caller's ip of each
> acquisition as an additional information? Of course, it's not enough in
> some cases, but it's cheep and better than doing nothing.
>
> For example, when building A->B, let's save not only full stack of B,
> but also caller's ip of A together, then use them on warning like:

Like I said; I've never really had trouble finding where we take A. And
for the most difficult cases, just the IP isn't too useful either.

So that would solve a non-problem while leaving the real problem.

Byungchul Park

Nov 1, 2017, 7:55:01 PM
to Peter Zijlstra, Michal Hocko, Dmitry Vyukov, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, jgl...@redhat.com, LKML, linu...@kvack.org, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Wed, Nov 01, 2017 at 01:01:01PM +0100, Peter Zijlstra wrote:
> On Wed, Nov 01, 2017 at 05:59:27PM +0900, Byungchul Park wrote:
> > On Tue, Oct 31, 2017 at 04:10:24PM +0100, Peter Zijlstra wrote:
> > > On Tue, Oct 31, 2017 at 03:58:04PM +0100, Michal Hocko wrote:
> > > > On Tue 31-10-17 15:52:47, Peter Zijlstra wrote:
> > > > [...]
> > > > > If we want to save those stacks; we have to save a stacktrace on _every_
> > > > > lock acquire, simply because we never know ahead of time if there will
> > > > > be a new link. Doing this is _expensive_.
> > > > >
> > > > > Furthermore, the space into which we store stacktraces is limited;
> > > > > since memory allocators use locks we can't very well use dynamic memory
> > > > > for lockdep -- that would give recursive and robustness issues.
> >
> > I agree with all you said.
> >
> > But I have a better idea: what about saving only the caller's ip of each
> > acquisition as additional information? Of course, it's not enough in
> > some cases, but it's cheap and better than doing nothing.
> >
> > For example, when building A->B, let's save not only the full stack of B
> > but also the caller's ip of A, then use them in the warning like:
>
> Like I said; I've never really had trouble finding where we take A. And

Me neither, since I know my way around it. But I've seen many people get
confused by it, which is why I suggested it.

But leave it if you don't think so.

Dmitry Vyukov

Feb 14, 2018, 9:02:01 AM
to Byungchul Park, Peter Zijlstra, Michal Hocko, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, Jerome Glisse, LKML, Linux-MM, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
Hi,

What's the status of this? Was any patch submitted for this?

Michal Hocko

Feb 14, 2018, 10:44:36 AM
to Dmitry Vyukov, Byungchul Park, Peter Zijlstra, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, Jerome Glisse, LKML, Linux-MM, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com

Dmitry Vyukov

Feb 14, 2018, 10:57:49 AM
to Michal Hocko, Byungchul Park, Peter Zijlstra, syzbot, Andrew Morton, Dan Williams, Johannes Weiner, Jan Kara, Jerome Glisse, LKML, Linux-MM, sh...@fb.com, syzkall...@googlegroups.com, Thomas Gleixner, Vlastimil Babka, ying....@intel.com, kerne...@lge.com
On Wed, Feb 14, 2018 at 4:44 PM, Michal Hocko <mho...@kernel.org> wrote:
>> >> > > > [...]
>> >> > > > > If we want to save those stacks; we have to save a stacktrace on _every_
>> >> > > > > lock acquire, simply because we never know ahead of time if there will
>> >> > > > > be a new link. Doing this is _expensive_.
>> >> > > > >
>> >> > > > > Furthermore, the space into which we store stacktraces is limited;
>> >> > > > > since memory allocators use locks we can't very well use dynamic memory
>> >> > > > > for lockdep -- that would give recursive and robustness issues.
>> >> >
>> >> > I agree with all you said.
>> >> >
>> >> > But I have a better idea: what about saving only the caller's ip of each
>> >> > acquisition as additional information? Of course, it's not enough in
>> >> > some cases, but it's cheap and better than doing nothing.
>> >> >
>> >> > For example, when building A->B, let's save not only the full stack of B
>> >> > but also the caller's ip of A, then use them in the warning like:
>> >>
>> >> Like I said; I've never really had trouble finding where we take A. And
>> >
>> > Me neither, since I know my way around it. But I've seen many people get
>> > confused by it, which is why I suggested it.
>> >
>> > But leave it if you don't think so.
>> >
>> >> for the most difficult cases, just the IP isn't too useful either.
>> >>
>> >> So that would solve a non-problem while leaving the real problem.
>>
>>
>> Hi,
>>
>> What's the status of this? Was any patch submitted for this?
>
> This http://lkml.kernel.org/r/20171116120535...@kernel.org?

Thanks

Let's tell syzbot:

#syz fix: mm: drop hotplug lock from lru_add_drain_all()
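
For reference, the fix drops the get_online_cpus()/put_online_cpus() pair
from the drain path, relying on queue_work_on() being safe with respect to
CPU hotplug. Roughly (a sketch of the post-fix shape based on the commit
title, not the exact patch; the static mutex serializes concurrent
drainers, and the per-cpu work items are assumed initialized elsewhere):

  void lru_add_drain_all(void)
  {
          static DEFINE_MUTEX(lock);
          int cpu;

          mutex_lock(&lock);
          /* no cpu_hotplug_lock.rw_sem here any more, which removes
           * the #5 -> #0 edge of the cycle */
          for_each_online_cpu(cpu)
                  queue_work_on(cpu, mm_percpu_wq,
                                &per_cpu(lru_add_drain_work, cpu));
          for_each_online_cpu(cpu)
                  flush_work(&per_cpu(lru_add_drain_work, cpu));
          mutex_unlock(&lock);
  }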