Re: [PATCH] x86: optimize memcpy_flushcache


Yigal Korman

Jun 27, 2018, 7:23:28 AM
to Dan Williams, Mikulas Patocka, Mike Snitzer, Ingo Molnar, device-mapper development, linux-nvdimm, X86 ML, pmem
Hi,
I'm a bit late on this, but I have a question about the original patch.
I thought that for movnt (movntil, movntiq) to push the data into the
persistency domain (ADR), one must work with a length that is a
multiple of the cacheline size; otherwise the write-combining buffers
remain partially filled and you need to commit them with a fence
(sfence), which ruins the whole performance gain achieved here.
Am I wrong? Are the write-combining buffers part of the ADR domain,
or something along those lines?

Thanks,
Yigal

On Mon, Jun 18, 2018 at 7:38 PM, Dan Williams <dan.j.w...@intel.com> wrote:
> On Mon, Jun 18, 2018 at 5:50 AM, Mikulas Patocka <mpat...@redhat.com> wrote:
>> Hi Mike
>>
>> Could you please push this patch to kernel 4.18-rc? Dan Williams said
>> that he would submit it, but he forgot about it.
>
> ...to be clear, I acked it and asked Ingo to take it. It will need a
> resubmit for 4.19.
>
> Ingo, see below for a patch to pick up into -tip when you have a chance.
>
>>
>> Without this patch, dm-writecache suffers a 2% penalty because of
>> memcpy_flushcache overhead.
>>
>> Mikulas
>>
>>
>>
>> From: Mikulas Patocka <mpat...@redhat.com>
>>
>> I use memcpy_flushcache in my persistent memory driver for metadata
>> updates, and it turns out that the overhead of memcpy_flushcache causes
>> a 2% performance degradation compared to the "movnti" instruction
>> explicitly coded in inline assembler.
>>
>> This patch recognizes memcpy_flushcache calls with a constant short
>> length and turns them into inline assembler, so that I don't have to
>> use inline assembler in the driver.
>>
>> Signed-off-by: Mikulas Patocka <mpat...@redhat.com>
>>
>> ---
>> arch/x86/include/asm/string_64.h | 20 +++++++++++++++++++-
>> arch/x86/lib/usercopy_64.c | 4 ++--
>> 2 files changed, 21 insertions(+), 3 deletions(-)
>>
>> Index: linux-2.6/arch/x86/include/asm/string_64.h
>> ===================================================================
>> --- linux-2.6.orig/arch/x86/include/asm/string_64.h
>> +++ linux-2.6/arch/x86/include/asm/string_64.h
>> @@ -149,7 +149,25 @@ memcpy_mcsafe(void *dst, const void *src
>>
>> #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
>> #define __HAVE_ARCH_MEMCPY_FLUSHCACHE 1
>> -void memcpy_flushcache(void *dst, const void *src, size_t cnt);
>> +void __memcpy_flushcache(void *dst, const void *src, size_t cnt);
>> +static __always_inline void memcpy_flushcache(void *dst, const void *src, size_t cnt)
>> +{
>> + if (__builtin_constant_p(cnt)) {
>> + switch (cnt) {
>> + case 4:
>> + asm ("movntil %1, %0" : "=m"(*(u32 *)dst) : "r"(*(u32 *)src));
>> + return;
>> + case 8:
>> + asm ("movntiq %1, %0" : "=m"(*(u64 *)dst) : "r"(*(u64 *)src));
>> + return;
>> + case 16:
>> + asm ("movntiq %1, %0" : "=m"(*(u64 *)dst) : "r"(*(u64 *)src));
>> + asm ("movntiq %1, %0" : "=m"(*(u64 *)(dst + 8)) : "r"(*(u64 *)(src + 8)));
>> + return;
>> + }
>> + }
>> + __memcpy_flushcache(dst, src, cnt);
>> +}
>> #endif
>>
>> #endif /* __KERNEL__ */
>> Index: linux-2.6/arch/x86/lib/usercopy_64.c
>> ===================================================================
>> --- linux-2.6.orig/arch/x86/lib/usercopy_64.c
>> +++ linux-2.6/arch/x86/lib/usercopy_64.c
>> @@ -153,7 +153,7 @@ long __copy_user_flushcache(void *dst, c
>> return rc;
>> }
>>
>> -void memcpy_flushcache(void *_dst, const void *_src, size_t size)
>> +void __memcpy_flushcache(void *_dst, const void *_src, size_t size)
>> {
>> unsigned long dest = (unsigned long) _dst;
>> unsigned long source = (unsigned long) _src;
>> @@ -216,7 +216,7 @@ void memcpy_flushcache(void *_dst, const
>> clean_cache_range((void *) dest, size);
>> }
>> }
>> -EXPORT_SYMBOL_GPL(memcpy_flushcache);
>> +EXPORT_SYMBOL_GPL(__memcpy_flushcache);
>>
>> void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
>> size_t len)
> _______________________________________________
> Linux-nvdimm mailing list
> Linux-...@lists.01.org
> https://lists.01.org/mailman/listinfo/linux-nvdimm

Yigal Korman

Jun 27, 2018, 10:02:43 AM
to Dan Williams, Mikulas Patocka, Mike Snitzer, Ingo Molnar, device-mapper development, linux-nvdimm, X86 ML, pmem
On Wed, Jun 27, 2018 at 4:03 PM, Dan Williams <dan.j.w...@intel.com> wrote:
> On Wed, Jun 27, 2018 at 4:23 AM, Yigal Korman <yi...@plexistor.com> wrote:
>> Hi,
>> I'm a bit late on this, but I have a question about the original patch.
>> I thought that for movnt (movntil, movntiq) to push the data into the
>> persistency domain (ADR), one must work with a length that is a
>> multiple of the cacheline size; otherwise the write-combining buffers
>> remain partially filled and you need to commit them with a fence
>> (sfence), which ruins the whole performance gain achieved here.
>> Am I wrong? Are the write-combining buffers part of the ADR domain,
>> or something along those lines?
>
> The intent is to allow a batch of memcpy_flushcache() calls followed
> by a single sfence. Specifying a multiple of the cacheline size does
> not necessarily help, as an sfence is still needed to make sure that
> the movnt result has reached the ADR-safe domain.

Oh, right, I see that dm-writecache calls writecache_commit_flushed,
which in turn calls wmb().
I keep confusing *_nocache (e.g. copy_user_nocache), which includes the
sfence, with *_flushcache (e.g. memcpy_flushcache), which doesn't.
Thanks for clearing that up.