
[patch 0/2] x86,pat: Reduce contention on the memtype_lock -V3


ho...@sgi.com

Mar 15, 2010, 9:30:02 AM

Memtype tracking on x86 uses a single global spin_lock for both reading
and changing the memory type. This includes changes made to page->flags,
which are perfectly parallelizable.

Part one of the patchset makes the page-based tracking use cmpxchg
without a need for a lock.
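
For illustration, a minimal sketch of the cmpxchg retry pattern used in
part one (not the patch itself; the real change to set_page_memtype() is
quoted later in the thread, and the helper below is hypothetical, assuming
kernel context where cmpxchg() is available):

/*
 * Read the current flags word, compute the new value, and retry if
 * another CPU modified the word in the meantime.  No lock is needed
 * because cmpxchg() only succeeds when the word is still unchanged.
 */
static inline void update_flags_atomic(unsigned long *flags,
				       unsigned long clear_mask,
				       unsigned long set_bits)
{
	unsigned long old_flags, new_flags;

	do {
		old_flags = *flags;
		new_flags = (old_flags & ~clear_mask) | set_bits;
	} while (cmpxchg(flags, old_flags, new_flags) != old_flags);
}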

Part two of the patchset converts the spin_lock into a read/write lock.


To: Ingo Molnar <mi...@redhat.com>
To: H. Peter Anvin <h...@zytor.com>
To: Thomas Gleixner <tg...@linutronix.de>
Signed-off-by: Robin Holt <ho...@sgi.com>
Cc: Venkatesh Pallipadi <venkatesh...@intel.com>
Cc: Venkatesh Pallipadi <venkatesh...@gmail.com>
Cc: Suresh Siddha <suresh....@intel.com>
Cc: Linux Kernel Mailing List <linux-...@vger.kernel.org>
Cc: x...@kernel.org

---

arch/x86/include/asm/cacheflush.h | 44 +++++++++++++++++++++-----------------
arch/x86/mm/pat.c | 28 ++++++++----------------
2 files changed, 35 insertions(+), 37 deletions(-)
--

ho...@sgi.com

Mar 15, 2010, 9:30:02 AM
memtype_atomic_update_V3

ho...@sgi.com

Mar 15, 2010, 9:30:02 AM
memtype_rwlock_V3

Robin Holt

Mar 17, 2010, 6:20:02 AM
Is there any movement on this? The problem is easily understood and
the code in this patch is quite clear. I am having difficulty getting
distros to evaluate this patch because it has not been accepted upstream.
While I understand that a deliberate review process is desirable, I first
submitted these for review on 26 Feb.

Thanks,
Robin

On Mon, Mar 15, 2010 at 08:21:04AM -0500, ho...@sgi.com wrote:
>
> While testing an application using the xpmem (out of kernel) driver, we
> noticed a significant page fault rate reduction on x86_64 with respect
> to ia64. For one test running with 32 cpus, one thread per cpu, it
> took 01:08 for each of the threads to vm_insert_pfn 2GB worth of pages.
> For the same test running on 256 cpus, one thread per cpu, it took 14:48
> to vm_insert_pfn 2GB worth of pages.
>
> The slowdown was tracked to lookup_memtype which acquires the
> spinlock memtype_lock. This heavily contended lock was slowing down
> vm_insert_pfn().
>
> With the cmpxchg on page->flags method, both the 32 cpu and 256 cpu
> cases take approx 00:01.3 seconds to complete.


>
>
> To: Ingo Molnar <mi...@redhat.com>
> To: H. Peter Anvin <h...@zytor.com>
> To: Thomas Gleixner <tg...@linutronix.de>
> Signed-off-by: Robin Holt <ho...@sgi.com>
> Cc: Venkatesh Pallipadi <venkatesh...@intel.com>
> Cc: Venkatesh Pallipadi <venkatesh...@gmail.com>
> Cc: Suresh Siddha <suresh....@intel.com>
> Cc: Linux Kernel Mailing List <linux-...@vger.kernel.org>
> Cc: x...@kernel.org
>
> ---
>

> Changes since -V2:
> 1) Cleared up the naming of the masks used in setting and clearing
> the flags.
>
>
> Changes since -V1:
> 1) Introduce atomically setting and clearing the page flags and not
> using the global memtype_lock to protect page->flags.
>
> 2) This allowed me the opportunity to convert the rwlock back into a
> spinlock and not affect _MY_ tests performance as all the pages my test
> was utilizing are tracked by struct pages.
>
> 3) Corrected the commit log. The timings were for 32 cpus and not 256.
>
> arch/x86/include/asm/cacheflush.h | 44 +++++++++++++++++++++-----------------
> arch/x86/mm/pat.c | 8 ------
> 2 files changed, 25 insertions(+), 27 deletions(-)
>
> Index: linux-next/arch/x86/include/asm/cacheflush.h
> ===================================================================
> --- linux-next.orig/arch/x86/include/asm/cacheflush.h 2010-03-12 19:55:06.690471974 -0600
> +++ linux-next/arch/x86/include/asm/cacheflush.h 2010-03-12 19:55:41.846472324 -0600
> @@ -44,9 +44,6 @@ static inline void copy_from_user_page(s
> memcpy(dst, src, len);
> }
>
> -#define PG_WC PG_arch_1
> -PAGEFLAG(WC, WC)
> -
> #ifdef CONFIG_X86_PAT
> /*
> * X86 PAT uses page flags WC and Uncached together to keep track of
> @@ -55,16 +52,24 @@ PAGEFLAG(WC, WC)
> * _PAGE_CACHE_UC_MINUS and fourth state where page's memory type has not
> * been changed from its default (value of -1 used to denote this).
> * Note we do not support _PAGE_CACHE_UC here.
> - *
> - * Caller must hold memtype_lock for atomicity.
> */
> +
> +#define _PGMT_DEFAULT 0
> +#define _PGMT_WC PG_arch_1
> +#define _PGMT_UC_MINUS PG_uncached
> +#define _PGMT_WB (PG_uncached | PG_arch_1)
> +#define _PGMT_MASK (PG_uncached | PG_arch_1)
> +#define _PGMT_CLEAR_MASK (~_PGMT_MASK)
> +
> static inline unsigned long get_page_memtype(struct page *pg)
> {
> - if (!PageUncached(pg) && !PageWC(pg))
> + unsigned long pg_flags = pg->flags & _PGMT_MASK;
> +
> + if (pg_flags == _PGMT_DEFAULT)
> return -1;
> - else if (!PageUncached(pg) && PageWC(pg))
> + else if (pg_flags == _PGMT_WC)
> return _PAGE_CACHE_WC;
> - else if (PageUncached(pg) && !PageWC(pg))
> + else if (pg_flags == _PGMT_UC_MINUS)
> return _PAGE_CACHE_UC_MINUS;
> else
> return _PAGE_CACHE_WB;
> @@ -72,25 +77,26 @@ static inline unsigned long get_page_mem
>
> static inline void set_page_memtype(struct page *pg, unsigned long memtype)
> {
> + unsigned long memtype_flags = _PGMT_DEFAULT;
> + unsigned long old_flags;
> + unsigned long new_flags;
> +
> switch (memtype) {
> case _PAGE_CACHE_WC:
> - ClearPageUncached(pg);
> - SetPageWC(pg);
> + memtype_flags = _PGMT_WC;
> break;
> case _PAGE_CACHE_UC_MINUS:
> - SetPageUncached(pg);
> - ClearPageWC(pg);
> + memtype_flags = _PGMT_UC_MINUS;
> break;
> case _PAGE_CACHE_WB:
> - SetPageUncached(pg);
> - SetPageWC(pg);
> - break;
> - default:
> - case -1:
> - ClearPageUncached(pg);
> - ClearPageWC(pg);
> + memtype_flags = _PGMT_WB;
> break;
> }
> +
> + do {
> + old_flags = pg->flags;
> + new_flags = (old_flags & _PGMT_CLEAR_MASK) | memtype_flags;
> + } while (cmpxchg(&pg->flags, old_flags, new_flags) != old_flags);
> }
> #else
> static inline unsigned long get_page_memtype(struct page *pg) { return -1; }
> Index: linux-next/arch/x86/mm/pat.c
> ===================================================================
> --- linux-next.orig/arch/x86/mm/pat.c 2010-03-12 19:55:06.690471974 -0600
> +++ linux-next/arch/x86/mm/pat.c 2010-03-12 19:55:59.434468352 -0600
> @@ -190,8 +190,6 @@ static int pat_pagerange_is_ram(unsigned
> * Here we do two pass:
> * - Find the memtype of all the pages in the range, look for any conflicts
> * - In case of no conflicts, set the new memtype for pages in the range
> - *
> - * Caller must hold memtype_lock for atomicity.
> */
> static int reserve_ram_pages_type(u64 start, u64 end, unsigned long req_type,
> unsigned long *new_type)
> @@ -297,9 +295,7 @@ int reserve_memtype(u64 start, u64 end,
> is_range_ram = pat_pagerange_is_ram(start, end);
> if (is_range_ram == 1) {
>
> - spin_lock(&memtype_lock);
> err = reserve_ram_pages_type(start, end, req_type, new_type);
> - spin_unlock(&memtype_lock);
>
> return err;
> } else if (is_range_ram < 0) {
> @@ -351,9 +347,7 @@ int free_memtype(u64 start, u64 end)
> is_range_ram = pat_pagerange_is_ram(start, end);
> if (is_range_ram == 1) {
>
> - spin_lock(&memtype_lock);
> err = free_ram_pages_type(start, end);
> - spin_unlock(&memtype_lock);
>
> return err;
> } else if (is_range_ram < 0) {
> @@ -394,10 +388,8 @@ static unsigned long lookup_memtype(u64
>
> if (pat_pagerange_is_ram(paddr, paddr + PAGE_SIZE)) {
> struct page *page;
> - spin_lock(&memtype_lock);
> page = pfn_to_page(paddr >> PAGE_SHIFT);
> rettype = get_page_memtype(page);
> - spin_unlock(&memtype_lock);
> /*
> * -1 from get_page_memtype() implies RAM page is in its
> * default state and not reserved, and hence of type WB

Suresh Siddha

Mar 17, 2010, 11:30:02 AM
On Mon, 2010-03-15 at 06:21 -0700, ho...@sgi.com wrote:
> While testing an application using the xpmem (out of kernel) driver, we
> noticed a significant page fault rate reduction on x86_64 with respect
> to ia64. For one test running with 32 cpus, one thread per cpu, it
> took 01:08 for each of the threads to vm_insert_pfn 2GB worth of pages.
> For the same test running on 256 cpus, one thread per cpu, it took 14:48
> to vm_insert_pfn 2GB worth of pages.
>
> The slowdown was tracked to lookup_memtype which acquires the
> spinlock memtype_lock. This heavily contended lock was slowing down
> vm_insert_pfn().
>
> With the cmpxchg on page->flags method, both the 32 cpu and 256 cpu
> cases take approx 00:01.3 seconds to complete.

Acked-by: Suresh Siddha <suresh....@intel.com>

Suresh Siddha

Mar 17, 2010, 11:30:03 AM
On Mon, 2010-03-15 at 06:21 -0700, ho...@sgi.com wrote:
> Convert the memtype_lock from a spin_lock to an rw_lock. The first
> version of my patch had this and it did improve performance for fault-in
> times. The atomic page flags patch (first in the series) improves
> things much more for ram pages. This patch is to help the other pages.
>

Acked-by: Suresh Siddha <suresh....@intel.com>

X86 folks, can you please queue both these patches if you don't have
any objections.

thanks,
suresh

H. Peter Anvin

Mar 17, 2010, 4:00:02 PM
Well, as you know :) tglx and I are on the road ... I'll try to get to it on Friday before I take off again.

"Suresh Siddha" <suresh....@intel.com> wrote:

>On Mon, 2010-03-15 at 06:21 -0700, ho...@sgi.com wrote:
>> Convert the memtype_lock from a spin_lock to an rw_lock. The first
>> version of my patch had this and it did improve performance for fault-in
>> times. The atomic page flags patch (first in the series) improves
>> things much more for ram pages. This patch is to help the other pages.
>>
>
>Acked-by: Suresh Siddha <suresh....@intel.com>
>
>X86 folks, can you please queue both these patches if you don't have
>any objections.
>
>thanks,
>suresh
>

--
Sent from my mobile phone, pardon any lack of formatting.

Suresh Siddha

Mar 17, 2010, 7:30:03 PM
On Wed, 2010-03-17 at 12:51 -0700, H. Peter Anvin wrote:
> Well, as you know :) tglx and I are on the road ... I'll try to get to it on Friday before I take off again.

Also, I talked to Thomas about this rwlock conversion and he referred to
RT issues with rwlocks; the best way to avoid those is to use RCU instead.

For now, the second patch can perhaps be dropped, as it is a proactive
change anyway. We can revisit this in the future.

The first patch, "[patch 1/2] x86,pat Update the page flags for memtype
atomically instead of using memtype_lock. -V3", is good to go.

Rafael J. Wysocki

Mar 23, 2010, 7:20:02 PM
On Monday 15 March 2010, ho...@sgi.com wrote:
>
> While testing an application using the xpmem (out of kernel) driver, we
> noticed a significant page fault rate reduction on x86_64 with respect
> to ia64. For one test running with 32 cpus, one thread per cpu, it
> took 01:08 for each of the threads to vm_insert_pfn 2GB worth of pages.
> For the same test running on 256 cpus, one thread per cpu, it took 14:48
> to vm_insert_pfn 2GB worth of pages.
>
> The slowdown was tracked to lookup_memtype which acquires the
> spinlock memtype_lock. This heavily contended lock was slowing down
> vm_insert_pfn().
>
> With the cmpxchg on page->flags method, both the 32 cpu and 256 cpu
> cases take approx 00:01.3 seconds to complete.
>
>
> To: Ingo Molnar <mi...@redhat.com>
> To: H. Peter Anvin <h...@zytor.com>
> To: Thomas Gleixner <tg...@linutronix.de>
> Signed-off-by: Robin Holt <ho...@sgi.com>
> Cc: Venkatesh Pallipadi <venkatesh...@intel.com>
> Cc: Venkatesh Pallipadi <venkatesh...@gmail.com>
> Cc: Suresh Siddha <suresh....@intel.com>
> Cc: Linux Kernel Mailing List <linux-...@vger.kernel.org>
> Cc: x...@kernel.org
>
> ---
>

Can we manipulate the PG_* constants this way? They are just bit numbers,
so _PGMT_WB should be ((1 << PG_uncached) | (1 << PG_arch_1)), for
example.
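
For reference, a sketch of the mask definitions built from the bit numbers,
along the lines suggested above (illustrative only; PG_arch_1 and
PG_uncached are bit numbers from enum pageflags):

#define _PGMT_DEFAULT		0
#define _PGMT_WC		(1UL << PG_arch_1)
#define _PGMT_UC_MINUS		(1UL << PG_uncached)
#define _PGMT_WB		((1UL << PG_uncached) | (1UL << PG_arch_1))
#define _PGMT_MASK		((1UL << PG_uncached) | (1UL << PG_arch_1))
#define _PGMT_CLEAR_MASK	(~_PGMT_MASK)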

Rafael

Peter Zijlstra

Mar 24, 2010, 7:40:02 AM
On Wed, 2010-03-17 at 16:19 -0800, Suresh Siddha wrote:
> On Wed, 2010-03-17 at 12:51 -0700, H. Peter Anvin wrote:
> > Well, as you know :) tglx and I are on the road ... I'll try to get to it on Friday before I take off again.
>
> Also I talked to Thomas about this rwlock conversion and he referred to
> RT issues with rwlock. And the best is to avoid this using RCU.

It's not just RT; even for mainline, rwlock_t is a massive pain and often
is no better (actually worse) than a spinlock due to the massive
cacheline bouncing it introduces.

Suresh Siddha

Mar 24, 2010, 12:20:02 PM
On Wed, 2010-03-24 at 04:32 -0700, Peter Zijlstra wrote:
> On Wed, 2010-03-17 at 16:19 -0800, Suresh Siddha wrote:
> > On Wed, 2010-03-17 at 12:51 -0700, H. Peter Anvin wrote:
> > > Well, as you know :) tglx and I are on the road ... I'll try to get to it on Friday before I take off again.
> >
> > Also I talked to Thomas about this rwlock conversion and he referred to
> > RT issues with rwlock. And the best is to avoid this using RCU.
>
> It's not just RT; even for mainline, rwlock_t is a massive pain and often
> is no better (actually worse) than a spinlock due to the massive
> cacheline bouncing it introduces.

Don't we have the same cacheline bouncing issues with the ticket
spinlocks?

thanks,
suresh

Peter Zijlstra

Mar 24, 2010, 12:40:03 PM
On Wed, 2010-03-24 at 09:12 -0700, Suresh Siddha wrote:
> On Wed, 2010-03-24 at 04:32 -0700, Peter Zijlstra wrote:
> > On Wed, 2010-03-17 at 16:19 -0800, Suresh Siddha wrote:
> > > On Wed, 2010-03-17 at 12:51 -0700, H. Peter Anvin wrote:
> > > > Well, as you know :) tglx and I are on the road ... I'll try to get to it on Friday before I take off again.
> > >
> > > Also I talked to Thomas about this rwlock conversion and he referred to
> > > RT issues with rwlock. And the best is to avoid this using RCU.
> >
> > It's not just RT; even for mainline, rwlock_t is a massive pain and often
> > is no better (actually worse) than a spinlock due to the massive
> > cacheline bouncing it introduces.
>
> Don't we have the same cacheline bouncing issues with the ticket
> spinlocks?

Sure, but the rwlock_t is unfair and can degrade into much worse
performance than the spinlock.

The thing is, rwlock_t needs to write to the cacheline for each read
acquire, so unless the hold time is much, much longer than the cacheline
bounce time, it's just not worth it; but since it's a rwlock_t it should
have a short hold time, hence it's a useless construct :-)
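
For context, a minimal userspace model of why even a read acquisition
bounces the lock's cacheline: every reader performs an atomic
read-modify-write on the shared lock word (a simplified toy, not the
kernel's actual rwlock implementation; writer handling is omitted):

#include <stdatomic.h>

/*
 * Toy reader-count model.  The point is that read-locking must atomically
 * modify a word shared by all CPUs, so the cacheline ping-pongs between
 * readers much as it would for a plain spinlock.
 */
struct toy_rwlock {
	atomic_int readers;
};

static inline void toy_read_lock(struct toy_rwlock *l)
{
	/* Atomic RMW: pulls the line into this CPU's cache in exclusive state. */
	atomic_fetch_add_explicit(&l->readers, 1, memory_order_acquire);
}

static inline void toy_read_unlock(struct toy_rwlock *l)
{
	atomic_fetch_sub_explicit(&l->readers, 1, memory_order_release);
}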
