This patch fixes the following potential deadlock reported by Bart Van Assche:
=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.37-rc6+ #12
-------------------------------------------------------
grep/10562 is trying to acquire lock:
(slub_lock){+++++.}, at: [<ffffffff8114baec>] show_slab_objects+0xfc/0x390
but task is already holding lock:
(s_active#182){++++.+}, at: [<ffffffff811c4b16>] sysfs_read_file+0x96/0x1c0
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (s_active#182){++++.+}:
[<ffffffff81096b00>] lock_acquire+0xa0/0x150
[<ffffffff811c5b77>] sysfs_deactivate+0x157/0x1c0
[<ffffffff811c6273>] sysfs_addrm_finish+0x43/0x70
[<ffffffff811c637e>] sysfs_remove_dir+0x7e/0xa0
[<ffffffff812c3616>] kobject_del+0x16/0x40
[<ffffffff8114c132>] kmem_cache_destroy+0x2f2/0x380
[<ffffffffa01b4bd1>] 0xffffffffa01b4bd1
[<ffffffff810a1682>] sys_delete_module+0x1a2/0x280
[<ffffffff81003042>] system_call_fastpath+0x16/0x1b
-> #0 (slub_lock){+++++.}:
[<ffffffff810968c0>] __lock_acquire+0x1370/0x1510
[<ffffffff81096b00>] lock_acquire+0xa0/0x150
[<ffffffff81548f41>] down_read+0x51/0xa0
[<ffffffff8114baec>] show_slab_objects+0xfc/0x390
[<ffffffff8114be33>] objects_show+0x13/0x20
[<ffffffff81145e92>] slab_attr_show+0x22/0x30
[<ffffffff811c4b59>] sysfs_read_file+0xd9/0x1c0
[<ffffffff81158f8d>] vfs_read+0xcd/0x1a0
[<ffffffff81159854>] sys_read+0x54/0x90
[<ffffffff81003042>] system_call_fastpath+0x16/0x1b
other info that might help us debug this:
2 locks held by grep/10562:
#0: (&buffer->mutex){+.+.+.}, at: [<ffffffff811c4ac6>] sysfs_read_file+0x46/0x1c0
#1: (s_active#182){++++.+}, at: [<ffffffff811c4b16>] sysfs_read_file+0x96/0x1c0
stack backtrace:
Pid: 10562, comm: grep Tainted: G W 2.6.37-rc6+ #12
Call Trace:
[<ffffffff81094379>] print_circular_bug+0xf9/0x100
[<ffffffff810968c0>] __lock_acquire+0x1370/0x1510
[<ffffffff8100a3d9>] ? sched_clock+0x9/0x10
[<ffffffff81148a5c>] ? check_object+0xac/0x250
[<ffffffff81096b00>] lock_acquire+0xa0/0x150
[<ffffffff8114baec>] ? show_slab_objects+0xfc/0x390
[<ffffffff8109522d>] ? trace_hardirqs_on_caller+0x14d/0x190
[<ffffffff81548f41>] down_read+0x51/0xa0
[<ffffffff8114baec>] ? show_slab_objects+0xfc/0x390
[<ffffffff8114baec>] show_slab_objects+0xfc/0x390
[<ffffffff8114be33>] objects_show+0x13/0x20
[<ffffffff81145e92>] slab_attr_show+0x22/0x30
[<ffffffff811c4b59>] sysfs_read_file+0xd9/0x1c0
[<ffffffff81158f8d>] vfs_read+0xcd/0x1a0
[<ffffffff81159854>] sys_read+0x54/0x90
[<ffffffff81003042>] system_call_fastpath+0x16/0x1b
The problem here is that the implicit locking order is (1) sysfs internals and
(2) slub_lock, but kmem_cache_destroy() violates it by calling
sysfs_slab_remove() while holding slub_lock.
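Condensed from the lockdep traces above, the two call paths acquire the same
pair of locks in opposite orders (kmem_cache_destroy() reaches kobject_del()
through sysfs_slab_remove(), as the patch below shows):

	/* Path 1: cat /sys/kernel/slab/<cache>/objects */
	sysfs_read_file()		/* holds s_active */
	  slab_attr_show()
	    objects_show()
	      show_slab_objects()
	        down_read(&slub_lock);	/* order: s_active -> slub_lock */

	/* Path 2: destroying a cache */
	kmem_cache_destroy()		/* holds slub_lock (down_write) */
	  kobject_del()
	    sysfs_remove_dir()
	      sysfs_addrm_finish()
	        sysfs_deactivate();	/* waits for s_active: slub_lock -> s_active */

Each path can block the other, hence the circular dependency.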
Reference: https://bugzilla.kernel.org/show_bug.cgi?id=25622
Reported-by: Bart Van Assche <bart.va...@gmail.com>
Cc: Bart Van Assche <bart.va...@gmail.com>
Cc: Andrew Morton <ak...@linux-foundation.org>
Cc: Christoph Lameter <c...@linux.com>
Cc: David Rientjes <rien...@google.com>
Signed-off-by: Pekka Enberg <pen...@kernel.org>
---
mm/slub.c | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index bec0e35..9831004 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2516,7 +2516,13 @@ void kmem_cache_destroy(struct kmem_cache *s)
}
if (s->flags & SLAB_DESTROY_BY_RCU)
rcu_barrier();
+ /*
+ * The locking order is (1) sysfs internal locks and (2)
+ * slub_lock so drop the latter to avoid a deadlock.
+ */
+ up_write(&slub_lock);
sysfs_slab_remove(s);
+ return;
}
up_write(&slub_lock);
}
--
1.7.0.4
> diff --git a/mm/slub.c b/mm/slub.c
> index bec0e35..9831004 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2516,7 +2516,13 @@ void kmem_cache_destroy(struct kmem_cache *s)
> }
> if (s->flags & SLAB_DESTROY_BY_RCU)
> rcu_barrier();
> + /*
> + * The locking order is (1) sysfs internal locks and (2)
> + * slub_lock so drop the latter to avoid a deadlock.
> + */
> + up_write(&slub_lock);
> sysfs_slab_remove(s);
> + return;
> }
> up_write(&slub_lock);
> }
slub_lock protects slab_state following kmem_cache_init() when caches are
created or destroyed before the sysfs slab initcall sets the global
variable, so couldn't this leak the kobject if its initialization and
addition were deferred in kmem_cache_create() and left to slab_sysfs_init()?
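For reference, the deferral in question looks roughly like this in the
2.6.37-era mm/slub.c (a condensed sketch, not the verbatim code; error
handling, kset setup and alias handling are omitted):

	static int sysfs_slab_add(struct kmem_cache *s)
	{
		if (slab_state < SYSFS)
			/* Defer until later */
			return 0;
		/* ... kobject_init_and_add(&s->kobj, ...) etc. ... */
		return 0;
	}

	static int __init slab_sysfs_init(void)
	{
		struct kmem_cache *s;

		down_write(&slub_lock);
		slab_state = SYSFS;
		list_for_each_entry(s, &slab_caches, list)
			sysfs_slab_add(s);	/* deferred adds happen here */
		up_write(&slub_lock);
		/* ... */
		return 0;
	}

The question is what happens to a cache that is created and destroyed
entirely inside that window.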
> slub_lock protects slab_state following kmem_cache_init() when caches are
> created or destroyed before the sysfs slab initcall sets the global
> variable, so couldn't this leak the kobject if its initialization and
> addition were deferred in kmem_cache_create() and left to slab_sysfs_init()?
Potentially.
Also note that the same lock inversion occurs in kmem_cache_create():
slub_lock is taken and then sysfs operations are performed.
The problem is that slub_lock has multiple roles:
1. Protect the list of slab caches and the slab_state (that's why it's
mainly used in kmem_cache_destroy and kmem_cache_create)
2. Protect against removal and addition of cpus and nodes
(show_slab_objects)
If we could create another lock that protects against new cpus / nodes
being added and removed then we could take that lock in
show_slab_objects() instead.
IMHO the lock order must have the slub_lock at the top.
The other lock, which would prevent adding/removing nodes and cpus, could
be taken in show_slab_objects and in the corresponding hotplug functions.
We do not need to take the lock at all in show_slab_objects if cpu and
memory hotplug are not enabled. Maybe there is a hotplug-specific lock
that can be used instead?
Subject: slub: Avoid use of slub_lock in show_slab_objects()
The purpose of the locking is to prevent removal and additions
of nodes when statistics are gathered for a slab cache. So we
need to avoid racing with memory hotplug functionality.
It is enough to take the memory hotplug locks there instead
of the slub_lock.
Signed-off-by: Christoph Lameter <c...@linux.com>
---
mm/slub.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-01-05 09:55:34.000000000 -0600
+++ linux-2.6/mm/slub.c 2011-01-05 09:56:27.000000000 -0600
@@ -3821,7 +3821,7 @@ static ssize_t show_slab_objects(struct
}
}
- down_read(&slub_lock);
+ lock_memory_hotplug();
#ifdef CONFIG_SLUB_DEBUG
if (flags & SO_ALL) {
for_each_node_state(node, N_NORMAL_MEMORY) {
@@ -3862,7 +3862,7 @@ static ssize_t show_slab_objects(struct
x += sprintf(buf + x, " N%d=%lu",
node, nodes[node]);
#endif
- up_read(&slub_lock);
+ unlock_memory_hotplug();
kfree(nodes);
return x + sprintf(buf + x, "\n");
Makes sense. Bart, does this fix the problem for you?
The action sequence that had triggered this report (invoke
kmem_cache_create(); read all files in /sys/kernel/slab; invoke
kmem_cache_destroy()) now passes without triggering lock inversion
complaints.
Bart.
Thanks for testing. David, does Christoph's patch look OK to you?
I think it certainly fixes the problem at hand, but I think we also need
to take lock_memory_hotplug() in slab_mem_going_online_callback() to keep
show_slab_objects() consistent when it is read during concurrent node
hot-add, since hot-add sets bits in N_NORMAL_MEMORY. The MEM_OFFLINE
callback is already handled at a higher level by taking the lock in the
hotplug layer, but we need to protect the MEM_GOING_ONLINE and
MEM_CANCEL_ONLINE callbacks if slub_lock is no longer used to protect the
node arrays (which was admittedly always convenient since it's typically
associated with an iteration through slab_caches).
> > Thanks for testing. David, does Christoph's patch look OK to you?
> >
>
> I think it certainly fixes the problem at hand, but I think we also need
> to take lock_memory_hotplug() in slab_mem_going_online_callback() to keep
> show_slab_objects() consistent when it is read during concurrent node
> hot-add, since hot-add sets bits in N_NORMAL_MEMORY. The MEM_OFFLINE
> callback is already handled at a higher level by taking the lock in the
> hotplug layer, but we need to protect the MEM_GOING_ONLINE and
> MEM_CANCEL_ONLINE callbacks if slub_lock is no longer used to protect the
> node arrays (which was admittedly always convenient since it's typically
> associated with an iteration through slab_caches).
Hmm, I would have expected the callbacks all to be done under hotplug
locking.
The MEM_GOING_ONLINE callback is not that critical since a node that is
coming online presumably has only a minimal set of objects necessary for
potential future allocations.
slab_mem_going_online_callback() etc already take the slub_lock since they
have to iterate over the list of slab caches in existence. We could take
the hotplug lock there as well.
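For context, a condensed paraphrase of the callback in question (structure
only; the real code in mm/slub.c of that era also handles allocation
failures and per-cache node setup):

	static int slab_mem_going_online_callback(void *arg)
	{
		struct memory_notify *marg = arg;
		struct kmem_cache *s;
		int nid = marg->status_change_nid;

		/* If the node already had normal memory, nothing to do. */
		if (nid < 0)
			return 0;

		/*
		 * A new node is coming online: every existing cache needs a
		 * kmem_cache_node for it, hence the walk under slub_lock.
		 */
		down_write(&slub_lock);
		list_for_each_entry(s, &slab_caches, list) {
			/* ... allocate and install s->node[nid] ... */
		}
		up_write(&slub_lock);
		return 0;
	}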
Kame-san: Can you enlighten us on hotplug locking? And also check this
patch?
> On Thu, 6 Jan 2011, David Rientjes wrote:
>
> > > Thanks for testing. David, does Christoph's patch look OK to you?
> > >
> >
> > I think it certainly fixes the problem at hand, but I think we also need
> > to take lock_memory_hotplug() in slab_mem_going_online_callback() to keep
> > show_slab_objects() consistent when it is read during concurrent node
> > hot-add, since hot-add sets bits in N_NORMAL_MEMORY. The MEM_OFFLINE
> > callback is already handled at a higher level by taking the lock in the
> > hotplug layer, but we need to protect the MEM_GOING_ONLINE and
> > MEM_CANCEL_ONLINE callbacks if slub_lock is no longer used to protect the
> > node arrays (which was admittedly always convenient since it's typically
> > associated with an iteration through slab_caches).
>
> Hmm, I would have expected the callbacks all to be done under hotplug
> locking.
>
> The MEM_GOING_ONLINE callback is not that critical since a node that is
> coming online presumably has only a minimal set of objects necessary for
> potential future allocations.
>
> slab_mem_going_online_callback() etc already take the slub_lock since they
> have to iterate over the list of slab caches in existence. We could take
> the hotplug lock there as well.
>
> Kame-san: Can you enlighten us on hotplug locking? And also check this
> patch?
>
IIRC, lock_memory_hotplug() is a new lock in 2.6.37, added by Kosaki
as a bugfix.
This lock is for avoiding races between hwpoison and memory hotplug; the
original lock (before the replacement) was for avoiding races with
hibernation. online_pages() was left outside the lock because it only turns
PG_reserved pages into free pages, which does not race with hibernation.
But nice catch. I think MEM_GOING_ONLINE and MEM_ONLINE should be done under
lock_memory_hotplug(). So could you update your patch and modify online_pages()?
IIUC, online_pages() is a user interface function and there is no downside
to inserting the lock there. online_pages() should be serialized.
Thanks,
-Kame
> But nice catch. I think MEM_GOING_ONLINE and MEM_ONLINE should be done under
> lock_memory_hotplug(). So could you update your patch and modify online_pages()?
>
> IIUC, online_pages() is a user interface function and there is no downside
> to inserting the lock there. online_pages() should be serialized.
Subject: slub: Avoid use of slub_lock in show_slab_objects()
The purpose of the locking is to prevent removal and additions
of nodes when statistics are gathered for a slab cache. So we
need to avoid racing with memory hotplug functionality.
It is enough to take the memory hotplug locks there instead
of the slub_lock.
online_pages() does not acquire the memory_hotplug lock. So
add the missing locking there.
Signed-off-by: Christoph Lameter <c...@linux.com>
---
mm/memory_hotplug.c | 17 +++++++++++------
mm/slub.c | 4 ++--
2 files changed, 13 insertions(+), 8 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-01-07 09:17:35.000000000 -0600
+++ linux-2.6/mm/slub.c 2011-01-07 09:17:38.000000000 -0600
@@ -3821,7 +3821,7 @@ static ssize_t show_slab_objects(struct
}
}
- down_read(&slub_lock);
+ lock_memory_hotplug();
#ifdef CONFIG_SLUB_DEBUG
if (flags & SO_ALL) {
for_each_node_state(node, N_NORMAL_MEMORY) {
@@ -3862,7 +3862,7 @@ static ssize_t show_slab_objects(struct
x += sprintf(buf + x, " N%d=%lu",
node, nodes[node]);
#endif
- up_read(&slub_lock);
+ unlock_memory_hotplug();
kfree(nodes);
return x + sprintf(buf + x, "\n");
}
Index: linux-2.6/mm/memory_hotplug.c
===================================================================
--- linux-2.6.orig/mm/memory_hotplug.c 2011-01-07 09:17:59.000000000 -0600
+++ linux-2.6/mm/memory_hotplug.c 2011-01-07 09:20:07.000000000 -0600
@@ -415,12 +415,12 @@ int online_pages(unsigned long pfn, unsi
if (node_present_pages(nid) == 0)
arg.status_change_nid = nid;
+ lock_memory_hotplug();
ret = memory_notify(MEM_GOING_ONLINE, &arg);
ret = notifier_to_errno(ret);
- if (ret) {
- memory_notify(MEM_CANCEL_ONLINE, &arg);
- return ret;
- }
+ if (ret)
+ goto cancel;
+
/*
* This doesn't need a lock to do pfn_to_page().
* The section can't be removed here because of the
@@ -442,8 +442,7 @@ int online_pages(unsigned long pfn, unsi
mutex_unlock(&zonelists_mutex);
printk(KERN_DEBUG "online_pages %lx at %lx failed\n",
nr_pages, pfn);
- memory_notify(MEM_CANCEL_ONLINE, &arg);
- return ret;
+ goto cancel;
}
zone->present_pages += onlined_pages;
@@ -468,7 +467,13 @@ int online_pages(unsigned long pfn, unsi
if (onlined_pages)
memory_notify(MEM_ONLINE, &arg);
+ unlock_memory_hotplug();
return 0;
+
+cancel:
+ memory_notify(MEM_CANCEL_ONLINE, &arg);
+ unlock_memory_hotplug();
+ return ret;
}
#endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */
> Subject: slub: Avoid use of slub_lock in show_slab_objects()
>
> The purpose of the locking is to prevent removal and additions
> of nodes when statistics are gathered for a slab cache. So we
> need to avoid racing with memory hotplug functionality.
>
> It is enough to take the memory hotplug locks there instead
> of the slub_lock.
>
Because memory hotplug is the only time s->node[] is modified after the
sysfs files are created, and show_slab_objects() can only run once those
files exist.
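Concretely, the unprotected walk looks like this (condensed from the
show_slab_objects() hunks above):

	for_each_node_state(node, N_NORMAL_MEMORY) {
		struct kmem_cache_node *n = get_node(s, node);

		/*
		 * Without the hotplug lock, concurrent hot-add can set a bit
		 * in N_NORMAL_MEMORY before s->node[node] is populated, or
		 * between two of these loops, giving inconsistent counts.
		 */
		/* ... accumulate per-node counts into nodes[node] ... */
	}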
> online_pages() does not acquire the memory_hotplug lock. So
> add the missing locking there.
>
> Signed-off-by: Christoph Lameter <c...@linux.com>
Acked-by: David Rientjes <rien...@google.com>
[ Should probably be separated out into two patches, one for the
memory hotplug locking addition and one for the slub fix, both
should be pushed during this merge window. If so, a comment
describing the new semantics of lock_memory_hotplug() to protect
data structures that may be modified in hotplug notifier
callbacks would be appreciated. ]
Shouldn't sta...@kernel.org be CC'ed too?
Bart.
Yup, I'll add it to the patch.
Agreed.
> On Fri, Jan 7, 2011 at 10:34 PM, David Rientjes <rien...@google.com> wrote:
> > [ Should probably be separated out into two patches, one for the
> >   memory hotplug locking addition and one for the slub fix, both
> >   should be pushed during this merge window. If so, a comment
> >   describing the new semantics of lock_memory_hotplug() to protect
> >   data structures that may be modified in hotplug notifier
> >   callbacks would be appreciated. ]
>
> Agreed.
Kame-san: Could you do the hotplug piece and the description of the use
of the memory hotplug locks to protect extensions of data structures for
new nodes?
New patch that just covers the slub changes.
Subject: slub: Avoid use of slub_lock in show_slab_objects()
The purpose of the locking is to prevent removal and additions
of nodes when statistics are gathered for a slab cache. So we
need to avoid racing with memory hotplug functionality.
It is enough to take the memory hotplug locks there instead
of the slub_lock.
online_pages() currently does not acquire the memory_hotplug
lock. Another patch will be submitted by the memory hotplug
authors to take the memory hotplug lock and describe the
uses of the memory hotplug lock to protect against
adding and removal of nodes from non hotplug data structures.
Signed-off-by: Christoph Lameter <c...@linux.com>
---
mm/memory_hotplug.c | 17 +++++++++++------
mm/slub.c | 4 ++--
2 files changed, 13 insertions(+), 8 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-01-10 09:02:08.000000000 -0600
+++ linux-2.6/mm/slub.c 2011-01-10 10:10:46.000000000 -0600
@@ -3781,7 +3781,7 @@ static ssize_t show_slab_objects(struct
}
}
- down_read(&slub_lock);
+ lock_memory_hotplug();
#ifdef CONFIG_SLUB_DEBUG
if (flags & SO_ALL) {
for_each_node_state(node, N_NORMAL_MEMORY) {
@@ -3822,7 +3822,7 @@ static ssize_t show_slab_objects(struct
x += sprintf(buf + x, " N%d=%lu",
node, nodes[node]);
#endif
- up_read(&slub_lock);
+ unlock_memory_hotplug();
kfree(nodes);
return x + sprintf(buf + x, "\n");
}
--
> New patch that just covers the slub changes.
>
> Subject: slub: Avoid use of slub_lock in show_slab_objects()
>
> The purpose of the locking is to prevent removal and additions
> of nodes when statistics are gathered for a slab cache. So we
> need to avoid racing with memory hotplug functionality.
>
> It is enough to take the memory hotplug locks there instead
> of the slub_lock.
>
> online_pages() currently does not acquire the memory_hotplug
> lock. Another patch will be submitted by the memory hotplug
> authors to take the memory hotplug lock and describe the
> uses of the memory hotplug lock to protect against
> adding and removal of nodes from non hotplug data structures.
>
> Signed-off-by: Christoph Lameter <c...@linux.com>
Acked-by: David Rientjes <rien...@google.com>
Will do.
Thanks,
-Kame
Is this safe to be applied without the other hotplug parts?
> On 1/10/11 10:30 PM, David Rientjes wrote:
> > On Mon, 10 Jan 2011, Christoph Lameter wrote:
> >
> >> New patch that just covers the slub changes.
> >>
> >> Subject: slub: Avoid use of slub_lock in show_slab_objects()
> >>
> >> The purpose of the locking is to prevent removal and additions
> >> of nodes when statistics are gathered for a slab cache. So we
> >> need to avoid racing with memory hotplug functionality.
> >>
> >> It is enough to take the memory hotplug locks there instead
> >> of the slub_lock.
> >>
> >> online_pages() currently does not acquire the memory_hotplug
> >> lock. Another patch will be submitted by the memory hotplug
> >> authors to take the memory hotplug lock and describe the
> >> uses of the memory hotplug lock to protect against
> >> adding and removal of nodes from non hotplug data structures.
> >>
> >> Signed-off-by: Christoph Lameter<c...@linux.com>
> > Acked-by: David Rientjes<rien...@google.com>
>
> Is this safe to be applied without the other hotplug parts?
>
>
How about this?
==
From: KAMEZAWA Hiroyuki <kamezaw...@jp.fujitsu.com>
Currently, lock/unlock_memory_hotplug() is used around adding, removing,
and offlining pages to avoid races with hibernation. But the lock should
be held in online_pages(), too; the current usage is asymmetric.
There are cases where one has to avoid races between the memory hotplug
notifiers and local code, as well as between hotplug and hotplug.
This adds a generic solution for avoiding such races. From another point
of view, taking the lock here has no big impact: onlining pages tends to
be done by a udev script or the like against each memory section, one by
one. So it's better to take the lock here, too.
Signed-off-by: KAMEZAWA Hiroyuki <kamezaw...@jp.fujitsu.com>
---
include/linux/memory_hotplug.h | 6 ++++++
mm/memory_hotplug.c | 4 ++++
2 files changed, 10 insertions(+)
Index: linux-2.6.37.org/include/linux/memory_hotplug.h
===================================================================
--- linux-2.6.37.org.orig/include/linux/memory_hotplug.h
+++ linux-2.6.37.org/include/linux/memory_hotplug.h
@@ -161,6 +161,12 @@ extern void register_page_bootmem_info_n
extern void put_page_bootmem(struct page *page);
#endif
+/*
+ * Lock for memory hotplug guarantees 1) all callbacks for memory hotplug
+ * notifier will be called under this. 2) offline/online/add/remove memory
+ * will not run simultaneously.
+ */
+
void lock_memory_hotplug(void);
void unlock_memory_hotplug(void);
Index: linux-2.6.37.org/mm/memory_hotplug.c
===================================================================
--- linux-2.6.37.org.orig/mm/memory_hotplug.c
+++ linux-2.6.37.org/mm/memory_hotplug.c
@@ -407,6 +407,7 @@ int online_pages(unsigned long pfn, unsi
int ret;
struct memory_notify arg;
+ lock_memory_hotplug();
arg.start_pfn = pfn;
arg.nr_pages = nr_pages;
arg.status_change_nid = -1;
@@ -419,6 +420,7 @@ int online_pages(unsigned long pfn, unsi
ret = notifier_to_errno(ret);
if (ret) {
memory_notify(MEM_CANCEL_ONLINE, &arg);
+ unlock_memory_hotplug();
return ret;
}
/*
@@ -443,6 +445,7 @@ int online_pages(unsigned long pfn, unsi
printk(KERN_DEBUG "online_pages %lx at %lx failed\n",
nr_pages, pfn);
memory_notify(MEM_CANCEL_ONLINE, &arg);
+ unlock_memory_hotplug();
return ret;
}
@@ -467,6 +470,7 @@ int online_pages(unsigned long pfn, unsi
if (onlined_pages)
memory_notify(MEM_ONLINE, &arg);
+ unlock_memory_hotplug();
return 0;
}
--
> From: KAMEZAWA Hiroyuki <kamezaw...@jp.fujitsu.com>
>
> Currently, lock/unlock_memory_hotplug() is used around adding, removing,
> and offlining pages to avoid races with hibernation. But the lock should
> be held in online_pages(), too; the current usage is asymmetric.
>
> There are cases where one has to avoid races between the memory hotplug
> notifiers and local code, as well as between hotplug and hotplug.
> This adds a generic solution for avoiding such races. From another point
> of view, taking the lock here has no big impact: onlining pages tends to
> be done by a udev script or the like against each memory section, one by
> one.
>
> So it's better to take the lock here, too.
>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezaw...@jp.fujitsu.com>
I think the lock_memory_hotplug() call could have been placed after the
initialization of the struct memory_notify, but it's not really important.
Acked-by: David Rientjes <rien...@google.com>
This should be targeted for 2.6.38, both to avoid racing with memory
hot-remove and because the pending patch from Christoph in the slab tree
relies on it.
> > > New patch that just covers the slub changes.
> > >
> > > Subject: slub: Avoid use of slub_lock in show_slab_objects()
> > >
> > > The purpose of the locking is to prevent removal and additions
> > > of nodes when statistics are gathered for a slab cache. So we
> > > need to avoid racing with memory hotplug functionality.
> > >
> > > It is enough to take the memory hotplug locks there instead
> > > of the slub_lock.
> > >
> > > online_pages() currently does not acquire the memory_hotplug
> > > lock. Another patch will be submitted by the memory hotplug
> > > authors to take the memory hotplug lock and describe the
> > > uses of the memory hotplug lock to protect against
> > > adding and removal of nodes from non hotplug data structures.
> > >
> > > Signed-off-by: Christoph Lameter<c...@linux.com>
> > Acked-by: David Rientjes<rien...@google.com>
>
> Is this safe to be applied without the other hotplug parts?
>
It's safe, but it doesn't protect anything on its own, since it can still
race with memory hot-add and cause inconsistent information to be
displayed (we iterate over N_NORMAL_MEMORY several times in
show_slab_objects() and the memory hotplug code modifies it). I'm hoping
Andrew can push Kame's patch to add lock_memory_hotplug() to
online_pages() either during the merge window or during -rc1 (it has good
justification -- it can race with memory hot-remove), and this one can
then be pushed during the -rc series (to fix the lockdep warning).
I think the two patches should go in the same batch. Andrew, do you want
to pick up the slab patch as well or do you want me to pick up Kame's patch?
Pekka
I picked up both patches in slab.git in the 'slub/hotplug' branch and queued
them for linux-next. Andrew, just holler if you want me to send them to
you; otherwise I'll send them to Linus myself.
Pekka