During recovery, vmalloc() also nicely frees all of the memory that it
got up to the point of the failure. That is wonderful, but it also
quickly hides any issues. We have a much different situation if vmalloc()
repeatedly fails 10GB in to:
vmalloc(100 * 1<<30);
versus repeatedly failing 4096 bytes in to a:
vmalloc(8192);
This will print out messages that look like this:
[ 30.040774] bash: vmalloc failure allocating after 0 / 73728 bytes
As a side issue, I also noticed that ctl_ioctl() does vmalloc() based
solely on an unverified value passed in from userspace. Granted, it's
under CAP_SYS_ADMIN, but it still frightens me a bit.
multipathd: page allocation failure. order:0, mode:0xd2
Call Trace:
[c0000000f34ef570] [c000000000012d84] .show_stack+0x74/0x1c0 (unreliable)
[c0000000f34ef620] [c000000000159ed4] .__alloc_pages_nodemask+0x574/0x830
[c0000000f34ef7a0] [c00000000019306c] .alloc_pages_current+0x8c/0x110
[c0000000f34ef840] [c000000000183bdc] .__vmalloc_area_node+0x17c/0x220
[c0000000f34ef900] [d00000000132bb24] .copy_params+0x74/0xc0 [dm_mod]
[c0000000f34efad0] [d00000000132bcec] .ctl_ioctl+0x17c/0x2c0 [dm_mod]
[c0000000f34efb90] [d00000000132be48] .dm_ctl_ioctl+0x18/0x30 [dm_mod]
[c0000000f34efc00] [c0000000001c4ee4] .vfs_ioctl+0x54/0x140
[c0000000f34efc90] [c0000000001c5130] .do_vfs_ioctl+0x90/0x7c0
[c0000000f34efd80] [c0000000001c5914] .SyS_ioctl+0xb4/0xd0
[c0000000f34efe30] [c00000000000852c] syscall_exit+0x0/0x40
Mem-Info:
Node 0 DMA per-cpu:
..
Signed-off-by: Dave Hansen <da...@linux.vnet.ibm.com>
---
linux-2.6.git-dave/mm/vmalloc.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff -puN mm/vmalloc.c~vmalloc-warn mm/vmalloc.c
--- linux-2.6.git/mm/vmalloc.c~vmalloc-warn 2011-04-07 10:21:27.792401938 -0700
+++ linux-2.6.git-dave/mm/vmalloc.c 2011-04-07 10:21:27.800401934 -0700
@@ -1579,6 +1579,18 @@ static void *__vmalloc_area_node(struct
return area->addr;
fail:
+ if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit()) {
+ /*
+ * We probably did a show_mem() and a stack dump above
+ * inside of alloc_page*(). This is only so we can
+ * tell how big the vmalloc() really was. This will
+ * also not be exactly the same as what was passed
+ * to vmalloc() due to alignment and the guard page.
+ */
+ printk(KERN_WARNING "%s: vmalloc: allocation failure, "
+ "allocated %ld of %ld bytes\n", current->comm,
+ (area->nr_pages*PAGE_SIZE), area->size);
+ }
vfree(area->addr);
return NULL;
}
_
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majo...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
>
> I was tracking down a page allocation failure that ended up in vmalloc().
> Since vmalloc() uses 0-order pages, if somebody asks for an insane amount
> of memory, we'll still get a warning with "order:0" in it. That's not
> very useful.
>
> During recovery, vmalloc() also nicely frees all of the memory that it
> got up to the point of the failure. That is wonderful, but it also
> quickly hides any issues. We have a much different situation if vmalloc()
> repeatedly fails 10GB in to:
>
> vmalloc(100 * 1<<30);
>
> versus repeatedly failing 4096 bytes in to a:
>
> vmalloc(8192);
>
> This will print out messages that look like this:
>
> [ 30.040774] bash: vmalloc failure allocating after 0 / 73728 bytes
>
Won't it print "bash: vmalloc: allocation failure, allocated 0 of 73728
bytes" instead?
> As a side issue, I also noticed that ctl_ioctl() does vmalloc() based
> solely on an unverified value passed in from userspace. Granted, it's
> under CAP_SYS_ADMIN, but it still frightens me a bit.
>
> multipathd: page allocation failure. order:0, mode:0xd2
> Call Trace:
> [c0000000f34ef570] [c000000000012d84] .show_stack+0x74/0x1c0 (unreliable)
> [c0000000f34ef620] [c000000000159ed4] .__alloc_pages_nodemask+0x574/0x830
> [c0000000f34ef7a0] [c00000000019306c] .alloc_pages_current+0x8c/0x110
> [c0000000f34ef840] [c000000000183bdc] .__vmalloc_area_node+0x17c/0x220
> [c0000000f34ef900] [d00000000132bb24] .copy_params+0x74/0xc0 [dm_mod]
> [c0000000f34efad0] [d00000000132bcec] .ctl_ioctl+0x17c/0x2c0 [dm_mod]
> [c0000000f34efb90] [d00000000132be48] .dm_ctl_ioctl+0x18/0x30 [dm_mod]
> [c0000000f34efc00] [c0000000001c4ee4] .vfs_ioctl+0x54/0x140
> [c0000000f34efc90] [c0000000001c5130] .do_vfs_ioctl+0x90/0x7c0
> [c0000000f34efd80] [c0000000001c5914] .SyS_ioctl+0xb4/0xd0
> [c0000000f34efe30] [c00000000000852c] syscall_exit+0x0/0x40
> Mem-Info:
> Node 0 DMA per-cpu:
> ...
>
>
> Signed-off-by: Dave Hansen <da...@linux.vnet.ibm.com>
> ---
>
> linux-2.6.git-dave/mm/vmalloc.c | 12 ++++++++++++
> 1 file changed, 12 insertions(+)
>
> diff -puN mm/vmalloc.c~vmalloc-warn mm/vmalloc.c
> --- linux-2.6.git/mm/vmalloc.c~vmalloc-warn 2011-04-07 10:21:27.792401938 -0700
> +++ linux-2.6.git-dave/mm/vmalloc.c 2011-04-07 10:21:27.800401934 -0700
> @@ -1579,6 +1579,18 @@ static void *__vmalloc_area_node(struct
> return area->addr;
>
> fail:
> + if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit()) {
> + /*
> + * We probably did a show_mem() and a stack dump above
> + * inside of alloc_page*(). This is only so we can
> + * tell how big the vmalloc() really was. This will
> + * also not be exactly the same as what was passed
> + * to vmalloc() due to alignment and the guard page.
> + */
> + printk(KERN_WARNING "%s: vmalloc: allocation failure, "
> + "allocated %ld of %ld bytes\n", current->comm,
> + (area->nr_pages*PAGE_SIZE), area->size);
> + }
> vfree(area->addr);
> return NULL;
> }
Looks good.
Acked-by: David Rientjes <rien...@google.com>
__vmalloc_area_node() can also be moved into __vmalloc_node_range() since
that's its only caller if you're interested.
I agree with this in general, but have some nitpicks.
> @@ -1579,6 +1579,18 @@ static void *__vmalloc_area_node(struct
> return area->addr;
>
> fail:
> + if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit()) {
There is a comment above the declaration of printk_ratelimit:
/*
* Please don't use printk_ratelimit(), because it shares ratelimiting state
* with all other unrelated printk_ratelimit() callsites. Instead use
* printk_ratelimited() or plain old __ratelimit().
*/
I realize that the page allocator does it the same way, but I think it
should probably be fixed in there, rather than spread any further.
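The distinction being drawn here is that printk_ratelimit() uses one global ratelimit state shared by every caller, while __ratelimit() takes a per-callsite state. A minimal userspace sketch of that per-callsite pattern (hypothetical names and a simplified window algorithm, not the kernel implementation):

```c
#include <time.h>

/* Hypothetical userspace analogue of the kernel's struct ratelimit_state:
 * allow at most `burst` events per `interval` seconds, with the state
 * private to one call-site instead of shared globally. */
struct my_ratelimit_state {
	time_t interval;	/* window length in seconds */
	int    burst;		/* events allowed per window */
	time_t begin;		/* start of the current window */
	int    printed;		/* events emitted in this window */
};

#define MY_DEFINE_RATELIMIT_STATE(name, ival, bst) \
	struct my_ratelimit_state name = { (ival), (bst), 0, 0 }

/* Returns nonzero when the caller may emit its message. */
static int my_ratelimit(struct my_ratelimit_state *rs, time_t now)
{
	if (now - rs->begin >= rs->interval) {	/* window expired: reset */
		rs->begin = now;
		rs->printed = 0;
	}
	if (rs->printed >= rs->burst)		/* over budget: suppress */
		return 0;
	rs->printed++;
	return 1;
}
```

Because each caller declares its own state, a noisy subsystem exhausting its budget cannot suppress warnings from an unrelated one, which is exactly the problem with the shared printk_ratelimit() state.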
> + /*
> + * We probably did a show_mem() and a stack dump above
> + * inside of alloc_page*(). This is only so we can
> + * tell how big the vmalloc() really was. This will
> + * also not be exactly the same as what was passed
> + * to vmalloc() due to alignment and the guard page.
> + */
> + printk(KERN_WARNING "%s: vmalloc: allocation failure, "
> + "allocated %ld of %ld bytes\n", current->comm,
> + (area->nr_pages*PAGE_SIZE), area->size);
> + }
To me, this does not look like something that should just be appended
to the whole pile spewed out by dump_stack() and show_mem(). What do
you think about doing the page allocation with __GFP_NOWARN and have
the full report come from this place, with the line you introduce as
leader?
Hannes
You're the second person to mention this. I should have listened the
first time. :) I'll fix it up and repost.
> > + /*
> > + * We probably did a show_mem() and a stack dump above
> > + * inside of alloc_page*(). This is only so we can
> > + * tell how big the vmalloc() really was. This will
> > + * also not be exactly the same as what was passed
> > + * to vmalloc() due to alignment and the guard page.
> > + */
> > + printk(KERN_WARNING "%s: vmalloc: allocation failure, "
> > + "allocated %ld of %ld bytes\n", current->comm,
> > + (area->nr_pages*PAGE_SIZE), area->size);
> > + }
>
> To me, this does not look like something that should just be appended
> to the whole pile spewed out by dump_stack() and show_mem(). What do
> you think about doing the page allocation with __GFP_NOWARN and have
> the full report come from this place, with the line you introduce as
> leader?
That sounds fine to me.
-- Dave
During recovery, vmalloc() also nicely frees all of the memory that it
got up to the point of the failure. That is wonderful, but it also
quickly hides any issues. We have a much different situation if vmalloc()
repeatedly fails 10GB in to:
vmalloc(100 * 1<<30);
versus repeatedly failing 4096 bytes in to a:
vmalloc(8192);
This patch will print out messages that look like this:
[ 30.040774] bash: vmalloc failure allocating after 0 / 73728 bytes
As a side issue, I also noticed that ctl_ioctl() does vmalloc() based
solely on an unverified value passed in from userspace. Granted, it's
under CAP_SYS_ADMIN, but it still frightens me a bit.
Signed-off-by: Dave Hansen <da...@linux.vnet.ibm.com>
---
linux-2.6.git-dave/mm/vmalloc.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff -puN mm/vmalloc.c~vmalloc-warn mm/vmalloc.c
--- linux-2.6.git/mm/vmalloc.c~vmalloc-warn 2011-04-08 09:36:05.877020199 -0700
+++ linux-2.6.git-dave/mm/vmalloc.c 2011-04-08 09:38:00.373093593 -0700
@@ -1534,6 +1534,7 @@ static void *__vmalloc_node(unsigned lon
static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
pgprot_t prot, int node, void *caller)
{
+ int order = 0;
struct page **pages;
unsigned int nr_pages, array_size, i;
gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
@@ -1560,11 +1561,12 @@ static void *__vmalloc_area_node(struct
for (i = 0; i < area->nr_pages; i++) {
struct page *page;
+ gfp_t tmp_mask = gfp_mask | __GFP_NOWARN;
if (node < 0)
- page = alloc_page(gfp_mask);
+ page = alloc_page(tmp_mask);
else
- page = alloc_pages_node(node, gfp_mask, 0);
+ page = alloc_pages_node(node, tmp_mask, order);
if (unlikely(!page)) {
/* Successfully allocated i pages, free them in __vunmap() */
@@ -1579,6 +1581,9 @@ static void *__vmalloc_area_node(struct
return area->addr;
fail:
+ nopage_warning(gfp_mask, order, "vmalloc: allocation failure, "
+ "allocated %ld of %ld bytes\n",
+ (area->nr_pages*PAGE_SIZE), area->size);
vfree(area->addr);
return NULL;
}
_
But, I do think there's a lot of value in what
__alloc_pages_slowpath() does with its filtering and so
forth.
This patch creates a new function which other allocators
can call instead of relying on the internal page allocator
warnings. It also gives this function private rate-limiting
which separates it from other printk_ratelimit() users.
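The shape of the new function, as a userspace sketch (a hypothetical report_alloc_failure() writing into a buffer for easy inspection, where the kernel version prints with printk() and then dumps the stack and memory state): an optional caller-supplied leader line, followed by the common order/mode line every allocator failure wants.

```c
#include <stdarg.h>
#include <stdio.h>

/* Userspace analogue of the warn_alloc_failed() pattern: the caller may
 * pass a printf-style leader (or NULL to skip it); the common failure
 * line is appended unconditionally so every caller reports order and
 * mode the same way. */
static int report_alloc_failure(char *buf, size_t len, const char *comm,
				int order, unsigned mode,
				const char *fmt, ...)
{
	int n = 0;

	if (fmt) {			/* caller-specific leader, if any */
		va_list args;
		va_start(args, fmt);
		n += vsnprintf(buf + n, len - n, fmt, args);
		va_end(args);
	}
	/* common tail, emitted for every caller */
	n += snprintf(buf + n, len - n,
		      "%s: page allocation failure: order:%d, mode:0x%x\n",
		      comm, order, mode);
	return n;
}
```

Passing NULL for fmt reproduces what the page allocator itself does at its nopage: label; vmalloc supplies the "allocated X of Y bytes" leader.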
---
linux-2.6.git-dave/include/linux/mm.h | 2 +
linux-2.6.git-dave/mm/page_alloc.c | 65 +++++++++++++++++++++++-----------
2 files changed, 46 insertions(+), 21 deletions(-)
diff -puN include/linux/mm.h~break-out-alloc-failure-messages include/linux/mm.h
--- linux-2.6.git/include/linux/mm.h~break-out-alloc-failure-messages 2011-04-08 13:07:18.978332687 -0700
+++ linux-2.6.git-dave/include/linux/mm.h 2011-04-08 13:07:18.990332675 -0700
@@ -1365,6 +1365,8 @@ extern void si_meminfo(struct sysinfo *
extern void si_meminfo_node(struct sysinfo *val, int nid);
extern int after_bootmem;
+extern void nopage_warning(gfp_t gfp_mask, int order, const char *fmt, ...);
+
extern void setup_per_cpu_pageset(void);
extern void zone_pcp_update(struct zone *zone);
diff -puN mm/page_alloc.c~break-out-alloc-failure-messages mm/page_alloc.c
--- linux-2.6.git/mm/page_alloc.c~break-out-alloc-failure-messages 2011-04-08 13:07:18.982332683 -0700
+++ linux-2.6.git-dave/mm/page_alloc.c 2011-04-08 13:07:18.990332675 -0700
@@ -54,6 +54,7 @@
#include <trace/events/kmem.h>
#include <linux/ftrace_event.h>
#include <linux/memcontrol.h>
+#include <linux/ratelimit.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -1734,6 +1735,48 @@ static inline bool should_suppress_show_
return ret;
}
+static DEFINE_RATELIMIT_STATE(nopage_rs,
+ DEFAULT_RATELIMIT_INTERVAL,
+ DEFAULT_RATELIMIT_BURST);
+
+void nopage_warning(gfp_t gfp_mask, int order, const char *fmt, ...)
+{
+ va_list args;
+ int r;
+ unsigned int filter = SHOW_MEM_FILTER_NODES;
+ const gfp_t wait = gfp_mask & __GFP_WAIT;
+
+ if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
+ return;
+
+ /*
+ * This documents exceptions given to allocations in certain
+ * contexts that are allowed to allocate outside current's set
+ * of allowed nodes.
+ */
+ if (!(gfp_mask & __GFP_NOMEMALLOC))
+ if (test_thread_flag(TIF_MEMDIE) ||
+ (current->flags & (PF_MEMALLOC | PF_EXITING)))
+ filter &= ~SHOW_MEM_FILTER_NODES;
+ if (in_interrupt() || !wait)
+ filter &= ~SHOW_MEM_FILTER_NODES;
+
+ if (fmt) {
+ printk(KERN_WARNING);
+ va_start(args, fmt);
+ r = vprintk(fmt, args);
+ va_end(args);
+ }
+
+ printk(KERN_WARNING);
+ printk("%s: page allocation failure: order:%d, mode:0x%x\n",
+ current->comm, order, gfp_mask);
+
+ dump_stack();
+ if (!should_suppress_show_mem())
+ show_mem(filter);
+}
+
static inline int
should_alloc_retry(gfp_t gfp_mask, unsigned int order,
unsigned long pages_reclaimed)
@@ -2176,27 +2219,7 @@ rebalance:
}
nopage:
- if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit()) {
- unsigned int filter = SHOW_MEM_FILTER_NODES;
-
- /*
- * This documents exceptions given to allocations in certain
- * contexts that are allowed to allocate outside current's set
- * of allowed nodes.
- */
- if (!(gfp_mask & __GFP_NOMEMALLOC))
- if (test_thread_flag(TIF_MEMDIE) ||
- (current->flags & (PF_MEMALLOC | PF_EXITING)))
- filter &= ~SHOW_MEM_FILTER_NODES;
- if (in_interrupt() || !wait)
- filter &= ~SHOW_MEM_FILTER_NODES;
-
- pr_warning("%s: page allocation failure. order:%d, mode:0x%x\n",
- current->comm, order, gfp_mask);
- dump_stack();
- if (!should_suppress_show_mem())
- show_mem(filter);
- }
+ nopage_warning(gfp_mask, order, NULL);
return page;
got_pg:
if (kmemcheck_enabled)
I suggest a different name for this, something like warn_alloc_failure()
or such.
I guess this isn't general enough where it could be used in the oom killer
as well?
This shouldn't be here; it should have been printed already.
>
> I was tracking down a page allocation failure that ended up in vmalloc().
> Since vmalloc() uses 0-order pages, if somebody asks for an insane amount
> of memory, we'll still get a warning with "order:0" in it. That's not
> very useful.
>
> During recovery, vmalloc() also nicely frees all of the memory that it
> got up to the point of the failure. That is wonderful, but it also
> quickly hides any issues. We have a much different situation if vmalloc()
> repeatedly fails 10GB in to:
>
> vmalloc(100 * 1<<30);
>
> versus repeatedly failing 4096 bytes in to a:
>
> vmalloc(8192);
>
> This patch will print out messages that look like this:
>
> [ 30.040774] bash: vmalloc failure allocating after 0 / 73728 bytes
>
Either the changelog or the patch is still wrong because the format of
this string is inconsistent.
> As a side issue, I also noticed that ctl_ioctl() does vmalloc() based
> solely on an unverified value passed in from userspace. Granted, it's
> under CAP_SYS_ADMIN, but it still frightens me a bit.
>
> Signed-off-by: Dave Hansen <da...@linux.vnet.ibm.com>
> ---
>
> linux-2.6.git-dave/mm/vmalloc.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff -puN mm/vmalloc.c~vmalloc-warn mm/vmalloc.c
> --- linux-2.6.git/mm/vmalloc.c~vmalloc-warn 2011-04-08 09:36:05.877020199 -0700
> +++ linux-2.6.git-dave/mm/vmalloc.c 2011-04-08 09:38:00.373093593 -0700
> @@ -1534,6 +1534,7 @@ static void *__vmalloc_node(unsigned lon
> static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
> pgprot_t prot, int node, void *caller)
> {
> + int order = 0;
Unnecessary, we can continue to hardcode the 0, vmalloc isn't going to use
higher order allocs (it's there to avoid such things!).
> struct page **pages;
> unsigned int nr_pages, array_size, i;
> gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
> @@ -1560,11 +1561,12 @@ static void *__vmalloc_area_node(struct
>
> for (i = 0; i < area->nr_pages; i++) {
> struct page *page;
> + gfp_t tmp_mask = gfp_mask | __GFP_NOWARN;
I think it would be better to just do away with this as well and just
hardwire the __GFP_NOWARN directly into the two allocation calls.
>
> if (node < 0)
> - page = alloc_page(gfp_mask);
> + page = alloc_page(tmp_mask);
> else
> - page = alloc_pages_node(node, gfp_mask, 0);
> + page = alloc_pages_node(node, tmp_mask, order);
>
> if (unlikely(!page)) {
> /* Successfully allocated i pages, free them in __vunmap() */
> @@ -1579,6 +1581,9 @@ static void *__vmalloc_area_node(struct
> return area->addr;
>
> fail:
> + nopage_warning(gfp_mask, order, "vmalloc: allocation failure, "
> + "allocated %ld of %ld bytes\n",
> + (area->nr_pages*PAGE_SIZE), area->size);
> vfree(area->addr);
> return NULL;
> }
That works for me.
> I guess this isn't general enough where it could be used in the oom killer
> as well?
Nope, don't think so. I took a look at it, but it isn't horribly close
to this.
The "page allocation failure" might have been, if it was specified (it
isn't from the allocator), but order and mode haven't been. My thought
here is that _all_ allocator failures will want to output mode and gfp,
so it might as well be common code instead of making everybody specify
it.
-- Dave
Yeah, ya caught me. :)
> > diff -puN mm/vmalloc.c~vmalloc-warn mm/vmalloc.c
> > --- linux-2.6.git/mm/vmalloc.c~vmalloc-warn 2011-04-08 09:36:05.877020199 -0700
> > +++ linux-2.6.git-dave/mm/vmalloc.c 2011-04-08 09:38:00.373093593 -0700
> > @@ -1534,6 +1534,7 @@ static void *__vmalloc_node(unsigned lon
> > static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
> > pgprot_t prot, int node, void *caller)
> > {
> > + int order = 0;
>
> Unnecessary, we can continue to hardcode the 0, vmalloc isn't going to use
> higher order allocs (it's there to avoid such things!).
The only reason I did that was to keep the printk from looking like
this:
> > + nopage_warning(gfp_mask, 0, "vmalloc: allocation failure, "
> > + "allocated %ld of %ld bytes\n",
> > + (area->nr_pages*PAGE_SIZE), area->size);
The order is pretty darn obvious in the direct allocator calls, but I
liked having it named where it wasn't as obvious.
> > struct page **pages;
> > unsigned int nr_pages, array_size, i;
> > gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
> > @@ -1560,11 +1561,12 @@ static void *__vmalloc_area_node(struct
> >
> > for (i = 0; i < area->nr_pages; i++) {
> > struct page *page;
> > + gfp_t tmp_mask = gfp_mask | __GFP_NOWARN;
>
> I think it would be better to just do away with this as well and just
> hardwire the __GFP_NOWARN directly into the two allocation calls.
I did it because hard-wiring it takes the alloc_pages_node() one over 80
columns. I figured if I was going to add a line, I might as well keep
it pretty.
-- Dave
The core problem is this: I want two lines of output: one for the
order/mode gunk, and one for the user-specified message.
If we have the user pass in a string for the printk() level, we're stuck
doing what I have here. If we have them _prepend_ it to the "fmt"
string, then it's harder to figure out below. I guess we could fish in
the string for it.
> > + printk(KERN_WARNING);
> > + printk("%s: page allocation failure: order:%d, mode:0x%x\n",
> > + current->comm, order, gfp_mask);
>
> Even more so here. Why not pr_warning instead of two non-atomic calls
> to printk?
It's a relic of an hour ago when I tried passing in the printk() level
to the function as a string. It can go away now. :)
-- Dave
> On Fri, 2011-04-08 at 22:54 +0200, Michał Nazarewicz wrote:
>> Could we make the "printk(KERN_WARNING);" go away and require caller
>> to specify level?
On Fri, 08 Apr 2011 23:02:02 +0200, Dave Hansen wrote:
> The core problem is this: I want two lines of output: one for the
> order/mode gunk, and one for the user-specified message.
>
> If we have the user pass in a string for the printk() level, we're stuck
> doing what I have here. If we have them _prepend_ it to the "fmt"
> string, then it's harder to figure out below. I guess we could fish in
> the string for it.
This is a bit unfortunate, but that's what I was worried about anyway. I guess
creating a macro which automatically prepends format with KERN_WARNING
would solve the issue but that's probably not the most elegant solution.
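The macro approach being described can be sketched in userspace like this (a hypothetical pr_warning()-style macro logging into a buffer; the "<4>" string stands in for the kernel's KERN_WARNING marker, and the names here are illustrative, not kernel API). It relies on C's adjacent string-literal concatenation to paste the level onto the format at compile time, so level and message reach the logger in one atomic call:

```c
#include <stdio.h>

static char logbuf[128];	/* stand-in for the kernel log */

#define MY_KERN_WARNING "<4>"

/* Prepend the level by string concatenation; ##__VA_ARGS__ is the
 * GNU/C2x extension that also tolerates zero variadic arguments. */
#define my_pr_warning(fmt, ...) \
	snprintf(logbuf, sizeof(logbuf), MY_KERN_WARNING fmt, ##__VA_ARGS__)
```

The upside is atomicity and no level argument to thread through; the downside, as noted above, is that a function like warn_alloc_failed() receiving a plain fmt pointer can no longer do the concatenation and would have to fish the level out of the string at runtime.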
--
Best regards, _ _
o. | Liege of Serenely Enlightened Majesty of o' \,=./ `o
.o | Computer Science, Michal "mina86" Nazarewicz (o o)
ooo +-----<email/xmpp: mnaza...@google.com>-----ooO--(_)--Ooo--
But, I do think there's a lot of value in what
__alloc_pages_slowpath() does with its filtering and so
forth.
This patch creates a new function which other allocators
can call instead of relying on the internal page allocator
warnings. It also gives this function private rate-limiting
which separates it from other printk_ratelimit() users.
Signed-off-by: Dave Hansen <da...@linux.vnet.ibm.com>
---
linux-2.6.git-dave/include/linux/mm.h | 2 +
linux-2.6.git-dave/mm/page_alloc.c | 63 ++++++++++++++++++++++------------
2 files changed, 44 insertions(+), 21 deletions(-)
diff -puN include/linux/mm.h~break-out-alloc-failure-messages include/linux/mm.h
--- linux-2.6.git/include/linux/mm.h~break-out-alloc-failure-messages 2011-04-15 08:44:06.911445625 -0700
+++ linux-2.6.git-dave/include/linux/mm.h 2011-04-15 08:45:10.087416551 -0700
@@ -1365,6 +1365,8 @@ extern void si_meminfo(struct sysinfo *
extern void si_meminfo_node(struct sysinfo *val, int nid);
extern int after_bootmem;
+extern void warn_alloc_failed(gfp_t gfp_mask, int order, const char *fmt, ...);
+
extern void setup_per_cpu_pageset(void);
extern void zone_pcp_update(struct zone *zone);
diff -puN mm/page_alloc.c~break-out-alloc-failure-messages mm/page_alloc.c
--- linux-2.6.git/mm/page_alloc.c~break-out-alloc-failure-messages 2011-04-15 08:44:06.915445623 -0700
+++ linux-2.6.git-dave/mm/page_alloc.c 2011-04-15 08:48:34.255321834 -0700
@@ -54,6 +54,7 @@
#include <trace/events/kmem.h>
#include <linux/ftrace_event.h>
#include <linux/memcontrol.h>
+#include <linux/ratelimit.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -1734,6 +1735,46 @@ static inline bool should_suppress_show_
return ret;
}
+static DEFINE_RATELIMIT_STATE(nopage_rs,
+ DEFAULT_RATELIMIT_INTERVAL,
+ DEFAULT_RATELIMIT_BURST);
+
+void warn_alloc_failed(gfp_t gfp_mask, int order, const char *fmt, ...)
+{
+ va_list args;
+ unsigned int filter = SHOW_MEM_FILTER_NODES;
+ const gfp_t wait = gfp_mask & __GFP_WAIT;
+
+ if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
+ return;
+
+ /*
+ * This documents exceptions given to allocations in certain
+ * contexts that are allowed to allocate outside current's set
+ * of allowed nodes.
+ */
+ if (!(gfp_mask & __GFP_NOMEMALLOC))
+ if (test_thread_flag(TIF_MEMDIE) ||
+ (current->flags & (PF_MEMALLOC | PF_EXITING)))
+ filter &= ~SHOW_MEM_FILTER_NODES;
+ if (in_interrupt() || !wait)
+ filter &= ~SHOW_MEM_FILTER_NODES;
+
+ if (fmt) {
+ printk(KERN_WARNING);
+ va_start(args, fmt);
+ vprintk(fmt, args);
+ va_end(args);
+ }
+
+ printk(KERN_WARNING "%s: page allocation failure: order:%d, mode:0x%x\n",
+ current->comm, order, gfp_mask);
+
+ dump_stack();
+ if (!should_suppress_show_mem())
+ show_mem(filter);
+}
+
static inline int
should_alloc_retry(gfp_t gfp_mask, unsigned int order,
unsigned long pages_reclaimed)
@@ -2176,27 +2217,7 @@ rebalance:
}
nopage:
- if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit()) {
- unsigned int filter = SHOW_MEM_FILTER_NODES;
-
- /*
- * This documents exceptions given to allocations in certain
- * contexts that are allowed to allocate outside current's set
- * of allowed nodes.
- */
- if (!(gfp_mask & __GFP_NOMEMALLOC))
- if (test_thread_flag(TIF_MEMDIE) ||
- (current->flags & (PF_MEMALLOC | PF_EXITING)))
- filter &= ~SHOW_MEM_FILTER_NODES;
- if (in_interrupt() || !wait)
- filter &= ~SHOW_MEM_FILTER_NODES;
-
- pr_warning("%s: page allocation failure. order:%d, mode:0x%x\n",
- current->comm, order, gfp_mask);
- dump_stack();
- if (!should_suppress_show_mem())
- show_mem(filter);
- }
+ warn_alloc_failed(gfp_mask, order, NULL);
return page;
got_pg:
if (kmemcheck_enabled)
_
During recovery, vmalloc() also nicely frees all of the memory that it
got up to the point of the failure. That is wonderful, but it also
quickly hides any issues. We have a much different situation if vmalloc()
repeatedly fails 10GB in to:
vmalloc(100 * 1<<30);
versus repeatedly failing 4096 bytes in to a:
vmalloc(8192);
This patch will print out messages that look like this:
[ 68.123503] vmalloc: allocation failure, allocated 6680576 of 13426688 bytes
[ 68.124218] bash: page allocation failure: order:0, mode:0xd2
[ 68.124811] Pid: 3770, comm: bash Not tainted 2.6.39-rc3-00082-g85f2e68-dirty #333
[ 68.125579] Call Trace:
[ 68.125853] [<ffffffff810f6da6>] warn_alloc_failed+0x146/0x170
[ 68.126464] [<ffffffff8107e05c>] ? printk+0x6c/0x70
[ 68.126791] [<ffffffff8112b5d4>] ? alloc_pages_current+0x94/0xe0
[ 68.127661] [<ffffffff8111ed37>] __vmalloc_node_range+0x237/0x290
..
The 'order' variable is added for clarity when calling
warn_alloc_failed() to avoid having an unexplained '0' as an argument.
The 'tmp_mask' is there to keep the alloc_pages_node() looking sane.
Adding __GFP_NOWARN is done because we now have our own, full error
message in vmalloc code.
As a side issue, I also noticed that ctl_ioctl() does vmalloc() based
solely on an unverified value passed in from userspace. Granted, it's
under CAP_SYS_ADMIN, but it still frightens me a bit.
Signed-off-by: Dave Hansen <da...@linux.vnet.ibm.com>
---
linux-2.6.git-dave/mm/vmalloc.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff -puN mm/vmalloc.c~vmalloc-warn mm/vmalloc.c
--- linux-2.6.git/mm/vmalloc.c~vmalloc-warn 2011-04-15 08:49:06.823306620 -0700
+++ linux-2.6.git-dave/mm/vmalloc.c 2011-04-15 09:20:17.926460283 -0700
@@ -1534,6 +1534,7 @@ static void *__vmalloc_node(unsigned lon
static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
pgprot_t prot, int node, void *caller)
{
+ int order = 0;
struct page **pages;
unsigned int nr_pages, array_size, i;
gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
@@ -1560,11 +1561,12 @@ static void *__vmalloc_area_node(struct
for (i = 0; i < area->nr_pages; i++) {
struct page *page;
+ gfp_t tmp_mask = gfp_mask | __GFP_NOWARN;
if (node < 0)
- page = alloc_page(gfp_mask);
+ page = alloc_page(tmp_mask);
else
- page = alloc_pages_node(node, gfp_mask, 0);
+ page = alloc_pages_node(node, tmp_mask, order);
if (unlikely(!page)) {
/* Successfully allocated i pages, free them in __vunmap() */
@@ -1579,6 +1581,9 @@ static void *__vmalloc_area_node(struct
return area->addr;
fail:
+ warn_alloc_failed(gfp_mask, order, "vmalloc: allocation failure, "
+ "allocated %ld of %ld bytes\n",
+ (area->nr_pages*PAGE_SIZE), area->size);
vfree(area->addr);
return NULL;
Could we make that const?
> struct page **pages;
> unsigned int nr_pages, array_size, i;
> gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
> @@ -1560,11 +1561,12 @@ static void *__vmalloc_area_node(struct
> for (i = 0; i < area->nr_pages; i++) {
> struct page *page;
> + gfp_t tmp_mask = gfp_mask | __GFP_NOWARN;
> if (node < 0)
> - page = alloc_page(gfp_mask);
> + page = alloc_page(tmp_mask);
> else
> - page = alloc_pages_node(node, gfp_mask, 0);
> + page = alloc_pages_node(node, tmp_mask, order);
so it'll be more visible that we are passing 0 here.
> if (unlikely(!page)) {
> /* Successfully allocated i pages, free them in __vunmap() */
> @@ -1579,6 +1581,9 @@ static void *__vmalloc_area_node(struct
> return area->addr;
> fail:
> + warn_alloc_failed(gfp_mask, order, "vmalloc: allocation failure, "
> + "allocated %ld of %ld bytes\n",
> + (area->nr_pages*PAGE_SIZE), area->size);
> vfree(area->addr);
> return NULL;
> }
> _
>
Sure. Here's a replacement patch. Compiles and boots for me.
--
vmalloc(100 * 1<<30);
vmalloc(8192);
diff -puN mm/vmalloc.c~vmalloc-warn mm/vmalloc.c
--- linux-2.6.git/mm/vmalloc.c~vmalloc-warn 2011-04-15 10:39:05.928793559 -0700
+++ linux-2.6.git-dave/mm/vmalloc.c 2011-04-15 10:39:18.716789177 -0700
@@ -1534,6 +1534,7 @@ static void *__vmalloc_node(unsigned lon
static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
pgprot_t prot, int node, void *caller)
{
+ const int order = 0;
struct page **pages;
unsigned int nr_pages, array_size, i;
gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
@@ -1560,11 +1561,12 @@ static void *__vmalloc_area_node(struct
for (i = 0; i < area->nr_pages; i++) {
struct page *page;
+ gfp_t tmp_mask = gfp_mask | __GFP_NOWARN;
if (node < 0)
- page = alloc_page(gfp_mask);
+ page = alloc_page(tmp_mask);
else
- page = alloc_pages_node(node, gfp_mask, 0);
+ page = alloc_pages_node(node, tmp_mask, order);
if (unlikely(!page)) {
/* Successfully allocated i pages, free them in __vunmap() */
@@ -1579,6 +1581,9 @@ static void *__vmalloc_area_node(struct
return area->addr;
fail:
+ warn_alloc_failed(gfp_mask, order, "vmalloc: allocation failure, "
+ "allocated %ld of %ld bytes\n",
+ (area->nr_pages*PAGE_SIZE), area->size);
vfree(area->addr);
return NULL;
}
_
-- Dave
"wait" is unnecessary. You didn't do "const gfp_t nowarn = gfp_mask &
__GFP_NOWARN;" for the same reason.
> + if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
> + return;
> +
> + /*
> + * This documents exceptions given to allocations in certain
> + * contexts that are allowed to allocate outside current's set
> + * of allowed nodes.
> + */
> + if (!(gfp_mask & __GFP_NOMEMALLOC))
> + if (test_thread_flag(TIF_MEMDIE) ||
> + (current->flags & (PF_MEMALLOC | PF_EXITING)))
> + filter &= ~SHOW_MEM_FILTER_NODES;
> + if (in_interrupt() || !wait)
> + filter &= ~SHOW_MEM_FILTER_NODES;
> +
> + if (fmt) {
> + printk(KERN_WARNING);
> + va_start(args, fmt);
> + vprintk(fmt, args);
> + va_end(args);
> + }
> +
> + printk(KERN_WARNING "%s: page allocation failure: order:%d, mode:0x%x\n",
> + current->comm, order, gfp_mask);
pr_warning()?
current->comm should always be printed with get_task_comm() to avoid
racing with /proc/pid/comm. Since this function can be called potentially
deep in the stack, you may need to serialize this with a
statically-allocated buffer.
> diff -puN mm/vmalloc.c~vmalloc-warn mm/vmalloc.c
Sorry, I still don't understand why this isn't just a three-liner patch to
call warn_alloc_failed(). I don't see the benefit of the "order" or
"tmp_mask" variables at all, they'll just be removed next time someone
goes down the mm/* directory and looks for variables that are used only
once or are unchanged as a cleanup.
This line is just a copy from the __alloc_pages_slowpath() one. I guess
we only use it once, so I've got no problem killing it.
OK, I'll change it back.
> current->comm should always be printed with get_task_comm() to avoid
> racing with /proc/pid/comm. Since this function can be called potentially
> deep in the stack, you may need to serialize this with a
> statically-allocated buffer.
This code was already in page_alloc.c. I'm simply breaking it out here
trying to keep the changes down to what is needed minimally to move the
code. Correcting this preexisting problem sounds like a great follow-on
patch.
-- Dave
Without the "order" variable, we have:
warn_alloc_failed(gfp_mask, 0, "vmalloc: allocation failure, "
"allocated %ld of %ld bytes\n",
(area->nr_pages*PAGE_SIZE), area->size);
I *HATE* those with a passion. What is the '0' _doing_? Is it for "0
pages", "do not print", "_do_ print"? There's no way to tell without
going and finding warn_alloc_failed()'s definition.
With 'order' in there, the code self-documents, at least from the
caller's side. It makes it 100% clear that the "0" being passed to the
allocators is the same as the one passed to the warning; it draws a
link between the allocations and the allocation error message:
warn_alloc_failed(gfp_mask, order, "vmalloc: allocation failure, "
"allocated %ld of %ld bytes\n",
(area->nr_pages*PAGE_SIZE), area->size);
As for the 'tmp_mask' business. Right now we have:
for (i = 0; i < area->nr_pages; i++) {
struct page *page;
+ gfp_t tmp_mask = gfp_mask | __GFP_NOWARN;
if (node < 0)
- page = alloc_page(gfp_mask);
+ page = alloc_page(tmp_mask);
else
- page = alloc_pages_node(node, gfp_mask, 0);
+ page = alloc_pages_node(node, tmp_mask, order);
The alternative is this:
for (i = 0; i < area->nr_pages; i++) {
struct page *page;
if (node < 0)
- page = alloc_page(gfp_mask);
+ page = alloc_page(gfp_mask | __GFP_NOWARN);
else
- page = alloc_pages_node(node, gfp_mask, 0);
+ page = alloc_pages_node(node, gfp_mask | __GFP_NOWARN,
+ order);
I can go look, but I bet the compiler compiles both down to the same thing.
Plus, they're the same number of lines in the end. I know which one
appeals to me visually.
I think we're pretty deep in personal preference territory here. If I
hear a consensus that folks like it one way over another, I'm happy to
change it.
-- Dave
It shouldn't be a follow-on patch since you're introducing a new feature
here (vmalloc allocation failure warnings) and what I'm identifying is a
race in the access to current->comm. A bug fix for a race should always
precede a feature that touches the same code.
There's two options to fixing the race:
- provide a statically-allocated buffer to use for get_task_comm() and
copy current->comm over before printing it, or
- take task_lock(current) to protect against /proc/pid/comm.
The latter probably isn't safe because we could potentially already be
holding task_lock(current) during a GFP_ATOMIC page allocation.
Dude. Seriously. Glass house! a63d83f4
I'll go look into it, though.
-- Dave
I'm not sure get_task_comm() is suitable, either. It takes the task
lock:
char *get_task_comm(char *buf, struct task_struct *tsk)
{
/* buf must be at least sizeof(tsk->comm) in size */
task_lock(tsk);
strncpy(buf, tsk->comm, sizeof(tsk->comm));
task_unlock(tsk);
return buf;
}
-- Dave
So, what's the race here? kmemleak.c says?
/*
* There is a small chance of a race with set_task_comm(),
* however using get_task_comm() here may cause locking
* dependency issues with current->alloc_lock. In the worst
* case, the command line is not correct.
*/
strncpy(object->comm, current->comm, sizeof(object->comm));
We're trying to make sure we don't print out a partially updated
tsk->comm? Or is there a bigger issue here, like potential oopses or
kernel information leaks?
1. We require that no memory allocator ever holds the task lock for the
current task, and we audit all the existing GFP_ATOMIC users in the
kernel to ensure they're not doing it now. In the case of a problem,
we end up with a hung kernel while trying to get a message out to the
console.
2. We remove current->comm from the printk(), and deal with the
information loss.
3. We live with corrupted output, like the other ~400 in-kernel users of
->comm do. (I'm assuming that very few of them hold the task lock).
In the case of a race, we get junk on the console, but an otherwise
fine bug report (the way it is now).
4. We come up with some way to print out current->comm, without holding
any task locks. We could do this by copying it somewhere safe on
each context switch. Could probably also do it with RCU.
There's also a very, very odd comment in fs/exec.c:
/*
* Threads may access current->comm without holding
* the task lock, so write the string carefully.
* Readers without a lock may see incomplete new
* names but are safe from non-terminating string reads.
*/
-- Dave
The rule is,
1) writing comm
need task_lock
2) read _another_ thread's comm
need task_lock
3) read own comm
no need task_lock
That's the reason why oom-kill.c needs task_lock while a lot of other
places don't. I agree this is very strange; it's purely for historical
reasons.
The comment in set_task_comm() explains the race against (3).
Thanks.
--
This originally started as a simple patch to give vmalloc()
some more verbose output on failure on top of the plain
page allocator messages. Johannes suggested that it might
be nicer to lead with the vmalloc() info _before_ the page
allocator messages.
But, I do think there's a lot of value in what
__alloc_pages_slowpath() does with its filtering and so
forth.
This patch creates a new function which other allocators
can call instead of relying on the internal page allocator
warnings. It also gives this function private rate-limiting
which separates it from other printk_ratelimit() users.
Signed-off-by: Dave Hansen <da...@linux.vnet.ibm.com>
---
linux-2.6.git-dave/include/linux/mm.h | 2 +
linux-2.6.git-dave/mm/page_alloc.c | 62 ++++++++++++++++++++++------------
2 files changed, 43 insertions(+), 21 deletions(-)
diff -puN include/linux/mm.h~break-out-alloc-failure-messages include/linux/mm.h
--- linux-2.6.git/include/linux/mm.h~break-out-alloc-failure-messages 2011-04-18 14:59:51.278529173 -0700
+++ linux-2.6.git-dave/include/linux/mm.h 2011-04-18 14:59:51.290529171 -0700
@@ -1365,6 +1365,8 @@ extern void si_meminfo(struct sysinfo *
extern void si_meminfo_node(struct sysinfo *val, int nid);
extern int after_bootmem;
+extern void warn_alloc_failed(gfp_t gfp_mask, int order, const char *fmt, ...);
+
extern void setup_per_cpu_pageset(void);
extern void zone_pcp_update(struct zone *zone);
diff -puN mm/page_alloc.c~break-out-alloc-failure-messages mm/page_alloc.c
--- linux-2.6.git/mm/page_alloc.c~break-out-alloc-failure-messages 2011-04-18 14:59:51.282529173 -0700
+++ linux-2.6.git-dave/mm/page_alloc.c 2011-04-18 14:59:51.294529170 -0700
@@ -54,6 +54,7 @@
#include <trace/events/kmem.h>
#include <linux/ftrace_event.h>
#include <linux/memcontrol.h>
+#include <linux/ratelimit.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -1734,6 +1735,45 @@ static inline bool should_suppress_show_
return ret;
}
+static DEFINE_RATELIMIT_STATE(nopage_rs,
+ DEFAULT_RATELIMIT_INTERVAL,
+ DEFAULT_RATELIMIT_BURST);
+
+void warn_alloc_failed(gfp_t gfp_mask, int order, const char *fmt, ...)
+{
+ va_list args;
+ unsigned int filter = SHOW_MEM_FILTER_NODES;
+
+ if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs))
+ return;
+
+ /*
+ * This documents exceptions given to allocations in certain
+ * contexts that are allowed to allocate outside current's set
+ * of allowed nodes.
+ */
+ if (!(gfp_mask & __GFP_NOMEMALLOC))
+ if (test_thread_flag(TIF_MEMDIE) ||
+ (current->flags & (PF_MEMALLOC | PF_EXITING)))
+ filter &= ~SHOW_MEM_FILTER_NODES;
+ if (in_interrupt() || !(gfp_mask & __GFP_WAIT))
+ filter &= ~SHOW_MEM_FILTER_NODES;
+
+ if (fmt) {
+ printk(KERN_WARNING);
+ va_start(args, fmt);
+ vprintk(fmt, args);
+ va_end(args);
+ }
+
+ pr_warning("%s: page allocation failure: order:%d, mode:0x%x\n",
+ current->comm, order, gfp_mask);
+
+ dump_stack();
+ if (!should_suppress_show_mem())
+ show_mem(filter);
+}
+
static inline int
should_alloc_retry(gfp_t gfp_mask, unsigned int order,
unsigned long pages_reclaimed)
@@ -2176,27 +2216,7 @@ rebalance:
_
--
I was tracking down a page allocation failure that ended up in vmalloc().
Since vmalloc() uses 0-order pages, if somebody asks for an insane amount
of memory, we'll still get a warning with "order:0" in it. That's not
very useful.
During recovery, vmalloc() also nicely frees all of the memory that it
got up to the point of the failure. That is wonderful, but it also
quickly hides any issues. We have a much different situation if vmalloc()
repeatedly fails 10GB into a:
vmalloc(100 * 1<<30);
versus repeatedly failing 4096 bytes into a:
vmalloc(8192);
This patch will print out messages that look like this:
[ 68.123503] vmalloc: allocation failure, allocated 6680576 of 13426688 bytes
[ 68.124218] bash: page allocation failure: order:0, mode:0xd2
[ 68.124811] Pid: 3770, comm: bash Not tainted 2.6.39-rc3-00082-g85f2e68-dirty #333
[ 68.125579] Call Trace:
[ 68.125853] [<ffffffff810f6da6>] warn_alloc_failed+0x146/0x170
[ 68.126464] [<ffffffff8107e05c>] ? printk+0x6c/0x70
[ 68.126791] [<ffffffff8112b5d4>] ? alloc_pages_current+0x94/0xe0
[ 68.127661] [<ffffffff8111ed37>] __vmalloc_node_range+0x237/0x290
..
The 'order' variable is added for clarity when calling
warn_alloc_failed() to avoid having an unexplained '0' as an argument.
The 'tmp_mask' is because adding an open-coded '| __GFP_NOWARN' would
take us over 80 columns for the alloc_pages_node() call. If we are
going to add a line, it might as well be one that makes the sucker
easier to read.
As a side issue, I also noticed that ctl_ioctl() does vmalloc() based
solely on an unverified value passed in from userspace. Granted, it's
under CAP_SYS_ADMIN, but it still frightens me a bit.
Signed-off-by: Dave Hansen <da...@linux.vnet.ibm.com>
---
linux-2.6.git-dave/mm/vmalloc.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff -puN mm/vmalloc.c~vmalloc-warn mm/vmalloc.c
--- linux-2.6.git/mm/vmalloc.c~vmalloc-warn 2011-04-18 15:03:35.658506887 -0700
+++ linux-2.6.git-dave/mm/vmalloc.c 2011-04-18 15:04:48.762499842 -0700
@@ -1534,6 +1534,7 @@ static void *__vmalloc_node(unsigned lon
static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
pgprot_t prot, int node, void *caller)
{
+ const int order = 0;
struct page **pages;
unsigned int nr_pages, array_size, i;
gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
@@ -1560,11 +1561,12 @@ static void *__vmalloc_area_node(struct
for (i = 0; i < area->nr_pages; i++) {
struct page *page;
+ gfp_t tmp_mask = gfp_mask | __GFP_NOWARN;
if (node < 0)
- page = alloc_page(gfp_mask);
+ page = alloc_page(tmp_mask);
else
- page = alloc_pages_node(node, gfp_mask, 0);
+ page = alloc_pages_node(node, tmp_mask, order);
if (unlikely(!page)) {
/* Successfully allocated i pages, free them in __vunmap() */
@@ -1579,6 +1581,9 @@ static void *__vmalloc_area_node(struct
return area->addr;
fail:
+ warn_alloc_failed(gfp_mask, order, "vmalloc: allocation failure, "
+ "allocated %ld of %ld bytes\n",
+ (area->nr_pages*PAGE_SIZE), area->size);
vfree(area->addr);
return NULL;
}
> The rule is,
>
> 1) writing comm
> need task_lock
> 2) read _another_ thread's comm
> need task_lock
> 3) read own comm
> no need task_lock
>
That was true a while ago, but you now need to protect every thread's
->comm with get_task_comm() or ensuring task_lock() is held to protect
against /proc/pid/comm which can change other thread's ->comm. That was
different before when prctl(PR_SET_NAME) would only operate on current, so
no lock was needed when reading current->comm.
> > It shouldn't be a follow-on patch since you're introducing a new feature
> > here (vmalloc allocation failure warnings) and what I'm identifying is a
> > race in the access to current->comm. A bug fix for a race should always
> > precede a feature that touches the same code.
>
> Dude. Seriously. Glass house! a63d83f4
>
Not sure what you're implying here. The commit you've identified is the
oom killer rewrite and the oom killer is very specific about making sure
to always hold task_lock() whenever dereferencing ->comm, even for
current, to guard against /proc/pid/comm or prctl(). The oom killer is
different from your usecase, however, because we can always take
task_lock(current) in the oom killer because it's in a blockable context,
whereas page allocation warnings can occur in a superset.
Right. /proc/pid/comm is evil. We have to fix it; otherwise we need to
change every current->comm user, and there are a lot of them!
Everybody still goes through set_task_comm() to _set_ it, though. That
means that the worst case scenario that we get is output truncated
(possibly to nothing). We already have at least one existing user in
mm/ (kmemleak) that thinks this is OK. I'd tend to err in the direction
of taking a truncated or empty task name over possibly locking up the
system.
There are also plenty of instances of current->comm going into the
kernel these days. I count 18 added since 2.6.37.
As for a long-term fix, locks probably aren't the answer. Would
something like this completely untested patch work? It would have the
added bonus that it keeps tsk->comm users working for the moment. We
could eventually add an rcu_read_lock()-annotated access function.
---
linux-2.6.git-dave/fs/exec.c | 22 +++++++++++++++-------
linux-2.6.git-dave/include/linux/init_task.h | 3 ++-
linux-2.6.git-dave/include/linux/sched.h | 3 ++-
3 files changed, 19 insertions(+), 9 deletions(-)
diff -puN mm/page_alloc.c~tsk_comm mm/page_alloc.c
diff -puN include/linux/sched.h~tsk_comm include/linux/sched.h
--- linux-2.6.git/include/linux/sched.h~tsk_comm 2011-04-19 18:23:58.435013635 -0700
+++ linux-2.6.git-dave/include/linux/sched.h 2011-04-19 18:24:44.651034028 -0700
@@ -1334,10 +1334,11 @@ struct task_struct {
* credentials (COW) */
struct cred *replacement_session_keyring; /* for KEYCTL_SESSION_TO_PARENT */
-	char comm[TASK_COMM_LEN];	/* executable name excluding path
+	char comm_buf[TASK_COMM_LEN];	/* executable name excluding path
 					   - access with [gs]et_task_comm (which lock
 					     it with task_lock())
 					   - initialized normally by setup_new_exec */
+	char __rcu *comm;
/* file system info */
int link_count, total_link_count;
#ifdef CONFIG_SYSVIPC
diff -puN include/linux/init_task.h~tsk_comm include/linux/init_task.h
--- linux-2.6.git/include/linux/init_task.h~tsk_comm 2011-04-19 18:24:48.703035798 -0700
+++ linux-2.6.git-dave/include/linux/init_task.h 2011-04-19 18:25:22.147050279 -0700
@@ -161,7 +161,8 @@ extern struct cred init_cred;
.group_leader = &tsk, \
RCU_INIT_POINTER(.real_cred, &init_cred), \
RCU_INIT_POINTER(.cred, &init_cred), \
- .comm = "swapper", \
+ .comm_buf = "swapper", \
+ .comm = &tsk.comm_buf, \
.thread = INIT_THREAD, \
.fs = &init_fs, \
.files = &init_files, \
diff -puN fs/exec.c~tsk_comm fs/exec.c
--- linux-2.6.git/fs/exec.c~tsk_comm 2011-04-19 18:25:32.283054625 -0700
+++ linux-2.6.git-dave/fs/exec.c 2011-04-19 18:37:47.991485880 -0700
@@ -1007,17 +1007,25 @@ char *get_task_comm(char *buf, struct ta
void set_task_comm(struct task_struct *tsk, char *buf)
{
+ char tmp_comm[TASK_COMM_LEN];
+
task_lock(tsk);
+ memcpy(tmp_comm, tsk->comm_buf, TASK_COMM_LEN);
+ tsk->comm = tmp_comm;
/*
- * Threads may access current->comm without holding
- * the task lock, so write the string carefully.
- * Readers without a lock may see incomplete new
- * names but are safe from non-terminating string reads.
+ * Make sure no one is still looking at tsk->comm_buf
*/
- memset(tsk->comm, 0, TASK_COMM_LEN);
- wmb();
- strlcpy(tsk->comm, buf, sizeof(tsk->comm));
+ synchronize_rcu();
+
+ strlcpy(tsk->comm_buf, buf, sizeof(tsk->comm_buf));
+ tsk->comm = tsk->comm_buf;
+ /*
+ * Make sure no one is still looking at the
+ * stack-allocated buffer
+ */
+ synchronize_rcu();
+
task_unlock(tsk);
perf_event_comm(tsk);
}
-- Dave
(Cc to John Stultz, the /proc/<pid>/comm author. I think we need to hear his opinion.)
The concept is OK to me, but AFAIK some callers are now using
ARRAY_SIZE(tsk->comm) or sizeof(tsk->comm). Those callers probably need
to be changed too.
Thanks.
one more correction.
> void set_task_comm(struct task_struct *tsk, char *buf)
> {
> + char tmp_comm[TASK_COMM_LEN];
> +
> task_lock(tsk);
>
> + memcpy(tmp_comm, tsk->comm_buf, TASK_COMM_LEN);
> + tsk->comm = tmp_comm;
> /*
> - * Threads may access current->comm without holding
> - * the task lock, so write the string carefully.
> - * Readers without a lock may see incomplete new
> - * names but are safe from non-terminating string reads.
> + * Make sure no one is still looking at tsk->comm_buf
> */
> - memset(tsk->comm, 0, TASK_COMM_LEN);
> - wmb();
> - strlcpy(tsk->comm, buf, sizeof(tsk->comm));
> + synchronize_rcu();
The doc says,
/**
* synchronize_rcu - wait until a grace period has elapsed.
*
And here it is called under a spinlock.
Yeah, yeah... see "completely untested". :)
I'll see if dropping the locks or something else equally hackish can
help.
-- Dave
> > That was true a while ago, but you now need to protect every thread's
> > ->comm with get_task_comm() or ensuring task_lock() is held to protect
> > against /proc/pid/comm which can change other thread's ->comm. That was
> > different before when prctl(PR_SET_NAME) would only operate on current, so
> > no lock was needed when reading current->comm.
>
> Right. /proc/pid/comm is evil. We have to fix it. otherwise we need change
> all of current->comm user. It's very lots!
>
Fixing it in this case would be removing it and only allowing it for
current via the usual prctl() :) The code was introduced in 4614a696bd1c
(procfs: allow threads to rename siblings via /proc/pid/tasks/tid/comm) in
December 2009 and seems to originally be meant for debugging. We simply
can't continue to let it modify any thread's ->comm unless we change the
over 300 current->comm dereferences in the kernel.
I'd prefer that we remove /proc/pid/comm entirely or at least prevent
writing to it unless CONFIG_EXPERT.
Eeeh. That's probably going to be a tough sell, as I think there is
wider interest in what it provides. It's useful for debugging
applications, not kernels, so I doubt folks will want to rebuild their
kernel to try to analyze a java issue.
So I'm well aware that there is the chance that you catch the race and
read an incomplete/invalid comm (it was discussed at length when the
change went in), but somewhere I've missed how that's causing actual
problems. Other than just being "evil" and having the documented race,
could you clarify what the issue is that you're hitting?
thanks
-john
The problem is that this isn't documented, either. OK, I recognize that
you introduced a new locking rule for task->comm, but it isn't documented
anywhere, so we have no way to review whether the current callsites are
correct or not. Can you please document it? And I have a question: do you
mean task->comm readers now don't need task_lock() even for another
thread?
_If_ every task->comm reader has to accept that it may read an
incomplete/invalid comm, task_lock() doesn't help at all.
And one correction.
------------------------------------------------------------------
static ssize_t comm_write(struct file *file, const char __user *buf,
size_t count, loff_t *offset)
{
struct inode *inode = file->f_path.dentry->d_inode;
struct task_struct *p;
char buffer[TASK_COMM_LEN];
memset(buffer, 0, sizeof(buffer));
if (count > sizeof(buffer) - 1)
count = sizeof(buffer) - 1;
if (copy_from_user(buffer, buf, count))
return -EFAULT;
p = get_proc_task(inode);
if (!p)
return -ESRCH;
if (same_thread_group(current, p))
set_task_comm(p, buffer);
else
count = -EINVAL;
------------------------------------------------------------------
This code doesn't have a proper credential check. IOW, you forgot the
pthread_setuid_np() case.
ping?
Sorry if this somehow got off on the wrong foot. It's just surprising to
see such passion bubble up after almost two years of quiet since the
proc patch went in.
So I'm not proposing comm be totally lock free (Dave Hansen might do
that for me, we'll see :) but when the original patch was proposed, the
idea that transient empty or incomplete comms would be possible was
brought up and didn't seem to be a big enough issue at the time to block
it from being merged.
It's just that having a more specific case where these transient
null/incomplete comms cause an issue would help prioritize the need for
correctness.
In the meantime, I'll put some effort into trying to protect unlocked
current->comm access using get_task_comm() where possible. Won't happen
in a day, and help would be appreciated.
When we hit the point where the remaining places are where the task_lock
can't be taken, we can either live with the possible incomplete comm or
add a new lock to protect just the comm.
thanks
-john
Sorry, could you expand on this a bit? Google isn't coming up with much
for pthread_setuid_np. Can a thread actually end up with different uid
then the process it is a member of?
Or is same_thread_group not really what I think it is? What would be a
better way to check that the two threads are members of the same
process?
thanks
-john
> Sorry if this somehow got off on the wrong foot. It's just surprising to
> see such passion bubble up after almost two years of quiet since the
> proc patch went in.
>
It hasn't been two years, it hasn't even been 18 months.
$ git diff 4614a696bd1c.. | grep "^+.*current\->comm" | wc -l
42
Apparently those dozens of new references directly to current->comm since
the change also were unaware of the need to use get_task_comm() to avoid a
racy writer. I don't think there's any code in the kernel that is ok with
corrupted task names being printed: those messages are usually important.
> So I'm not proposing comm be totally lock free (Dave Hansen might do
> that for me, we'll see :) but when the original patch was proposed, the
> idea that transient empty or incomplete comms would be possible was
> brought up and didn't seem to be a big enough issue at the time to block
> it from being merged.
>
I'm not really interested in the discussion that happened at the time, I'm
concerned about racy readers of any thread's comm that result in corrupted
strings being printed or used in the kernel.
> It's just that having a more specific case where these transient
> null/incomplete comms cause an issue would help prioritize the need for
> correctness.
>
It doesn't seem like there was any due diligence to ensure other code
wasn't broken. When comm could only be changed by prctl(), we needed no
protection for current->comm and so code naturally will reference it
directly. Since that's now changed, no audit was done to ensure the 300+
references throughout the tree don't require non-racy reads.
> In the meantime, I'll put some effort into trying to protect unlocked
> current->comm acccess using get_task_comm() where possible. Won't happen
> in a day, and help would be appreciated.
>
We need to stop protecting ->comm with ->alloc_lock, since that lock
covers other members of task_struct and may or may not already be held
in a function that wants to read ->comm. We should probably introduce a
seqlock.
--
Agreed. My initial approach is to consolidate accesses to use
get_task_comm(), with special case to skip the locking if tsk==current,
as well as a lock free __get_task_comm() for cases where its not current
being accessed and the task locking is already done.
Once that's all done, the next step is to switch to a seqlock (or
possibly RCU if Dave is still playing with that idea), internally in the
get_task_comm implementation and then yank the special __get_task_comm.
But other suggestions are welcome.
thanks
-john
So thinking further, this can be simplified by adding the seqlock first,
and then retaining the task_locking only in the set_task_comm path until
all comm accessors are converted to using get_task_comm.
I'll be sending out some initial patches for review shortly.
Yes. The Linux kernel _always_ cares only about per-thread uids.
With glibc 2.3.3 or earlier, the kernel syscall was used directly, and
userland applications also had no way to change a per-process uid.
With glibc 2.3.4 or later, glibc implements per-process setuid by using
signals for inter-thread communication (i.e., every thread calls the
setuid() syscall internally). Hm, currently pthread_setuid_np doesn't
have a properly exported header file, so perhaps we only need to worry
about syscall(NR_uid) and old libc?
Anyway, if you look at the task_struct definition, you can easily see
that it has a cred.
Thanks.
> So thinking further, this can be simplified by adding the seqlock first,
> and then retaining the task_locking only in the set_task_comm path until
> all comm accessors are converted to using get_task_comm.
>
On second thought, I think it would be better to just retain using a
spinlock but instead of using alloc_lock, introduce a new spinlock to
task_struct for the sole purpose of protecting comm.
And, instead of using get_task_comm() to write into a preallocated
buffer, I think it would be easier in the vast majority of cases that
you'll need to convert to just provide task_comm_lock(p) and
task_comm_unlock(p) so that p->comm can be dereferenced safely.
get_task_comm() could use that interface itself and then write into a
preallocated buffer.
The problem with using get_task_comm() everywhere is it requires 16
additional bytes to be allocated on the stack in hundreds of locations
around the kernel which may or may not be safe.
So my concern with this is that it means one more lock that could be
mis-nested. By keeping the locking isolated to the get/set_task_comm, we
can be sure that won't happen.
Also tracking new current->comm references will be easier if we just
don't allow new ones. Validating that all the comm references are
correctly locked becomes more difficult if we need locking at each use
site.
Further, since I'm not convinced that we never reference current->comm
from irq context, if we go with spinlocks, we're going to have to
disable irqs in the read path as well. seqlocks were nice for that
aspect.
> get_task_comm() could use that interface itself and then write into a
> preallocated buffer.
>
> The problem with using get_task_comm() everywhere is it requires 16
> additional bytes to be allocated on the stack in hundreds of locations
> around the kernel which may or may not be safe.
True. Although is this maybe a bit overzealous?
Maybe I can make sure not to add any mid-layer stack nesting by limiting
the scope of the 16 bytes to just around where they are used. This would
ensure we're only adding 16 bytes to any current usage.
Other ideas?
thanks
-john
Ok.. trying to find a middle ground here by replying to my own
concerns. :)
> So my concern with this is that it means one more lock that could be
> mis-nested. By keeping the locking isolated to the get/set_task_comm, we
> can be sure that won't happen.
>
> Also tracking new current->comm references will be easier if we just
> don't allow new ones. Validating that all the comm references are
> correctly locked becomes more difficult if we need locking at each use
> site.
So maybe we still ban current->comm access and instead have a
lightweight get_comm_locked() accessor or something like that. Then we can
add debugging options to validate that the lock is properly held
internally.
> Further, since I'm not convinced that we never reference current->comm
> from irq context, if we go with spinlocks, we're going to have to
> disable irqs in the read path as well. seqlocks were nice for that
> aspect.
rwlocks can resolve this concern.
Any other thoughts?