I noticed that memory usage was growing on my system even though it was
doing nothing... After some investigation, it looks like some kernel
component may be involved in this problem.
Is there a way to monitor the kernel's memory allocations and deallocations
in order to pinpoint which part of the system/kernel could be doing this?
Best Regards
S.Ancelot
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majo...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
You can do
cat /proc/meminfo | grep Slab
and if that grows too much over time you can do
cat /proc/slabinfo
or use a tool such as slabtop to see where the memory is going. If the
memory is being leaked in the kmalloc caches, you can use
CONFIG_DEBUG_SLAB_LEAK to see which part of the kernel is doing all those
allocations (not really suitable for production machines).
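Watching Slab over time can be scripted; a minimal sketch (the interval and
iteration count here are arbitrary placeholders -- in practice you would log
for hours or days):

```shell
# Sample the Slab line of /proc/meminfo a few times with a timestamp.
# If the value only ever climbs across long runs, a slab leak is likely.
for i in 1 2 3; do
    printf '%s ' "$(date '+%H:%M:%S')"
    grep '^Slab:' /proc/meminfo
    sleep 1
done
```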
Also remember to check that Active + Inactive + Buffers + Cached is
roughly the same size as MemTotal - MemFree; otherwise your kernel
might be leaking full pages.
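That cross-check can be done straight from /proc/meminfo; a rough sketch (a
small residual gap is normal, since not every allocated page is accounted for
in those four fields):

```shell
# Compare MemTotal - MemFree against Active + Inactive + Buffers + Cached.
# A gap that keeps growing over time suggests full pages are being leaked.
awk '
  { m[$1] = $2 }
  END {
    used = m["MemTotal:"] - m["MemFree:"]
    accounted = m["Active:"] + m["Inactive:"] + m["Buffers:"] + m["Cached:"]
    printf "used=%d kB accounted=%d kB gap=%d kB\n", used, accounted, used - accounted
  }
' /proc/meminfo
```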
Christoph, I suppose there's some option to
Documentation/vm/slabinfo.c that provides similar output to
CONFIG_DEBUG_SLAB_LEAK for SLUB?
Pekka
> or use a tool such as slabtop to see where the memory is going. If the
> memory is being leaked in the kmalloc caches, you can use
> CONFIG_DEBUG_SLAB_LEAK to see which part of the kernel is doing all those
> allocations (not really suitable for production machines).
For slub: Just enable debugging and then do a cat on
/sys/kernel/slab/<cachename>/alloc_calls or free_calls to see where memory
is allocated or freed.
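A sketch of that, assuming a SLUB kernel booted with debugging enabled (f.e.
the slub_debug=U boot parameter); kmalloc-2048 is only an example cache name:

```shell
# Dump the allocation call sites recorded for one slab cache,
# busiest first. The alloc_calls file only exists when SLUB
# debugging is active for that cache.
cache=/sys/kernel/slab/kmalloc-2048
if [ -r "$cache/alloc_calls" ]; then
    sort -rn "$cache/alloc_calls" | head
else
    echo "no alloc_calls for $cache (SLUB debugging not enabled?)"
fi
```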
> Christoph, I suppose there's some option to
> Documentation/vm/slabinfo.c that provides similar output to
> CONFIG_DEBUG_SLAB_LEAK for SLUB?
If you run a report on a slabcache with f.e.
slabinfo kmalloc-2048
and debugging is enabled then all functions that allocate and free objects
in kmalloc-2048 are listed.
Just doing
slabinfo
lists all caches with the number of objects allocated.
> however, since there are always caches filled, is there a way to flush
> all caches and then consult slabinfo with the caches empty?
Not sure what this is about: per-cpu cached objects?
slabinfo -s
will shrink all caches and throw all cpu slabs away. The system will touch
some essential caches immediately, though, so some processors will
immediately reallocate cpu slabs.
I kept my kernel running with a few applications for 5 days, doing
nothing more than backing up a few kB of data to disk and refreshing a few X apps.
After five days, the global memory available went down from 24 MB to 8 MB...
There are some significant changes in slabinfo, but now I do not know where
to search.
active_objs (before -> after):

  proc_inode_cache:   150 ->   299
  radix_tree_node:    807 ->   870
  dentry:            4092 ->  4249
  buffer_head:       1138 ->  4824
  pid_1:               64 ->    76
  size-64:            885 ->   944

memory (before -> after):

  MemFree:       24324 kB ->   8692 kB
  Buffers:        3956 kB ->  18740 kB
  Cached:        34080 kB ->  34452 kB
  Active:       131176 kB -> 147592 kB
  Inactive:      26708 kB ->  25440 kB
  Slab:           4692 kB ->   4952 kB
  SReclaimable:   1480 kB ->   1744 kB
  SUnreclaim:     3212 kB ->   3208 kB
In the ps tree, only Xorg's SHR memory grew, from 5592 to 5612.
Look at these values more closely - this is where your memory is "gone":
> MemFree: 24324 kB ->  8692 kB
> Buffers:  3956 kB -> 18740 kB
--
Tomasz Chmielewski
http://wpkg.org
Can you please post your full dmesg, /proc/slabinfo, and /proc/meminfo
output?
Pekka Enberg wrote:
> On Tue, May 13, 2008 at 11:50 AM, Stéphane ANCELOT <sanc...@free.fr> wrote:
>
>> There are some significant changes in slabinfo, but now I do not know
>> where to search.
>>
>
> Can you please post your full dmesg, /proc/slabinfo, and /proc/meminfo
> output?
Enclosed are the outputs.
However, after some analysis and enabling the kernel's memory DEBUG
features, I found an abnormal proc_inode_cache and a bug in the fork process
(my kernel has patches enabling realtime features).
This has been corrected, but now I am waiting again for new stability
results; there may be other problems.
Best Regards
Steph
> I kept my kernel running with a few applications for 5 days, doing
> nothing more than backing up a few kB of data to disk and refreshing a few X apps.
>
> After five days, the global memory available went down from 24 MB to 8 MB...
That is normal. Linux tries to put all memory to use and will free on
demand.
> There are some significant changes in slabinfo, but now I do not know where
> to search.
Compile the slabinfo tool.
gcc -o slabinfo linux/Documentation/vm/slabinfo.c
Then you can do
slabinfo -T
to get an overview of how much is used by slabs. But I do not see that
slabs are using an excessive amount. So toying around with slabinfo is
not going to get you anywhere.
Christoph Lameter wrote:
1) slabinfo tells me "SYSFS support for SLUB not active".
In the kernel there is either SLAB or SLUB; my kernel is currently
configured for the SLAB allocator.
It is documented that SLUB minimizes cache line usage.
Do you think I have to switch to SLUB?
2) Regarding memory debugging, your reply and some messages said it was
normal for memory to grow (with ext3 buffer_heads...) and be released
on demand.
This sounds to me like it becomes VERY VERY difficult to tell whether my
system is STABLE or NOT. Is there a way around this?
I assume I have to write some kind of small program that tries to allocate
almost all of the remaining memory at startup in order to empty the caches?
Best Regards
Steph
> In the kernel there is either SLAB or SLUB; my kernel is currently
> configured for the SLAB allocator.
SLAB does not support the slabinfo tool. It only supports /proc/slabinfo.
> It is documented that SLUB minimizes cache line usage.
> Do you think I have to switch to SLUB?
If you want to use the slabinfo tool then yes.
> 2) Regarding memory debugging, your reply and some messages said it was
> normal for memory to grow (with ext3 buffer_heads...) and be released on demand.
> This sounds to me like it becomes VERY VERY difficult to tell whether my
> system is STABLE or NOT. Is there a way around this?
This is the basic design of memory handling in Linux. Why would the use of
memory mean that your system is unstable?
> I assume I have to write some kind of small program that tries to allocate
> almost all of the remaining memory at startup in order to empty the caches?
There is a way to drop caches. See what you can do with
/proc/sys/vm/drop_caches
f.e.
echo 1 >/proc/sys/vm/drop_caches
echo 2 >/proc/sys/vm/drop_caches
That should free most of the cached memory.
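A sketch of using that to judge stability (needs root to write the sysctl;
the value 3 simply combines 1, the page cache, and 2, dentries and inodes):

```shell
# Show MemFree before and after dropping caches. sync first so dirty
# pages become clean and reclaimable; skip the write when not root.
grep '^MemFree:' /proc/meminfo
if [ -w /proc/sys/vm/drop_caches ]; then
    sync
    echo 3 > /proc/sys/vm/drop_caches
    grep '^MemFree:' /proc/meminfo
else
    echo "not root: skipping drop_caches"
fi
```

If MemFree recovers to roughly its boot-time level after dropping caches, the
growth was just cache, not a leak.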