memory.limit_in_bytes has no effect on Debian 11


Narcis Garcia

Dec 23, 2021, 2:53:19 AM
to LXC users SPM
https://github.com/lxc/lxc/issues/4049

Can anybody guide me to what to check or test?

Thank you.



Narcis Garcia

__________
I'm using this dedicated address because personal addresses aren't
masked well enough in this public mail archive. The archive administrator
should fix this to protect against automated address collectors.

Andreas Laut

Dec 23, 2021, 3:50:43 AM
to lxc-...@lists.linuxcontainers.org
Hi,

if your host is Debian 11, you must use 'lxc.cgroup2.memory.max'
instead of the old memory.limit_in_bytes.
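
For example, something like this in the container config (the 512M value
below is only an illustration, not taken from your issue):

lxc.cgroup.memory.limit_in_bytes = 512M

would become, on a cgroup v2 host:

lxc.cgroup2.memory.max = 512M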

Debian 11 (meaning its kernel) comes with the new cgroup v2 structure.
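
If you want to confirm that on your host, one simple check (just one way
among others) is to look at the filesystem type of the cgroup mount:

stat -fc %T /sys/fs/cgroup/

On a unified cgroup v2 hierarchy this prints 'cgroup2fs'.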

https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html

Kind regards,

Andreas Laut


On 23.12.21 at 08:53, Narcis Garcia wrote:

Ervin Hegedüs

Dec 23, 2021, 4:01:31 AM
to Narcis Garcia, LXC users SPM
hi,

On Thu, Dec 23, 2021 at 08:53:12AM +0100, Narcis Garcia wrote:
> https://github.com/lxc/lxc/issues/4049
>
> Can anybody guide me to what to check or test?

maybe this one can help you:

https://alioth-lists.debian.net/pipermail/pkg-lxc-devel/Week-of-Mon-20210628/001796.html


a.

Narcis Garcia

Dec 24, 2021, 12:45:40 PM
to lxc-...@lists.linuxcontainers.org
Okay, it works with lxc.cgroup2.memory.max and lxc.cgroup2.memory.swap.max,
but lxc-ls does not report any of the RAM or SWAP values (neither current
usage nor limits).

Maybe lxc-ls should be reviewed for this and its output formats extended
with more possible columns (RAM.low, RAM.min, RAM.high, RAM.max, RAM.current).
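
In the meantime, as far as I can tell the values can be read directly
with lxc-cgroup (the container name 'mycontainer' below is only an
example):

lxc-cgroup -n mycontainer memory.current
lxc-cgroup -n mycontainer memory.max
lxc-cgroup -n mycontainer memory.swap.current
lxc-cgroup -n mycontainer memory.swap.max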

Also, the documentation is not clear about the real behaviour on the
host when a container's virtual swap is being used.


Narcis Garcia

__________
I'm using this dedicated address because personal addresses aren't
masked well enough in this public mail archive. The archive administrator
should fix this to protect against automated address collectors.
On 23/12/21 at 9:50, Andreas Laut wrote: