
Bug#1057466: htop: CPU limit Regression in htop 3.2.2 caused by patch in deb package (?)


Claudio Kuenzler

Dec 5, 2023, 9:10:05 AM
Package: htop
Version: 3.2.2-2
Severity: normal

Dear Maintainer,

It seems that htop 3.2.2 for Debian 12/Bookworm contains a patch which
removes LXC-specific code that identifies cgroup-limited CPU cores.

Due to that patch, which removes the cgroup limit lookup, htop shows
all physical cores inside an LXC container. The container in question has
a CPU limit of 2 CPUs. htop should show two CPUs at the top of the
ncurses UI; however, on Debian 12 all physical cores are showing up.

Manually compiling and running htop 3.2.2 from source shows the correct
number of cgroup-limited CPUs inside an LXC container.
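
For reference, the limit itself is visible inside the container via the
cgroup filesystem. Here is a minimal sketch of that kind of lookup,
illustrative only and not htop's actual code, assuming cgroup v2 mounted
at /sys/fs/cgroup (compile with -lm):

/* Illustrative sketch only; not htop's actual code. Assumes
 * cgroup v2 mounted at /sys/fs/cgroup inside the container. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *f = fopen("/sys/fs/cgroup/cpu.max", "r");
    if (!f) { perror("cpu.max"); return 1; }

    char quota[32];
    long period;
    if (fscanf(f, "%31s %ld", quota, &period) != 2) { fclose(f); return 1; }
    fclose(f);

    if (quota[0] == 'm')  /* literal "max" means no quota is set */
        puts("no CPU quota set");
    else                  /* e.g. quota=200000 period=100000 -> 2 CPUs */
        printf("CPU limit: %.0f\n", ceil(atol(quota) / (double)period));
    return 0;
}

Note that LXC/LXD may implement a CPU limit as a cpuset rather than a
CFS quota; in that case cpuset.cpus.effective, not cpu.max, is the
relevant file.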

To me it looks as if the patch, added in the deb package, is a
regression. Was there a specific reason to add it? I can't see
what is supposed to be broken without that patch.

-- System Information:
Debian Release: 12.2
APT prefers stable-updates
APT policy: (500, 'stable-updates'), (500, 'stable-security'), (500, 'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 5.10.0-0.deb10.16-amd64 (SMP w/16 CPU threads)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE=en_US:en
Shell: /bin/sh linked to /usr/bin/bash
Init: systemd (via /run/systemd/system)

Versions of packages htop depends on:
ii libc6 2.36-9+deb12u3
ii libncursesw6 6.4-4
ii libnl-3-200 3.7.0-0.2+b1
ii libnl-genl-3-200 3.7.0-0.2+b1
ii libtinfo6 6.4-4

htop recommends no packages.

Versions of packages htop suggests:
pn lm-sensors <none>
ii lsof 4.95.0-1
pn strace <none>

-- no debconf information

Daniel Lange

Dec 5, 2023, 11:10:06 AM
The rationale is given at the top of the patch that you found
(001_remove_lxc_special_handling.patch) and the matching commit

<https://github.com/htop-dev/htop/commit/11318b5ef6de6b2f80186a888cd5477e0ff167bb>

We don't have any better LXC handling, so I opted for showing the
reality (visible CPUs that the container cannot schedule load on) over
the bugs we had otherwise.
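
To illustrate the distinction: the kernel exposes both the visible CPU
count and the set a process may actually be scheduled on. A minimal
sketch (not htop code, and assuming the limit is implemented as a
cpuset; a pure CFS quota would not narrow the affinity mask):

/* Sketch: contrast CPUs visible to the system with CPUs this
 * process may actually be scheduled on. Illustrative only. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t set;
    if (sched_getaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_getaffinity");
        return 1;
    }
    printf("visible CPUs:     %ld\n", sysconf(_SC_NPROCESSORS_ONLN));
    printf("schedulable CPUs: %d\n", CPU_COUNT(&set));
    return 0;
}

Inside a container pinned to two CPUs this would print 16 visible CPUs
but only 2 schedulable ones.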

You created upstream #1332 but did not try the latest htop main branch,
did you? I suspect it will be the same but it would be nice to confirm.
NB: There have been some improvements in the cgroup name handling for LXC
since the 3.2.2 release.

Claudio Kuenzler

Dec 5, 2023, 12:00:06 PM
On Tue, Dec 5, 2023 at 4:56 PM Daniel Lange <DLa...@debian.org> wrote:
> The rationale is given at the top of the patch that you found
> (001_remove_lxc_special_handling.patch) and the matching commit
>
> <https://github.com/htop-dev/htop/commit/11318b5ef6de6b2f80186a888cd5477e0ff167bb>
>
> We don't have any better LXC handling, so I opted for showing the
> reality (visible CPUs that the container cannot schedule load on) over
> the bugs we had otherwise.

Thank you, Daniel, for your quick response.

Probably it depends on the viewpoint, but the reality for me is that this container is able to use two CPUs.
The fact that all physical cores are showing up is misleading, especially as htop is often used for a quick view of resource usage.
 
> You created upstream #1332 but did not try the latest htop main branch,
> did you? I suspect it will be the same but it would be nice to confirm.
> NB: There have been some improvements in the cgroup name handling for LXC
> since the 3.2.2 release.

The current main branch shows the same behaviour: all physical cores are showing up.
Here is a comparison:

htop 3.2.1 shows 2 CPUs, 31 tasks, 66 threads. htop uses roughly 0.7% CPU.
htop 3.2.2 shows 2 CPUs, 31 tasks, 66 threads. htop uses roughly 1.3% CPU.
htop 3.2.2 with that patch shows 16 CPUs, 31 tasks, 66 threads. htop uses between 5.3% and 10.6% CPU.
htop current main branch shows 16 CPUs, 31 tasks, 66 threads. htop uses between 5.3% and 10.6% CPU.

I fail to see why this patch (removing the LXC handling) was implemented. What was the bug that motivated it?

Let me know if I can help with additional tests in my environments.

Daniel Lange

Dec 5, 2023, 12:30:05 PM
On 05.12.23 at 17:43, Claudio Kuenzler wrote:
> I fail to see why this patch (removing the LXC handling) was
> implemented. What was the bug that motivated it?

The processors shown were not necessarily the ones running the load,
which was easy to see from mismatched temperature and speed measurements.

Claudio Kuenzler

Dec 5, 2023, 1:40:05 PM
> The processors shown were not necessarily the ones running the load,
> which was easy to see from mismatched temperature and speed measurements.

Oh yes, I agree, that's annoying. Actually more annoying than seeing the unused CPUs.

However, the situation of the OP in issue https://github.com/htop-dev/htop/issues/1195, which "caused" the patch, is slightly different.
The OP seems to be using LXD with just a CPU limit set, i.e. random threads and no fixed CPU assignment.
In my situation I have assigned a specific set of CPUs (e.g. 6-7) to the LXC container.
Here htop reports the correct CPU usage. I compared CPU usage on the host and inside the container with htop 3.2.2 (source) and htop 3.2.2 (deb), and they show the same values (with some millisecond diffs, of course).
You can see both htop 3.2.2 from source and from the deb package running in parallel inside the same container: https://www.claudiokuenzler.com/media/htop-3.2.2-comparison.png

Now the question is whether this situation is somehow detectable. Is there some "knowledge" inside the container that fixed CPU threads are assigned to it, or are these randomly assigned threads? But this discussion is out of scope for Debian and should be taken to htop upstream. Maybe also include Stephane Graber or someone from the LXC maintainers.
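
For what it's worth, a fixed assignment does seem to be visible inside
the container, at least with cgroup v2. A minimal sketch, illustrative
only (the path may differ on other setups):

/* Sketch: print the CPU set the container is pinned to (cgroup v2).
 * A fixed assignment shows up as e.g. "6-7". Illustrative only. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/sys/fs/cgroup/cpuset.cpus.effective", "r");
    if (!f) { perror("cpuset.cpus.effective"); return 1; }
    char buf[256];
    if (fgets(buf, sizeof(buf), f))
        printf("effective cpuset: %s", buf);
    fclose(f);
    return 0;
}

With CPUs 6-7 assigned this prints "effective cpuset: 6-7", while a
quota-only limit leaves the full host range there, which might be one
way to tell the two cases apart.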

I'll let you close this Debian bug now, as I agree: in a situation with randomly assigned CPU threads the htop CPU usage can be wrong, and that is worse than seeing all CPU threads from the host.