Graphical Qubes Memory Monitor


Johny Jukya

Apr 5, 2017, 2:22:35 PM
to qubes...@googlegroups.com
Q-Devs,

I've always found it a bit hard to figure out exactly what Qubes was
doing with the memory, shuffling it between VMs. The memory bar graph
in the manager isn't terribly informative. Since I have a relatively
low-memory system, managing it is important.

I had dug around the manager's source to enlighten myself, and wrote a
small text utility to pull this information and show it in a text
window. I just recently added a live pie-chart feature to it and
thought I'd share.

It could use some polish, but I think it's useful in its current state,
so I thought I'd fly it by the Qubes dev list for feedback.

It shows the memory actually in use by programs (hatched/dotted area) as
well as the full allocation for the VM (the whole pie slice).

https://github.com/johnyjukya/qmemmon

Obviously, never run any new code in dom0 unless you know what you're
doing, and you've preferably looked over the code yourself. This is
170-ish lines of Python, so giving it a sanity check isn't a huge deal.

Usage is just "python qmemmon.py [-i updateinterval]". It defaults to
updating every 2 seconds.
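
For the curious, the option handling could look something like this (a
hypothetical sketch, not the actual qmemmon.py source):

import argparse

# Hypothetical sketch of the flag parsing; names are illustrative.
parser = argparse.ArgumentParser(description="Graphical Qubes memory monitor")
parser.add_argument("-i", dest="interval", type=float, default=2.0,
                    help="update interval in seconds (default: 2)")
args = parser.parse_args()
# The interval would then drive the redraw timer, e.g.:
# timer.start(int(args.interval * 1000))  # QTimer wants milliseconds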

(Tooltips giving details on the VMs seem to be disappearing on me; I'll
update the source when I figure that out.)

Next feature will be a stacked bar-graph showing the relative VM memory
usage over time.

Cheers,

-JJ

Holger Levsen

Apr 6, 2017, 7:52:51 AM
to qubes...@googlegroups.com
Hi Johny,

On Wed, Apr 05, 2017 at 02:09:18PM -0400, Johny Jukya wrote:
> I've always found it a bit hard to figure out exactly what Qubes was
> doing with the memory, shuffling it between VMs. The memory bar graph
> in the manager isn't terribly informative. Since I have a relatively
> low-memory system, managing it is important.

agreed.

[...]
> It shows the memory actually in use by programs (hatched/dotted area) as
> well as the full allocation for the VM (the whole pie slice).
>
> https://github.com/johnyjukya/qmemmon

wow, that's a pretty neat looking screenshot there!

> Obviously, never run any new code in dom0 unless you know what you're
> doing, and you've preferably looked over the code yourself. This is
> 170-ish lines of Python, so giving it a sanity check isn't a huge deal.

agreed (and done so), but I still don't want to run more Qt stuff in dom0… :)

however, this might be just me. And now I also do think that the
information this tool provides is currently missing in Qubes OS, and
that *maybe* your tool could be tuned to be included by default…

However, I'm wondering whether it would be possible to split it: do the
data collection in dom0, because that's where it has to be done, and do
the data processing and presentation in a VM (either an existing one or
a specially started dispVM).

For myself, I've modified qubes-i3status' status_mem function like this:

status_mem() {
    # Total allocated: sum xentop's per-domain MEM(k) column, in MiB.
    local mem=$(( ( $(xentop -b -i 1 | grep -v NAME | cut -b 38-47 | xargs echo | sed 's# #+#g') ) / 1024 ))
    # Actually used: dom0's own vmstat, plus vmstat run in each VM via qrexec.
    local reallymem=$(( ( $( ( vmstat -s -S K
        for VM in $(xl list | egrep -v "(Name|dom0)" | cut -d " " -f1); do
            /usr/lib/qubes/qrexec-client -d $VM user:"/usr/bin/vmstat -s -S K" -t -T
        done ) | grep "used memory" | cut -d "K" -f1 | xargs echo | sed 's# #+#g') ) / 1024 ))
    json mem "Mem: ${reallymem}M/${mem}M"
}

and then qubes-i3status takes care to only run this once a minute.

But this (my solution) only tells me how much memory is really used, while
yours has that and more details. Nice work!


--
cheers,
Holger

Johny Jukya

Apr 8, 2017, 8:56:43 PM
to qubes...@googlegroups.com
(Sorry if this isn't linked to the original thread; my qubes-devel
subscription messed up and I don't have a reference to the original.)

>> It shows the memory actually in use by programs (hatched/dotted area)
>> as well as the full allocation for the VM (the whole pie slice).
>>
>> https://github.com/johnyjukya/qmemmon

FYI, there's an updated version on GitHub that fixes the tooltip
information (and some general tidying up).

>> Obviously, never run any new code in dom0 unless you know what you're
>> doing, and you've preferably looked over the code yourself. This is
>> 170-ish lines of Python, so giving it a sanity check isn't a huge deal.

Holger Levsen wrote:
> agreed (and done so), but I still don't want to run more Qt stuff in
> dom0… :)
>
> however, this might be just me. And now I also do think that the
> information this tool provides is currently missing in Qubes OS, and
> that *maybe* your tool could be tuned to be included by default…

Yes, this tool would most naturally and elegantly be part of
qubes-manager.py. Looking over the manager's code provided the
inspiration for the monitor.

And it's written in Python with Qt4, same as the manager, using the same
PyQt4/Qubes libs.

Since the manager is somewhat "sacred" and not to be updated lightly, I
figured separating out the monitor for early feedback would be more
practical to get things rolling.

> However, I'm wondering whether it would be possible to split it: do
> the data collection in dom0, because that's where it has to be done,
> and do the data processing and presentation in a VM (either an
> existing one or a specially started dispVM).

I'm not sure what you'd gain, other than isolating the new code (which,
since I'm its author, isn't a big concern for me personally :)).

I would estimate that the code involved in moving the data around would
amount to more than the 170-ish lines of Python in the dom0 qmemmon.py,
and would itself be prone to errors and failures.

The current qubes-manager pokes through xenstore to get its data, then
uses PyQt4 to present it, all in dom0.
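
For the curious, the collection side amounts to a few xenstore reads; a
minimal sketch, assuming the stock xen.lowlevel.xs bindings and the
usual /local/domain/<id> keys (the Qubes-specific meminfo key, written
by each VM's meminfo-writer, may vary between Qubes versions):

from xen.lowlevel import xs

handle = xs.xs()
# Every domain known to Xen has a numbered directory under /local/domain.
for domid in handle.ls('', '/local/domain') or []:
    name = handle.read('', '/local/domain/%s/name' % domid)
    # The balloon target Xen is steering the domain toward, in KiB.
    target = handle.read('', '/local/domain/%s/memory/target' % domid)
    # The domain's own view of its memory, posted by meminfo-writer.
    meminfo = handle.read('', '/local/domain/%s/memory/meminfo' % domid)
    print("%s: target=%s KiB, meminfo=%r" % (name, target, meminfo))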

This utility does the same thing (xenstore/PyQt4), so you're not adding
any new library/tool/executable dependencies in dom0 over what the
manager requires.

If you tried to present it in a domU, you'd be exposing all of the
memory information from all domUs to the domU you're using for
presentation, rather than keeping it aggregated in xenstore in dom0 (as
is currently the case).

Given that the graphical monitor uses the same tech/libs as
qubes-manager in dom0, I'd say that it's safer (adds less new attack
surface) to keep it as-is in dom0, rather than shuffling the data off to
a domU for presentation, with the associated risks of errors and data
leaks.

I'll look at doing a minimal set of patches to integrate the
functionality into qubes-manager. Since the manager collects/summarizes
the same information anyway, the code changes for the pie chart etc.
might be quite small. I have a handful of cosmetic improvements I've
made to my own copy of the manager, which I'll toss out there as well.
Stay tuned.

-JJ

Johny Jukya

Apr 14, 2017, 1:53:07 PM
to qubes...@googlegroups.com
On 2017-04-08 20:56, Johny Jukya wrote:
> (Sorry if this isn't linked to the original thread; my qubes-devel
> subscription messed up and I don't have a reference to the original.)
>
>>> It shows the memory actually in use by programs (hatched/dotted
>>> area) as well as the full allocation for the VM (the whole pie
>>> slice).
>>>
>>> https://github.com/johnyjukya/qmemmon
>
> FYI, there's an updated version on GitHub that fixes the tooltip
> information (and some general tidying up).

Once again, I've updated the Git repo with some improvements (context
menu, better resizing at last, notification-area indicator). I'm having
fun getting reacquainted with Python and familiarizing myself with PyQt4
and its quirks.

It now uses the area of the slice to indicate the memory, rather than
the radius, which is more intuitive/visual.
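
Since the area of a slice grows with the square of its radius, drawing
an in-use fraction as area means scaling the inner radius by the square
root of that fraction. Roughly (a sketch, not the actual qmemmon.py
code):

import math

def inner_radius(outer_radius, used, allocated):
    # The shaded area should be proportional to used/allocated, and
    # area grows as radius**2, so scale the radius by the square root.
    return outer_radius * math.sqrt(float(used) / allocated)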

Also, by default it stays on top, unless you launch it with the "-n"
flag.

It also shows, on the main screen and in the notification area, an
average % use (in-use divided by allocated) across all memory-balanced
VMs. (Non-memory-balanced VMs are less interesting from a live
system-performance standpoint.)
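
In other words (a minimal sketch, assuming a list of per-VM
(used, allocated) pairs gathered for the memory-balanced VMs only):

def average_use_percent(vms):
    # vms: list of (used_kib, allocated_kib) tuples, balanced VMs only
    return 100.0 * sum(float(u) / a for u, a in vms) / len(vms)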

Since the Qubes memory manager's algorithm tends to divide the available
memory between memory-balanced VMs in proportion to their demand, the
end result is that the VMs all tend towards the same percentage of
in-use versus allocated memory. A fairly good algorithm in practice for
interactive use.
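
A toy illustration with made-up numbers (and a big simplification of
what qmemman actually does):

# Allocations proportional to demand equalize the in-use ratio.
used = [400.0, 800.0, 1600.0]  # MiB in use by three balanced VMs
total = 5600.0                 # MiB of host memory to divide between them
alloc = [total * u / sum(used) for u in used]  # [800.0, 1600.0, 3200.0]
print([u / a for u, a in zip(used, alloc)])    # [0.5, 0.5, 0.5]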

This can be seen on the pie graph as the inner circle generally being
the same distance out on all VMs, making a fairly consistent circle. At
first I thought the consistently-sized inner circle was a coding bug,
until I realized it was just the nature of the memory balancing. When a
single VM frees or requests memory, you'll see its inner arc move in or
out, before the pie slices are resized to make the inner arc (in-use %)
of all VMs balance out again. It's neat to see qmemman in action.

So the average in-use percentage seems to give a nice overall indication
of memory usage/system performance.

When my system gets up into the 70%-80% range, things get sluggish, so I
try to stay below that.

I won't bother cluttering the list with further minor updates to this
memory monitor tool, but will just push updates to GitHub for anyone
following along there.

Cheers,

JJ

