Performance/Resource Monitor for the Xen Hypervisor / attaching vCPUs to VMs


piitb...@gmail.com
Mar 18, 2016, 2:24:02 PM
to qubes-users
Hello,

1st question
Is there a way to get more information about resource utilization when running the Qubes OS Xen hypervisor?
Especially regarding:
  • CPU utilization
  • RAM utilization
  • IOPS / disk queue
2nd question
Currently my VMs all have the same setting under Settings / Advanced / VCPUs no.: 8.
Should I leave this? Our virtualization system engineer told me that it is better not to attach too many vCPUs to a VM (with regard to VMware vSphere), as this can reduce (!) performance and lead to higher CPU utilization.


- Piit


Zrubi
Mar 21, 2016, 6:19:58 AM
to piitb...@gmail.com, qubes-users, Marek Marczykowski-Górecki

On 03/18/2016 07:24 PM, piitb...@gmail.com wrote:
> *2nd question* Currently my VMs all have the same setting under
> Settings / Advanced / VCPUs no.: 8. Should I leave this? Our
> virtualization system engineer told me that it is better not to
> attach too many vCPUs to a VM (with regard to VMware vSphere), as
> this can reduce (!) performance and lead to higher CPU utilization.

I believe that assigning ALL the real CPUs to all of your VMs will
decrease performance. My experience confirms this.

I started out by assigning only a single vCPU to my VMs, but because
current kernels are optimized for SMP, that may cause problems.

There was a discussion about this years ago, and (if I recall
correctly) we agreed that 2 vCPUs would be the ideal default for a
general VM.
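
For what it's worth, the per-VM vCPU count can also be changed from dom0
on the command line. A minimal sketch, assuming the R3.x qvm-prefs
syntax and a hypothetical VM named "work" (check the property list on
your version first):

  # list the VM's current properties, including the vcpus entry
  qvm-prefs -l work

  # set the vCPU count to 2; takes effect the next time the VM starts
  qvm-prefs -s work vcpus 2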


--
Zrubi

piitb...@gmail.com
Mar 27, 2016, 6:44:26 PM
to qubes-users, piitb...@gmail.com, marm...@invisiblethingslab.com
On Monday, March 21, 2016 at 11:19:58 AM UTC+1, Laszlo Zrubecz wrote:
> I believe that assigning ALL the real CPUs to all of your VMs will
> decrease performance. My experience confirms this.
> I started out by assigning only a single vCPU to my VMs, but because
> current kernels are optimized for SMP, that may cause problems.
> There was a discussion about this years ago, and (if I recall
> correctly) we agreed that 2 vCPUs would be the ideal default for a
> general VM.

I agree, I think 2 vCPUs should be fine.
Our VMware guy also told me that adding too many vCPUs generates scheduling overhead, since the hypervisor has to schedule all of those vCPUs.
Having 2 vCPUs might also be a good thing in case a misbehaving single process is eating up one (v)CPU core.

Regarding my other question about performance monitoring: I found the xentop command, which displays per-VM resource usage.
xentop can be launched from dom0.
More about xentop: https://support.citrix.com/article/CTX127896
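
In case you want to record the numbers over time instead of watching them
live, xentop also has a batch mode. A quick sketch (the log path is just
an example):

  # interactive view in dom0, refreshing every second
  xentop -d 1

  # batch mode: take 10 snapshots one second apart and append them to a log
  xentop -b -d 1 -i 10 >> /tmp/xentop.log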

I did some testing to understand resource consumption:
Start Windows Task Manager and switch to the Performance tab.
Launch cmd.exe within my Windows HVM and start a directory listing of all files via "dir c:\ /s".
This generates some CPU load on 4 of my 8 vCPUs, while only 10% total CPU utilization is shown (in Task Manager).

If I look at xentop in dom0, I see a higher CPU utilization (>100%):

[piit@dom0 ~]$ xentop -d 1 -n
[...]
xentop - 01:03:34   Xen 4.6.0
7 domains: 2 running, 5 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 33132780k total, 15711164k used, 17421616k free    CPUs: 8 @ 2793MHz
      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS   VBD_OO   VBD_RD   VBD_WR  VBD_RSECT
      dom0 -----r       1036   11.6    4164116   12.6   21332024      64.4     8    0        0        0    0        0        0        0          0
sys-firewa --b---         22    0.1    3047412    9.2    3073024       9.3     8    0        0        0    4        0    18672     2622     549475
   sys-net --b---         43    0.1     298996    0.9     308224       0.9     8    0        0        0    4        0    38764     9149    1264233
sys-whonix --b---         32    0.3    3071988    9.3    3073024       9.3     8    0        0        0    4        0     8840     1470     484565
 untrusted --b---       1407    1.2     409588    1.2     410624       1.2     8    0        0        0    4        6 16272276  4435715  230661555
win7-ready -----r       1461  103.9    4194296   12.7    4195328      12.7     8    0        0        0    2        0        0        0          0
win7-ready --b---         69    0.9      45056    0.1      46080       0.1     1    0        0        0    2        0   306192    42446   11272312

Another question: why does the Windows HVM appear twice in xentop, while it is running only once? (Possibly the second entry, with only 1 vCPU and ~45 MB of RAM, is the HVM's QEMU stub domain, its name truncated to ten characters just like "sys-firewa" above.)

It seems that I need to divide the CPU% by the number of vCPUs (as done here: http://www.doxer.org/cpu-usage-in-xen-vm-using-xentop/):
103.9 CPU% / 8 vCPUs ≈ 13% (which roughly matches the 10% CPU utilization measured inside the VM via Task Manager).

A good approach might be to start with 2 vCPUs and only add more CPU resources if we see that the CPU utilization in xentop is high (CPU% / VCPUS > 80%) on a regular basis; a sketch of how to script that follows.
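
That normalization and the 80% rule of thumb can be scripted. A rough
dom0 sketch, assuming the default xentop batch-mode column order (CPU%
in column 4, VCPUS in column 9; the first sample's CPU% is unreliable,
so two samples are taken and only the second is used):

  xentop -b -d 1 -i 2 | awk '
      $1 == "NAME" { n++ }                  # each header marks a new sample
      n == 2 && $2 ~ /^[rbpscd-]+$/ && length($2) == 6 && $9 > 0 {
          pct = $4 / $9                     # CPU(%) divided by vCPU count
          printf "%-12s %6.1f%%%s\n", $1, pct,
                 (pct > 80 ? "  <-- consider more vCPUs" : "")
      }'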

Another thing I found out using xentop is that the untrusted VM is running out of RAM and using swap.
I spotted this by looking at the VBD_OO value in xentop, which should normally be zero.

VBD_OO - prints the total number of VBD OO requests.
This shows the number of times that the VBD has encountered an "out of requests" error.
When that occurs, I/O requests for the VBD are delayed.
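
A quick way to keep an eye on that counter for all VMs from dom0; a
rough sketch, assuming VBD_OO is the 14th column of xentop's batch
output (positions may differ between Xen versions, so check the header):

  # print each domain's name and its cumulative VBD_OO counter
  xentop -b -i 1 | awk '$2 ~ /^[rbpscd-]+$/ && length($2) == 6 { print $1, $14 }'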

Strangely, the untrusted VM was only using 400 MB of RAM, while the max memory value in Qubes Manager was set to 4 GB for this VM.

Wouldn't it be better to check during installation of Qubes OS how many cores the host system has, and then decide based on that how many vCPUs should be configured for the VMs?

- Piit



donoban
Apr 5, 2016, 5:20:36 AM
to qubes...@googlegroups.com

On 28/03/16 00:44, piitb...@gmail.com wrote:
> I agree, I think 2 vCPUs should be fine. Our VMware guy also told
> me that adding too many vCPUs generates scheduling overhead, since
> the hypervisor has to schedule all of those vCPUs. Having 2 vCPUs
> might also be a good thing in case a misbehaving single process is
> eating up one (v)CPU core.

I'm testing in some VMs and I don't notice much difference. If
this is true, should default VMs start with only 2 vCPUs? (unless a
VM really needs more processors...)

Marek Marczykowski-Górecki
Apr 5, 2016, 6:48:54 AM
to donoban, qubes...@googlegroups.com

On Tue, Apr 05, 2016 at 11:20:30AM +0200, donoban wrote:
> On 28/03/16 00:44, piitb...@gmail.com wrote:
> > I agree, I think 2 vCPUs should be fine. Our VMware guy also told
> > me that adding too many vCPUs generates scheduling overhead, since
> > the hypervisor has to schedule all of those vCPUs. Having 2 vCPUs
> > might also be a good thing in case a misbehaving single process is
> > eating up one (v)CPU core.
>
> I'm testing in some VMs and I don't notice much difference. If
> this is true, should default VMs start with only 2 vCPUs? (unless a
> VM really needs more processors...)

Yes, good idea. Tracking it here:
https://github.com/QubesOS/qubes-issues/issues/1891

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

raah...@gmail.com
Apr 5, 2016, 3:33:32 PM
to qubes-users, don...@riseup.net
Thanks for that command! I've been looking for a way to identify which VM is generating the most I/O activity. I think this will help.
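
If it helps, the cumulative VBD read/write counters in xentop's batch
output can be sorted to rank VMs by I/O. A rough sketch, assuming
VBD_RD and VBD_WR are columns 15 and 16 (check the header on your
version):

  # rank VMs by total block-I/O requests (reads + writes), busiest first
  xentop -b -i 1 | awk '$2 ~ /^[rbpscd-]+$/ && length($2) == 6 { print $15 + $16, $1 }' | sort -rn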

Salmiakki
Apr 6, 2016, 6:19:26 PM
to qubes-users
Recalling your question:

> Another thing I found out using xentop is that the untrusted VM is running out of RAM and using swap.
> I spotted this by looking at the VBD_OO value in xentop, which should normally be zero.
>
> VBD_OO - prints the total number of VBD OO requests.
> This shows the number of times that the VBD has encountered an "out of requests" error.
> When that occurs, I/O requests for the VBD are delayed.
>
> Strangely, the untrusted VM was only using 400 MB of RAM, while the max memory value in Qubes Manager was set to 4 GB for this VM.

I think this might provide a partial answer: https://www.qubes-os.org/doc/qmemman/
Especially this passage:
"Currently, prefmem is simply 130% of current memory usage in a domain (without buffers and cache, but including swap)."
So it seems that if your VM has swap and it is unused, Qubes decides there is no shortage of memory.
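
One way to verify whether a VM is really dipping into swap is to ask it
directly from dom0. A sketch, assuming a Linux-based VM named
"untrusted" (qvm-run -p passes the command's output back to dom0):

  # memory and swap usage as seen inside the VM
  qvm-run -p untrusted 'free -m'

  # swap devices and how much of each is in use
  qvm-run -p untrusted 'swapon -s'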
 
