Qubes 3.2-rc2 very high hard disk activity


donoban

Aug 16, 2016, 6:55:15 AM
to qubes-users
Hi

The computer I upgraded from 3.1 to 3.2 is having performance problems
when I have a few VMs running. It has a non-SSD hard disk, so I know
that's going to be a bottleneck and I don't expect fast performance,
but I'm sure it's running much worse than it did with 3.1.


Using iotop in dom0 I examined the processes that use the hard disk
when it gets totally lagged. I see:

- Some [loopX] processes... are those the loop devices for the VMs? I
suppose it's normal to have pretty high I/O from these processes.

- systemd-journald shows up pretty often when the lag starts; in fact
I have 873 MB in /var/log/journal. Is this normal?

- I also see some rsyslogd processes and qvm-sync-clock.
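
For reference, the iotop invocation I'm using is roughly the following
(exact flags shouldn't matter much; -o limits the output to processes
actually doing I/O, -P shows processes instead of threads, -a shows
accumulated totals):

    sudo iotop -o -P -a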


I have to say that this never happened with Qubes 3.1. I just couldn't
open more than 4 or 5 AppVMs, but it always worked pretty well.

Any ideas?

donoban

Aug 16, 2016, 6:58:41 AM
to qubes...@googlegroups.com
I have also experienced a lot of processes being killed because I run
out of memory. This never happened with 3.1.

Andrew David Wong

Aug 16, 2016, 12:17:15 PM
to donoban, qubes...@googlegroups.com

On 2016-08-16 03:58, donoban wrote:
> I have also experienced a lot of processes being killed because I run
> out of memory. This never happened with 3.1.
>

Is it possible that the high HDD activity is excessive swap usage due to low
memory?

--
Andrew David Wong (Axon)
Community Manager, Qubes OS
https://www.qubes-os.org

donoban

Aug 16, 2016, 1:39:41 PM
to qubes...@googlegroups.com



On 08/16/2016 06:17 PM, Andrew David Wong wrote:
> On 2016-08-16 03:58, donoban wrote:
>> I have also experienced a lot of processes being killed because I
>> run out of memory. This never happened with 3.1.
>
>
> Is it possible that the high HDD activity is excessive swap usage
> due to low memory?
>
>

I don't use swap at the moment, and I didn't with 3.1 either. I should
probably add it, but only after I "fix" the "hard disk problem".

Chris Laprise

Aug 16, 2016, 2:42:09 PM
to qubes...@googlegroups.com
On 08/16/2016 01:39 PM, donoban wrote:
>
>
>
> On 08/16/2016 06:17 PM, Andrew David Wong wrote:
>> On 2016-08-16 03:58, donoban wrote:
>>> I have also experienced a lot of processes being killed because I
>>> run out of memory. This never happened with 3.1.
>>
>> Is it possible that the high HDD activity is excessive swap usage
>> due to low memory?
>>
>>
> I don't use swap at the moment, and I didn't with 3.1 either. I should
> probably add it, but only after I "fix" the "hard disk problem".
>

Each VM has its own swap. If you misconfigure a VM with too little
memory (which can easily happen if you inadvertently turn off memory
balancing), it can exhibit the symptoms you describe.
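
A quick way to sanity-check that from dom0 (a rough sketch; "work" is
just a placeholder VM name, and I'm going from memory on the exact R3.2
syntax, so double-check against your install):

    # current memory / maxmem assigned to the VM
    qvm-prefs work

    # per-VM services; meminfo-writer must be enabled for the VM to
    # take part in memory balancing
    qvm-service work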

Chris

donoban

Aug 16, 2016, 2:49:48 PM
to qubes...@googlegroups.com



On 08/16/2016 08:42 PM, Chris Laprise wrote:
> Each VM has its own swap. If you misconfigure a VM with too little
> memory (which can easily happen if you inadvertently turn off memory
> balancing), it can exhibit the symptoms you describe.
>
> Chris
>

Oh, maybe my problem is a custom qmemman.conf. I will check it.

Marek Marczykowski-Górecki

Aug 16, 2016, 2:54:40 PM
to donoban, qubes-users

On Tue, Aug 16, 2016 at 12:55:03PM +0200, donoban wrote:
> Hi
>
> The computer I upgraded from 3.1 to 3.2 is having performance problems
> when I have a few VMs running. It has a non-SSD hard disk, so I know
> that's going to be a bottleneck and I don't expect fast performance,
> but I'm sure it's running much worse than it did with 3.1.
>
>
> Using iotop in dom0 I examined the processes that use the hard disk
> when it gets totally lagged. I see:
>
> - Some [loopX] processes... are those the loop devices for the VMs? I
> suppose it's normal to have pretty high I/O from these processes.

Take a look at `losetup -a` output to match which VM corresponds to
which loop device.
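
For example (illustrative output only; loop numbers and paths will
differ on your system):

    $ sudo losetup -a
    /dev/loop0: ... (/var/lib/qubes/appvms/personal/volatile.img)
    /dev/loop1: ... (/var/lib/qubes/appvms/personal/private.img)
    /dev/loop2: ... (/var/lib/qubes/vm-templates/fedora-23/root.img)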

> - systemd-journald shows up pretty often when the lag starts; in fact
> I have 873 MB in /var/log/journal. Is this normal?

Yes, this is where journald keeps its logs.
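
If the size bothers you, you can check and trim it with the standard
journald tooling (nothing Qubes-specific; the 200M figure below is just
an example, and --vacuum-size needs a reasonably recent systemd):

    # how much disk the journal currently uses
    journalctl --disk-usage

    # trim old entries down to roughly 200 MB
    sudo journalctl --vacuum-size=200M

    # or cap it permanently in /etc/systemd/journald.conf:
    #   [Journal]
    #   SystemMaxUse=200M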

> - I also see some rsyslogd processes and qvm-sync-clock.

Does rsyslogd use a lot of CPU, or is it just idle?

> I have to say that this never happened with Qubes 3.1. I just couldn't
> open more than 4 or 5 AppVMs, but it always worked pretty well.

Take a look at `top` in dom0 and sort the processes by memory usage (a
single '>' key press). If some dom0 process is using too much memory,
you should see it there. Normally the Xorg process should be at the top
(on my system it has just under 200M in the RES column, but you
probably have even less).

Also check swap usage in the VMs - maybe it's about some VM, not dom0?
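
Something like this from dom0 is enough for a quick look (a sketch;
"work" is a placeholder VM name):

    # print memory and swap usage inside a running VM
    qvm-run -p work 'free -m; swapon -s'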

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

donoban

Aug 20, 2016, 4:57:55 AM
to qubes...@googlegroups.com
Hi,

Well, I restored qmemman.conf to defaults and it seems to be working
fine now. I had a smaller cache-margin-factor, which was probably
causing heavy paging (thrashing) in the AppVMs.
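
In case it helps anyone else: if I remember right, the stock
/etc/qubes/qmemman.conf looks roughly like this (values from memory,
double-check on your own install):

    [global]
    vm-min-mem = 200MiB
    dom0-mem-boost = 350MiB
    cache-margin-factor = 1.3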

I don't know why I didn't notice the same on 3.1.