After searching for information on the Qubes OS website and on Google, and after various attempts, I'm still getting this message from dom0:
"None of the selected packages could be updated."
When I click "Check for new updates", I get this message:
"There is no network connection available. Please check your connection settings and try again"
I still have 419 updates to make...
I just upgraded from Qubes OS 3.1 to Qubes release 3.2 (R3.2).
You can find the logs from the command "sudo qubes-dom0-update --clean" here:
Any help properly updating my dom0 would be much appreciated.
Thanks
[user@dom0 ~]$ sudo dnf update
sudo: dnf: command not found
This is what I've done:
sudo qubes-dom0-update systemd-compat-libs perl-libwww-perl perl-Term-ANSIColor perl-Term-Cap gdk-pixbuf2-xlib speexdsp qubes-mgmt-salt-admin-tools lvm2
--> n
sudo qubes-dom0-update --clean
These are the results of "yum info qubes-core-dom0":
Installed Packages
Name : qubes-core-dom0
Arch : x86_64
Version : 3.1.18
Release : 1.fc20
Size : 1.7 M
Repo : installed
From repo : qubes-dom0-cached
Summary : The Qubes core files (Dom0-side)
URL : http://www.qubes-os.org
License : GPL
Description : The Qubes core files for installation on Dom0.
Available Packages
Name : qubes-core-dom0
Arch : x86_64
Version : 3.2.12
Release : 1.fc23
Size : 328 k
Repo : qubes-dom0-cached
Summary : The Qubes core files (Dom0-side)
URL : http://www.qubes-os.org
License : GPL
Description : The Qubes core files for installation on Dom0.
I'm still stuck with 411 pending updates.
https://www.qubes-os.org/doc/upgrade-to-r3.2/
I think I was able to upgrade to 3.2, but with new problems.
Here are the results of "yum info qubes-core-dom0":
Redirecting to '/usr/bin/dnf info qubes-core-dom0' (see 'man yum2dnf')
Failed to synchronize cache for repo 'updates', disabling.
Failed to synchronize cache for repo 'fedora', disabling.
Failed to synchronize cache for repo 'qubes-templates-itl', disabling.
Failed to synchronize cache for repo 'qubes-dom0-current', disabling.
Installed Packages:
Name : qubes-core-dom0
Arch : x86_64
Version : 3.2.12
Release : 1.fc23
Size : 1.7 M
Repo : @System
Summary : The Qubes core files (Dom0-side)
URL : http://www.qubes-os.org
License : GPL
Description : The Qubes core files for installation on Dom0.
I also notice that there is no Wi-Fi network anymore.
And when I try to start a VM I get this message:
"Error starting VM 'email': Error: Failed to connect to qmemman: [Errno 2] No such file or directory"
Can anyone help me properly finish this upgrade-to-3.2 process?
After rebooting as described in step #7 of "Upgrading dom0" (Reboot dom0):
Only dom0 is in an active state.
When I try to start a terminal for the debian-8 template, I get this message:
"Starting the 'debian-8' VM... ERROR: Failed to connect to qmemman: [Errno 2] No such file or directory"
So no, I didn't proceed to upgrade my TemplateVMs... I'm not able to open a terminal! (Trying to update from Qubes VM Manager gives me the same message.)
Also, I don't see how I can upgrade anything if I can't connect to a network (not even wired).
OK, I have now assigned sys-net & sys-firewall to the fedora-23 TemplateVM, but I still get the same qmemman error.
Is it normal that when I start my computer (after decryption), my desktop looks like this (this is also how it looked before the upgrade):
https://s23.postimg.org/tgs6vy8yj/006.jpg
and now (after the upgrade) looks like this :
https://s30.postimg.org/o1ot24zoh/007.jpg
In the last picture you can see that only dom0 is in an active state, with 3934 MB as MEM (it's always at 3934 MB).
Results of "sudo systemctl status qubes-qmemman" (I'm not able to transfer logs from dom0 to another computer at the moment, so I have to copy them by hand):
Active: failed (Result: exit-code)
Process: 2081 ExecStart=/usr/lib/qubes/qmemman_daemon.py (code=exited, status=1/FAILURE)
Main PID: 2081 (code=exited, status=1/FAILURE)
qmemman_daemon.py[2081]: self.connect_unixsocket(address)
qmemman_daemon.py[2081]: File "/usr/lib64/python2.7/logging/handlers.py", line 789, in connect_unixsocket
qmemman_daemon.py[2081]: self.socket.connect(address)
qmemman_daemon.py[2081]: File "/usr/lib64/python2.7/socket.py", line 228, in meth
qmemman_daemon.py[2081]: return getattr(self._sock,name)(*args)
qmemman_daemon.py[2081]: socket.error: [Errno 111] Connection refused
systemd[1]: qubes-qmemman.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: Failed to start Qubes memory management daemon.
systemd[1]: qubes-qmemman.service: Unit entered failed state.
systemd[1]: qubes-qmemman.service: Failed with result 'exit-code'.
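The traceback suggests qmemman is not failing in its own logic: Python's logging `SysLogHandler` dies while connecting to the syslog socket (normally /dev/log). The two errors in this thread match the two failure modes of a Unix datagram socket: [Errno 2] when the socket file is missing, and [Errno 111] when the file exists but nothing is listening. A minimal sketch (using temporary paths, not the real /dev/log) reproduces both:

```python
import os
import socket
import tempfile

def unix_connect_errno(path):
    """Connect a Unix datagram socket to `path` (as SysLogHandler does
    for /dev/log) and return 0 on success or the OSError errno."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        s.connect(path)
        return 0
    except OSError as e:
        return e.errno
    finally:
        s.close()

tmpdir = tempfile.mkdtemp()

# Case 1: the socket file does not exist at all.
missing = os.path.join(tmpdir, "no-such-socket")
print(unix_connect_errno(missing))   # 2 (ENOENT: No such file or directory)

# Case 2: a stale socket file exists, but no process is bound to it.
stale = os.path.join(tmpdir, "stale-socket")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
srv.bind(stale)
srv.close()  # the filesystem entry remains, but the listener is gone
print(unix_connect_errno(stale))     # 111 (ECONNREFUSED: Connection refused)
```

If this reading is right, the fix belongs on the /dev/log (journald) side rather than in qmemman itself.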
It seems to be the same error in "sudo journalctl -u qubes-qmemman".
In Qubes VM Manager, when I right-click on dom0 and then click on "VM settings", nothing happens; I can click "VM settings" indefinitely. This does not happen with other VMs.
I get this:
Job for qubes-qmemman.service failed because the control process exited with error code. See "systemctl status qubes-qmemman.service" and "journalctl -xe" for details.
"sudo systemctl status qubes-qmemman.service" returns the same kind of error as "sudo systemctl status qubes-qmemman", and "sudo journalctl -xe" is a bit long to copy; let me know if there is something specific to look for there.
[user@dom0 ~]$ ls -l /dev/log
srw-rw-rw- 1 root root 0 Dec 16 07:32 /dev/log
[user@dom0 ~]$ /run/systemd/journal/dev-log
bash: /run/systemd/journal/dev-log: Permission denied
[user@dom0 ~]$ sudo /run/systemd/journal/dev-log
sudo: /run/systemd/journal/dev-log: command not found
[user@dom0 ~]$ systemctl status systemd-journcald-dev-log.socket
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
[user@dom0 ~]$ sudo rm -rf /dev/log
[user@dom0 ~]$ systemctl status systemd-journcald-dev-log.socket
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
[user@dom0 ~]$
[user@dom0 ~]$ systemctl status systemd-journald-dev-log.socket
systemd-journald-dev-log.socket - Journal Socket (/dev/log)
Loaded: loaded (/usr/lib/systemd/system/systemd-journald-dev-log.socket; static; vendor preset: disabled)
Active: inactive (dead) since Sat 2016-12-17 00:26:10 EST; 1min 49s ago
Docs: man:systemd-journald.service(8)
man:journald.conf(5)
Listen: /run/systemd/journal/dev-log (Datagram)
Dec 17 00:26:10 dom0 systemd[1]: Stopping Journal Socket (/dev/log).
Dec 17 00:26:10 dom0 systemd[1]: systemd-journald-dev-log.socket: Socket service systemd-journald.service already active, refusing.
Dec 17 00:26:10 dom0 systemd[1]: Failed to listen on Journal Socket (/dev/log).
[user@dom0 ~]$ sudo rm -rf /dev/log
[user@dom0 ~]$ sudo systemctl restart systemd-journald-dev-log.socket
Job for systemd-journald-dev-log.socket failed. See "systemctl status systemd-journald-dev-log.socket" and "journalctl -xe" for details.
[user@dom0 ~]$
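For reference, the socket unit above listens on /run/systemd/journal/dev-log, and on such systems /dev/log is normally a symlink to that journald socket. The `srw-rw-rw-` entry listed earlier is a plain socket file, i.e. possibly a stale leftover from before the upgrade, which would explain the refused connections. A small sketch to classify the state of such a path (the labels and the stale-leftover diagnosis are my own guesses, not something confirmed in this thread):

```python
import os

def classify_devlog(path="/dev/log"):
    """Classify a /dev/log-style path: a symlink to the journald socket
    (healthy), a stale non-symlink entry, or missing entirely."""
    if os.path.islink(path):
        # Expected on systemd: /dev/log -> /run/systemd/journal/dev-log
        return "symlink -> " + os.readlink(path)
    if os.path.lexists(path):
        # A non-symlink entry is stale once journald binds elsewhere
        return "stale entry"
    return "missing"
```

If /dev/log turns out to be a stale entry, removing it and restarting systemd-journald.service together with systemd-journald-dev-log.socket (or simply rebooting) should let systemd recreate the symlink; after that, qmemman's logging should be able to connect again.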
Is there anything that can be done about my dom0 problems? At the bare minimum, I would like to recover the files from my VMs.
Thanks
I had similar problems; a clean install is the way to go in future, in my opinion. My backups restored with no problems. The only tip: if you make a lot of changes to the default templates, clone them instead and use the clone for your VMs, to ensure they get restored properly when backed up. AppVMs should be no problem.
Luckily, I was able to backup all my VMs from the state my computer was.
I did a clean install, now Qubes work as it should.
Just one thing: I want to use my backed-up "debian-8" TemplateVM. How can I replace the original "debian-8"? I'm not able to delete it, and as long as it's there, I won't be able to restore my backed-up "debian-8" TemplateVM.
All my appVMs restored without problems.
I was able to restore all VMs with this command:
"qvm-backup-restore -d sys-net --rename-conflicting /run/media/user/New\ Volume/qubesbackup/qubes-2016-12-21T080046"
So now my (backed-up) "debian-8" is "debian-81", and it had already been updated.
Some AppVMs start properly, but others don't, with this error message:
"Error starting VM: Requested operation is not valid: PCI device 0000:00:19.0 is in use by driver xenlight, domain sys-net"
Is there something I can do to access those VMs?
Thanks a lot Marek and Andrew for your help and patience. I now have access to all my AppVMs.