Can "see" but not access secondary storage?


Gaiko Kyofusho

May 15, 2017, 11:18:28 PM5/15/17
to qubes...@googlegroups.com
I managed (with help from qubes-users) to set up secondary storage on another computer, but I am having trouble with my current computer.

They are two drives set up with hardware RAID; I partitioned/formatted (ext4) them as one RAID drive, which seemed to go OK. Qubes "sees" it (actually it saw both drives for a while, but now it only sees one; I don't know what I did there). But now when I try to attach it, Qubes VM Manager shows it attached, yet it doesn't seem to be accessible from things like Nautilus. I assume I just need to partition it / set it up as LVM / encrypt it or something like that, but I'm unsure how to go about it correctly. I'd be happy to post some logs, but I'm not sure where to find them.

Unman

May 18, 2017, 3:10:35 PM5/18/17
to Gaiko Kyofusho, qubes...@googlegroups.com
When you attach the drive to a qube, open a terminal in the qube and
run dmesg. Look at the contents of /dev/ and see what xvd* devices are
there. If you have attached the whole drive you would expect to see
/dev/xvdi, and associated partitions.
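To make that inspection concrete, here is a minimal sketch; the device names are only the usual Qubes conventions (xvda-xvdd are the qube's own system volumes, and the first hot-attached device typically lands at /dev/xvdi), so what you actually see may differ:

```shell
# List the Xen block devices this qube can see right now; a whole
# attached drive usually appears as /dev/xvdi, with /dev/xvdi1 etc.
# if it carries a partition table.
ls /dev/xvd* 2>/dev/null || echo "no xvd* devices attached"

# The tail of the kernel log shows the most recent attach event
# (dmesg may require root inside some templates).
dmesg 2>/dev/null | tail -n 5
```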

You haven't said what template you are using, but it is possible that
you will have to manually mount the device/partition.

unman

Gaiko Kyofusho

May 19, 2017, 3:08:13 PM5/19/17
to Unman, qubes...@googlegroups.com
Thanks for the reply. I am not sure I 100% follow you (noobness). Other devices do show up; here is what I got:

[user@personal ~]$ cd /dev/
[user@personal dev]$ ls xvd*
xvda  xvdb  xvdc  xvdc1  xvdc2  xvdd  xvdi

Do you mean like xvdi1 xvdi2 etc?

As for the template, sorry about that: I have tried both the fedora-24 and debian-9 templates (the above is output from a Fedora AppVM).

I generally understand mounting partitions, but that does lead me to two questions:
#1 I assume I need to see more than just xvdi? (partition numbers, like I have for xvdc?)
#2 Would I have to manually mount it in each AppVM every time I want to use it? (Perhaps not a huge deal, as I eventually want it permanently used by 2-3 AppVMs and no others, so I assume that would involve some fstab tinkering?)

Unman

May 19, 2017, 5:59:01 PM5/19/17
to Gaiko Kyofusho, qubes...@googlegroups.com
I don't know if you attached the whole drive or just a partition - if
just the latter, then you would only expect to see xvdi. Try mounting
that and see what happens.
Alternatively, run fdisk or cfdisk on /dev/xvdi and see what is reported.

You can mount these during qube startup from /rw/config/rc.local or
alternatively use bind-dirs to have a per-qube fstab. (Check the docs on
bind-dirs)
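As a sketch of the rc.local route: /rw/config/rc.local runs as root at qube startup, so a couple of lines there can mount the device automatically. The device node (/dev/xvdi1) and mount point (/home/user/storage) below are assumptions for illustration:

```shell
#!/bin/sh
# /rw/config/rc.local - runs at qube startup; must be executable
# (sudo chmod +x /rw/config/rc.local).
# Assumes the drive is attached by boot time and appears as /dev/xvdi1;
# the mount point is just an example.
mkdir -p /home/user/storage
mount /dev/xvdi1 /home/user/storage
chown user:user /home/user/storage
```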

unman

Gaiko

May 28, 2017, 1:28:52 PM5/28/17
to qubes-users, gaikokuji...@gmail.com, un...@thirdeyesecurity.org
On Friday, May 19, 2017 at 5:59:01 PM UTC-4, Unman wrote:
> On Fri, May 19, 2017 at 03:07:40PM -0400, Gaiko Kyofusho wrote:

Thanks for that.

I am barely sure what I am doing at the moment, but I think I am making progress. As you mentioned, I think I was looking at (or messing with) separate partitions, so I tried to access the device directly and start over, that is:
sudo fdisk /dev/md126
I deleted the two partitions, then created a new one; it shows up as md126p1, automatically with "Linux filesystem" as the type.
I thought that I should then format it:
sudo mkfs.ext4 /dev/md126p1
but it told me:
/dev/md126p1 contains a crypto_LUKS file system
How is that? I mean, good, I guess, except I didn't set up a crypto_LUKS filesystem; or is that a default?
I tried to access it via a VM, and it now shows up in Nautilus and asks me for a password (like an encrypted device, I guess), except... I don't know the password? I tried a password; it wasn't right; then the device disappeared from Nautilus. I tried detaching/reattaching it, but it still didn't show up in Nautilus.

I am stumped (though I hope I am getting close).
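A likely explanation (my reading, not confirmed in the thread): the crypto_LUKS signature is a leftover from an earlier use of the disk. fdisk only rewrites the partition table; it does not erase the bytes where an old LUKS header happened to sit, so if the new partition starts at the same offset, tools still detect the stale signature, and Nautilus offers to "unlock" a volume that no longer has a knowable passphrase. wipefs can list and, if nothing on the disk matters, erase such signatures before mkfs. The demo below shows the mechanism on a throwaway image file (demo_wipe.img is made up; an ext4 signature stands in for the crypto_LUKS one, and no root is needed on a file):

```shell
# Stale filesystem signatures survive repartitioning; wipefs shows
# and clears them.
truncate -s 16M demo_wipe.img
mkfs.ext4 -q -F demo_wipe.img   # write an ext4 signature
wipefs demo_wipe.img            # lists the signature mkfs would warn about
wipefs -a demo_wipe.img         # erase all signatures (DESTROYS data!)
wipefs demo_wipe.img            # now prints nothing
```

On the real disk, the equivalent check would be "sudo wipefs /dev/md126p1", then "sudo wipefs -a /dev/md126p1" before running mkfs.ext4 again, but only if you are certain nothing on the drive is needed.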
