How do I mount a zfs image?


TuPari

unread,
Dec 24, 2009, 8:07:48 PM12/24/09
to zfs-fuse
I created a Solaris virtual machine using a file as the virtual disk.
How can I mount the zfs filesystem? I'm not getting anywhere with the
man pages. I used kpartx to create a device node in /dev/mapper for the
partition, but "zpool import -d /dev/mapper/" does not do anything.

Seth Heeren

unread,
Dec 28, 2009, 7:41:00 PM12/28/09
to zfs-fuse
Merry Christmas!

You would probably attract more answers with more info.

What is the VM type (vbox, xen, vmware, qemu, you-name-it)?
What is the virtual disk file format (vmdk, vhd, vdi, raw, qemu, split
vhd etc.)?
What is the kpartx command you run, and what is the output of kpartx -l?

If you then run

partprobe
blkid

what do they list?

Cheers

TuPari

unread,
Dec 28, 2009, 8:21:08 PM12/28/09
to zfs-fuse

On Dec 28, 7:41 pm, Seth Heeren <sghee...@hotmail.com> wrote:
> Merry Christmas!
>
> You would probably attract more answers with more info.
>
> What is the VM type (vbox,xen,vmware,qemu,uname-it?)
> What is the Virtual Disk File format (vmdk, vhd, vdi, raw, qemu, split
> vhd etc)

qemu, and raw.


> What is the kpartx command you run, and what is the output of kpartx -l?
>

# losetup /dev/loop1 my-solaris.img
# kpartx -av /dev/loop1

# kpartx -l /dev/loop1
loop1p1 : 0 16769024 /dev/loop1 4096


> If you then
>
> partprobe
> blkid
>
> What does it list?

It doesn't list my disk image, just other system partitions. A loop1p1
file does exist in /dev/mapper

sgheeren

unread,
Dec 28, 2009, 9:13:52 PM12/28/09
to zfs-...@googlegroups.com
TuPari wrote:
> On Dec 28, 7:41 pm, Seth Heeren <sghee...@hotmail.com> wrote:
>> What is the kpartx command you run, and what is the output of kpartx -l?
>
> # losetup /dev/loop1 my-solaris.img
> # kpartx -av /dev/loop1
>
> # kpartx -l /dev/loop1
> loop1p1 : 0 16769024 /dev/loop1 4096
>
>> partprobe
>> blkid
>>
>> What does it list?
>
> It doesn't list my disk image, just other system partitions. A loop1p1
> file does exist in /dev/mapper
Ok, thanks for the prompt and precise response. Unfortunately I do not recognize this behaviour. Although I'm not personally using qemu (anymore) and don't ever employ kpartx in this fashion (well, maybe back in 2006 once...), I'm pretty positive your approach should be correct. The only kink I can envision is the disk image being less 'raw' than advertised.

You might want to try moving the disk image onto some other filesystem (physically), but I'm getting quite superstitious there.

Be sure to also post this somewhere kpartx itself is supported,

Good luck
Seth

Fajar A. Nugraha

unread,
Dec 28, 2009, 10:09:46 PM12/28/09
to zfs-...@googlegroups.com
On Tue, Dec 29, 2009 at 8:21 AM, TuPari <goo...@jks.tupari.net> wrote:
> # losetup /dev/loop1 my-solaris.img
> # kpartx -av /dev/loop1
>
> # kpartx -l /dev/loop1
> loop1p1 : 0 16769024 /dev/loop1 4096

> A loop1p1
> file does exist in /dev/mapper

That seems wrong. Am I right in assuming that the OS was (Open)Solaris
and that the zfs is part of rpool? If so, there should be at least
one partition (the Solaris one) and several slices. zfs should be on
the first slice, not the first partition. It might be that kpartx
does not support Solaris slices.

You may have better luck importing the image as a whole disk, either
using "xm block-attach" (when using Xen) or by setting up iscsi and
doing an iscsi export-import. These let the kernel handle partitions
and slices (at least RHEL5's kernel knows how to handle Solaris slices
correctly), eliminating the need for kpartx.

--
Fajar

TuPari

unread,
Dec 29, 2009, 12:55:41 AM12/29/09
to zfs-fuse

On Dec 28, 10:09 pm, "Fajar A. Nugraha" <fa...@fajar.net> wrote:
> On Tue, Dec 29, 2009 at 8:21 AM, TuPari <goo...@jks.tupari.net> wrote:
> > # losetup /dev/loop1 my-solaris.img
> > # kpartx -av /dev/loop1
>
> > # kpartx -l /dev/loop1
> > loop1p1 : 0 16769024 /dev/loop1 4096
> > A loop1p1
> > file does exist in /dev/mapper
>
> That seems wrong. Am I right in assuming that the OS was (open)solaris
> and that the zfs is part of rpool? If yes, there should be at least

I don't really know; I just accepted the defaults when I did the
installation.

fdisk prints this for the raw image file:

Disk solaris.img: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
solaris.img1 * 1 1045 8384512 bf Solaris
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(1044, 19, 63)

Fajar A. Nugraha

unread,
Dec 29, 2009, 1:57:59 AM12/29/09
to zfs-...@googlegroups.com

Yup, it's just as I suspected then. fdisk (and apparently kpartx) can
work with partition entries, but (apparently) not with slices.

Now if you REALLY need to mount this disk on the host, and you use
qemu, your best bet is probably iscsi. That is, set up an iscsi
server to export /path/to/your/solaris.img using tgtadm (part of
scsi-target-utils on RHEL5), and then import the disk back using
iscsiadm (part of iscsi-initiator-utils). See
http://www.cyberciti.biz/tips/howto-setup-linux-iscsi-target-sanwith-tgt.html
and "man iscsiadm" for examples.
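The export-import step might look roughly like this (untested sketch; the target IQN, the tid/lun numbers and the image path are made-up example values, see the linked howto and the man pages for the real procedure on your distro):

```shell
# Export the raw image as an iSCSI LUN (scsi-target-utils).
# IQN, tid and lun below are arbitrary placeholders.
tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2009-12.com.example:solaris-disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       -b /path/to/solaris.img
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

# Import it back on the same host (iscsi-initiator-utils),
# so the kernel sees it as a regular SCSI disk with slices.
iscsiadm -m discovery -t sendtargets -p 127.0.0.1
iscsiadm -m node -T iqn.2009-12.com.example:solaris-disk1 \
         -p 127.0.0.1 --login
```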

Once you get it working, you should be able to do something like this

# zpool import -d /dev/disk/by-path/
pool: rpool
id: 12985232848676779931
state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
the '-f' flag.
see: http://www.sun.com/msg/ZFS-8000-EY
config:

rpool
ONLINE
disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.xen01.com.example:disk1-lun-1-part5
ONLINE

# zpool import -d /dev/disk/by-path/ -f rpool
cannot share 'rpool/dump': feature not implemented yet
cannot share 'rpool/vbd/test1': feature not implemented yet
cannot share 'rpool/swap': feature not implemented yet

# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 29.8G 4.42G 25.3G 14% 1.00x ONLINE -

# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 4.81G 24.5G 82K /rpool
rpool/ROOT 3.82G 24.5G 21K legacy
rpool/ROOT/opensolaris 3.82G 24.5G 3.63G /
rpool/dump 500M 24.5G 500M -
rpool/export 70K 24.5G 23K /export
rpool/export/home 47K 24.5G 23K /export/home
rpool/export/home/fajar 24K 24.5G 24K /export/home/fajar
rpool/swap 512M 24.9G 110M -
rpool/vbd 37K 24.5G 21K /rpool/vbd
rpool/vbd/test1 16K 24.5G 16K -


Note that I use /dev/disk/by-path to make sure path names stay the
same. Also, BIG WARNING: once you import the pool on another system
(like on Linux) you MIGHT not be able to boot it in your virtual
machine anymore without first booting a live CD, importing, then
exporting the pool. This is a known issue due to differences in hostid
and path.
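If you do hit that, the recovery described above would look something like this from inside the VM (a sketch; the pool name is assumed to be rpool, matching the listings earlier):

```shell
# Boot an (Open)Solaris live CD inside the virtual machine, then:
zpool import -f rpool   # -f because the pool was last used by the Linux host
zpool export rpool      # clean export clears the "last accessed by" state
# Shut down the live CD and boot the installed system normally.
```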

--
Fajar
