You would probably attract more answers with more info.
What is the VM type (vbox, xen, vmware, qemu... you name it)?
What is the Virtual Disk File format (vmdk, vhd, vdi, raw, qemu, split vhd, etc.)?
What is the kpartx command you run, and what is the output of kpartx -l?
If you then
partprobe
blkid
What does it list?
Cheers
On Dec 28, 7:41 pm, Seth Heeren <sghee...@hotmail.com> wrote:
> Merry Christmas!
>
> You would probably attract more answers with more info.
>
> What is the VM type (vbox, xen, vmware, qemu... you name it)?
> What is the Virtual Disk File format (vmdk, vhd, vdi, raw, qemu, split vhd, etc.)?
qemu, and raw.
> What is the kpartx command you run, and what is the output of kpartx -l?
>
# losetup /dev/loop1 my-solaris.img
# kpartx -av /dev/loop1
# kpartx -l /dev/loop1
loop1p1 : 0 16769024 /dev/loop1 4096
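For what it's worth, a quick sanity check of that kpartx -l line, assuming 512-byte sectors and the usual field order (name : start length device offset):

```shell
# Interpret "loop1p1 : 0 16769024 /dev/loop1 4096":
# the partition is 16769024 sectors long and starts 4096 sectors into the image.
SECTOR=512
LENGTH_SECTORS=16769024   # partition length in sectors
OFFSET_SECTORS=4096       # partition start offset within the image, in sectors
echo "offset bytes: $((OFFSET_SECTORS * SECTOR))"   # 2 MiB into the image
echo "length bytes: $((LENGTH_SECTORS * SECTOR))"   # roughly an 8 GB partition
```

So the mapping itself looks sane; the partition table is being read, it's what's inside the partition that isn't showing up.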
> If you then
>
> partprobe
> blkid
>
> What does it list?
It doesn't list my disk image, just other system partitions. A loop1p1
file does exist in /dev/mapper.
> A loop1p1
> file does exist in /dev/mapper
That seems wrong. Am I right in assuming that the OS was (Open)Solaris
and that the zfs is part of rpool? If yes, there should be at least
one partition (the Solaris one) and several slices. zfs should be on
the first slice, not the first partition. It's possible that kpartx
does not support Solaris slices.
You may have better luck importing the image as a disk, either using "xm
block-attach" (when using Xen) or by setting up an iSCSI
export/import. These let the kernel handle partitions and slices
(at least RHEL5's kernel knows how to handle Solaris slices correctly),
thus eliminating the need for kpartx.
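The "xm block-attach" route can be sketched like this (hedged: the domain ID, xvdb device name, and image path here are illustrative, not from the post, and the commands need root on a Xen dom0):

```shell
# Attach the raw image to dom0 (domain 0) as a block device, read-write,
# so the dom0 kernel parses the partition table and slices itself.
xm block-attach 0 file:/path/to/my-solaris.img /dev/xvdb w

# ... inspect /dev/xvdb* on the host, then detach when done:
xm block-detach 0 /dev/xvdb
```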
--
Fajar
On Dec 28, 10:09 pm, "Fajar A. Nugraha" <fa...@fajar.net> wrote:
> On Tue, Dec 29, 2009 at 8:21 AM, TuPari <goo...@jks.tupari.net> wrote:
> > # losetup /dev/loop1 my-solaris.img
> > # kpartx -av /dev/loop1
>
> > # kpartx -l /dev/loop1
> > loop1p1 : 0 16769024 /dev/loop1 4096
> > A loop1p1
> > file does exist in /dev/mapper
>
> That seems wrong. Am I right in assuming that the OS was (open)solaris
> and that the zfs is part of rpool? If yes, there should be at least
I don't really know; I just accepted the defaults when I did the
installation.
fdisk prints this for the raw image file:
Disk solaris.img: 0 MB, 0 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
solaris.img1 * 1 1045 8384512 bf Solaris
Partition 1 has different physical/logical endings:
phys=(1023, 254, 63) logical=(1044, 19, 63)
Yup, just as I suspected then: fdisk (and apparently kpartx) can
work with partition entries, but not with slices.
Now if you REALLY need to mount this disk on the host, and you use
qemu, your best bet is probably iSCSI. That is, set up an iSCSI
target to export /path/to/your/solaris.img using tgtadm (part of
scsi-target-utils on RHEL5), and then import the disk back using
iscsiadm (part of iscsi-initiator-utils). See
http://www.cyberciti.biz/tips/howto-setup-linux-iscsi-target-sanwith-tgt.html
and "man iscsiadm" for examples.
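Roughly like this (the target name, tid, and portal are made-up examples; the commands need root and a running tgtd):

```shell
# On the host: export the image as an iSCSI target (RHEL5-era tgtadm).
tgtadm --lld iscsi --op new --mode target --tid 1 \
    --targetname iqn.2009-12.example:solaris
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    --backing-store /path/to/your/solaris.img
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL

# Then import it back on the same host:
iscsiadm -m discovery -t sendtargets -p 127.0.0.1
iscsiadm -m node -T iqn.2009-12.example:solaris -p 127.0.0.1 --login
```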
Once you get it working, you should be able to do something like this:
# zpool import -d /dev/disk/by-path/
pool: rpool
id: 12985232848676779931
state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
the '-f' flag.
see: http://www.sun.com/msg/ZFS-8000-EY
config:
rpool
ONLINE
disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.xen01.com.example:disk1-lun-1-part5
ONLINE
# zpool import -d /dev/disk/by-path/ -f rpool
cannot share 'rpool/dump': feature not implemented yet
cannot share 'rpool/vbd/test1': feature not implemented yet
cannot share 'rpool/swap': feature not implemented yet
# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 29.8G 4.42G 25.3G 14% 1.00x ONLINE -
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 4.81G 24.5G 82K /rpool
rpool/ROOT 3.82G 24.5G 21K legacy
rpool/ROOT/opensolaris 3.82G 24.5G 3.63G /
rpool/dump 500M 24.5G 500M -
rpool/export 70K 24.5G 23K /export
rpool/export/home 47K 24.5G 23K /export/home
rpool/export/home/fajar 24K 24.5G 24K /export/home/fajar
rpool/swap 512M 24.9G 110M -
rpool/vbd 37K 24.5G 21K /rpool/vbd
rpool/vbd/test1 16K 24.5G 16K -
Note that I use /dev/disk/by-path to make sure path names stay the
same. Also, BIG WARNING: once you import the pool on another system
(like on Linux) you MIGHT not be able to boot it in your virtual
machine anymore without first booting a live CD, importing, then
exporting the pool. This is a known issue due to differences in hostid
and path.
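That live-CD dance is short; a hedged sketch, assuming an (Open)Solaris live CD shell and the rpool name from the listing above:

```shell
# Booted from the (Open)Solaris live CD, after the pool was touched on Linux:
zpool import -f rpool   # -f overrides the "last accessed by another system" check
zpool export rpool      # leaves a clean export so the installed system can import at boot
```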
--
Fajar