I have disk images from a Solaris x86 system (I believe version 10)
which I would like to mount under Linux. The images were obtained
using dd. I currently have version 0.7.0 of zfs-fuse installed on a
Debian virtual machine (via the Sid package). My virtual machine has
been given access to the dd images through VMware, so the disk
images appear as /dev/sd* block devices.
I am unable to get the devices recognized. I have tried:
zpool import
zpool import -d /dev
zpool import -f -d /dev/block
zpool import -f -d /dev/disk/by-path
as well as other variations, but nothing is ever recognized. Is
there an easy way to verify that these are indeed ZFS images?
Assuming they are, is it possible that an unsupported pool version
is in use? How would I check for this?
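Apart from running "zdb -l /dev/sdc" (zdb ships with zfs-fuse), one low-level check I can think of, assuming the pools were written on little-endian x86: every ZFS uberblock starts with the magic 0x00bab10c, and the first label's uberblock array sits 128 KiB into the vdev. A sketch of the scan, demonstrated on a scratch file standing in for the real device (the file name and 4 MiB scan window are placeholders):

```shell
# Scratch file standing in for the real device (e.g. /dev/sdc), with
# the uberblock magic planted at the 128 KiB label offset for the demo.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=4 2>/dev/null
printf '\014\261\272\000' | dd of="$IMG" bs=1 seek=$((128 * 1024)) conv=notrunc 2>/dev/null

# 0x00bab10c stored little-endian is the byte sequence 0c b1 ba 00.
# Uberblocks are 1 KiB aligned, so the pattern starts an od output line.
found=$(dd if="$IMG" bs=1M count=4 2>/dev/null | od -A x -t x1 | grep '0c b1 ba 00' | head -n1)
echo "$found"
rm -f "$IMG"
```

On a real device, a hit from this scan (or zdb managing to unpack a label) would confirm ZFS; zdb's label dump also prints the pool version.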
Thanks much,
tim
--
To post to this group, send email to zfs-...@googlegroups.com
To visit our Web site, click on http://zfs-fuse.net/
> If the images come from a whole disk, it might be tricky.
Understood. I can certainly set up loopback devices with the
appropriate offsets if necessary.
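For instance (the start sector is hypothetical; losetup wants a byte offset, so the sector number gets multiplied by the 512-byte sector size, and -r keeps the mapping read-only):

```shell
# Hypothetical slice start inside a whole-disk image.
START_SECTOR=16065
OFFSET=$((START_SECTOR * 512))
echo "$OFFSET"    # byte offset to hand to losetup
# These need root, so just sketched here:
# losetup -r -o "$OFFSET" /dev/loop0 disk.img
# zpool import -d /dev
```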
> If it's a partition image, then now even parted can tell you if the
> partition is zfs or not.
Ok, this is what I just tried:
# parted -l
Error: /dev/sdb: unrecognised disk label
Error: /dev/sdc: unrecognised disk label
Error: /dev/sdd: unrecognised disk label
Error: /dev/sde: unrecognised disk label
Error: /dev/sdf: unrecognised disk label
Error: /dev/sdg: unrecognised disk label
# parted -v
parted (GNU parted) 2.3
...
Some of those do look like UFS, but the rest are supposedly ZFS.
Using sleuthkit's mmls tool on /dev/sdc, I get:
# mmls /dev/sdc
Sun Volume Table of Contents (Solaris)
Offset Sector: 0
Units are in 512-byte sectors
Slot Start End Length Description
00: 00 0000000000 0031374944 0031374945 / (0x02)
01: ----- 0031374945 0031407074 0000032130 Unallocated
Not really sure what to make of it. I believe these systems may have
been running under LDOMs...
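Working the numbers from that table (sector counts copied from the mmls output above):

```shell
# Slice 00 ("/") starts at sector 0, so no losetup offset would be
# needed for it; its length and the unallocated tail in bytes:
SLICE_SECTORS=31374945
TAIL_SECTORS=32130
SLICE_BYTES=$((SLICE_SECTORS * 512))
TAIL_BYTES=$((TAIL_SECTORS * 512))
echo "slice: $SLICE_BYTES bytes"   # ~16 GB
echo "tail:  $TAIL_BYTES bytes"    # ~15.7 MiB
```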
tim
> # parted -l
> Error: /dev/sdb: unrecognised disk label
> Solaris creates poorly formed GPT labels that are not recognized by most
> Linux utilities, which can cause this kind of error when a whole-disk pool
> is moved from Solaris to zfs-fuse or zfs-linux. Technical details are here:
>
> https://github.com/zfsonlinux/zfs/issues/344
>
> You can manually rewrite the GPT label according to the ticket, or you can
> take each vdev offline, clear it, and do an in-place replace.
Thanks for the helpful info. Unfortunately, I just tried gdisk and
it doesn't report the malformed GPT described in that ticket at all:
# gdisk /dev/sdc
GPT fdisk (gdisk) version 0.8.1
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present
It is certainly possible that these disk images were obtained in an
unusual way. I'm honestly not sure if they were obtained from
partitions, volumes, or perhaps even at some layer below a
RAID/mirror. I do know that file contents that I want are in the
images though, based on a little browsing with a hex editor.
Unfortunately, taking another image is not really an option right
now, so I need to figure out how to get these mounted. I can
certainly modify a copy of the disks as needed to get them
recognized, if I can just figure out what needs to be done. I only
need read access to the data.
I'm going to read up on GPT and search the images for a GPT
signature that may turn up at an unexpected offset.
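A sketch of that search, demonstrated on a scratch file: a primary GPT header normally sits at LBA 1 (byte offset 512) and begins with the ASCII signature "EFI PART", with a backup copy in the last sector.

```shell
# Plant the signature at LBA 1 of a scratch file for the demo, then
# locate it; grep -b reports the byte offset of each match, so a GPT
# shifted to an odd offset in a real image would still show up.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=512 count=64 2>/dev/null
printf 'EFI PART' | dd of="$IMG" bs=1 seek=512 conv=notrunc 2>/dev/null
hit=$(grep -a -b -o 'EFI PART' "$IMG")
echo "$hit"    # 512:EFI PART
rm -f "$IMG"
```

Against the real /dev/sd* devices the same grep would be slow but should report every copy of the signature and where it sits.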
Thanks again,
tim
Thanks for the suggestions, I'll keep them in mind.
Unfortunately I'm not in a position to be able to reimage the disks or
to use zfs/zpool tools from the original host. I basically have to
deal with the images I was given, as getting new images is a
logistical problem.
I'm going to try to search for some partition/filesystem headers.
Failing a simple loopback offset fix, I'll probably have to set up
OpenSolaris and hope I can import the pools there.
thanks,
tim
How?
> I'm honestly not sure if they were obtained from
> partitions, volumes, or perhaps even at some layer below a
> RAID/mirror. I do know that file contents that I want are in the
> images though, based on a little browsing with a hex editor.
So you didn't dump them yourself?
> Unfortunately, performing another image is not really an option right
> now, so I need to figure out how to get them mounted. I definitely
> can modify a copy of the disks as needed to get them recognized, if I
> can just figure out what needs to be done. I only need read access to
> the data.
If you dump either:
- the disk
- the partition containing Solaris slices
- the slice containing ZFS
you should be able to get it recognized by Linux. If you need a "fake"
MBR (plus an fdisk partition table), VirtualBox should be able to help
you.
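An alternative to VirtualBox for the fake-MBR idea, assuming root on the Linux box: stitch a small header in front of the slice image with device-mapper "linear" targets, then write an MBR whose first partition starts where the image begins. The device names, sector counts, and file names below are all hypothetical; only the table generation runs without privileges:

```shell
HDR_SECTORS=2048          # 1 MiB of zeros to hold the fake MBR
IMG_SECTORS=31374945      # size of the slice image, in 512-byte sectors
# dmsetup "linear" table format: <start> <length> linear <device> <offset>
printf '0 %d linear /dev/loop2 0\n%d %d linear /dev/loop3 0\n' \
    "$HDR_SECTORS" "$HDR_SECTORS" "$IMG_SECTORS" > fake.table
cat fake.table
# Needs root: loop-mount a 1 MiB header file and the slice image on
# loop2/loop3, then:
# dmsetup create fakedisk fake.table
# ...and point fdisk at /dev/mapper/fakedisk, creating partition 1
# starting at sector $HDR_SECTORS.
```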
If the disk was previously managed by a hardware RAID card (even if
it's just encapsulated as a single-drive RAID0), then it won't be
easy. You need to find out how that card labels the disk and
reserves space, and adjust accordingly.
In any case you need to know how you dumped the disk.
--
Fajar