Mounting ZFS partition from OSv image in host


Justin Cinkelj

Jan 5, 2016, 3:58:24 PM
to osv...@googlegroups.com
I tried to mount the ZFS partition from an OSv image on the host. That is partition 2 in the usr.img file.
qemu-nbd -v -c /dev/nbd0 usr.img
zpool import
zpool import osv -R /mnt/tmp
Now I can see /mnt/tmp/zfs - but the directory is empty.
zpool export osv
usr.img is no longer bootable.

Is this to be expected? Maybe I'm using an incompatible ZFS variant (PPA from http://zfsonlinux.org/)?

The presence of the directory /mnt/tmp/zfs looks OK - 'sys_pivot_root("/zfs", "/");' in fs/vfs/main.cc does a chroot, right? That it appears empty is not OK.

Benoît Canet

Jan 5, 2016, 4:38:37 PM
to Justin Cinkelj, Osv Dev
On Tue, Jan 5, 2016 at 9:58 PM, Justin Cinkelj <justin....@xlab.si> wrote:
I tried to mount the ZFS partition from an OSv image on the host. That is partition 2 in the usr.img file.
qemu-nbd -v -c /dev/nbd0 usr.img
zpool import
zpool import osv -R /mnt/tmp
 Now I can see /mnt/tmp/zfs - but the directory is empty.
zpool export osv
 usr.img is no longer bootable.

I don't know ZFS well enough to answer.

Is this to be expected? Maybe I'm using an incompatible ZFS variant (PPA from http://zfsonlinux.org/)?

The presence of the directory /mnt/tmp/zfs looks OK - 'sys_pivot_root("/zfs", "/");' in fs/vfs/main.cc does a chroot, right? That it appears empty is not OK.

pivot_root is a technique often used for booting: a small read-only (RAMFS) filesystem mounts the real filesystem read/write somewhere in the / hierarchy, then pivot-roots it to /, so the real filesystem ends up at / and the RAMFS ends up at the mount point.
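A rough sketch of that pattern with generic Linux commands (not OSv's actual boot path; the device and directory names here are made up for illustration):

  # RAMFS is currently mounted as /; mount the real root filesystem under it
  mount /dev/vda2 /newroot
  cd /newroot
  mkdir -p oldroot
  # swap the two: /newroot becomes /, the old RAMFS ends up at /oldroot
  pivot_root . oldroot
  # the RAMFS can then be unmounted from its new location
  umount /oldroot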



Nadav Har'El

Jan 6, 2016, 3:57:39 AM
to Justin Cinkelj, Osv Dev
On Tue, Jan 5, 2016 at 10:58 PM, Justin Cinkelj <justin....@xlab.si> wrote:
I tried to mount the ZFS partition from an OSv image on the host. That is partition 2 in the usr.img file.
qemu-nbd -v -c /dev/nbd0 usr.img
zpool import
zpool import osv -R /mnt/tmp
 Now I can see /mnt/tmp/zfs - but the directory is empty.

The discussion we previously held on this list (see https://groups.google.com/d/msg/osv-dev/jvDV9RfZ-e8/Pmqgjpz4IgAJ )
suggests it's not enough to "zpool import" the pool; you also need to mount it.

I never actually tried this myself.
 
zpool export osv
 usr.img is no longer bootable.

Is this to be expected?

No, that doesn't sound expected. What does "not bootable" mean - what doesn't work? Note that the OSv kernel itself isn't inside the ZFS partition, so if the OSv kernel is no longer booting, then something is seriously wrong, perhaps unrelated to ZFS. Did you kill qemu-nbd gently?
 
Maybe I'm using an incompatible ZFS variant (PPA from http://zfsonlinux.org/)?

To be honest, I don't know how careful the ZFS developers are about preserving their on-disk format, and if it's possible that one day (or today...) other ZFS implementations will not be able to share a disk with OSv's implementation... But the empty directory and boot failures seem like a more basic issue, not minor incompatibilities.

Justin Cinkelj

Jan 6, 2016, 5:14:16 AM
to Nadav Har'El, Osv Dev
Yes, I figured out that this works:
sudo qemu-nbd -v -c /dev/nbd0 usr.img
zpool import osv -N
zpool status
zfs list
  NAME      USED  AVAIL  REFER  MOUNTPOINT
  osv      17.0M  9.61G    32K  /
  osv/zfs  16.9M  9.61G  16.9M  /zfs
zfs mount osv/zfs
ll /zfs/  # files are shown
umount /zfs
zpool export osv
Ctrl+C in qemu-nbd (I have to try a nicer disconnect)
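The nicer disconnect would presumably be qemu-nbd's disconnect option (which appears later in this thread):
  sudo qemu-nbd -d /dev/nbd0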

But when I now try to boot usr.img, I get:
sudo ./scripts/run.py -v -n -V -i build/last/usr.img --novnc
OSv v0.24-16-g4235192
eth0: 192.168.122.210
Failed to load object: /cli/cli.so. Powering off.
That is what I meant by boot failure.

The host can still access files in usr.img. So I guess maybe only OSv's ZFS code gets a bit confused and doesn't really mount osv/zfs.

BR justin

Justin Cinkelj

Jan 6, 2016, 5:24:14 AM
to Nadav Har'El, Osv Dev
Interesting that if I only do:
(term1) qemu-nbd -v -c /dev/nbd0 usr.img
(term2) qemu-nbd -d /dev/nbd0

Then I get in term1:
NBD device /dev/nbd0 is now connected to usr.img
/build/qemu-Ee59aw/qemu-2.0.0+dfsg/nbd.c:nbd_trip():L1031: From: 18446744073709551104, Len: 0, Size: 10737418240, Offset: 0
/build/qemu-Ee59aw/qemu-2.0.0+dfsg/nbd.c:nbd_trip():L1032: requested operation past EOF--bad client?

I wasn't immediately sure if this is relevant at all. But as the image is still bootable, it is not relevant.

Nadav Har'El

Jan 6, 2016, 10:38:32 AM
to Justin Cinkelj, Osv Dev
On Wed, Jan 6, 2016 at 12:14 PM, Justin Cinkelj <justin....@xlab.si> wrote:
Yes, I figured out that this works:
sudo qemu-nbd -v -c /dev/nbd0 usr.img
zpool import osv -N
zpool status
zfs list
  NAME      USED  AVAIL  REFER  MOUNTPOINT
  osv      17.0M  9.61G    32K  /
  osv/zfs  16.9M  9.61G  16.9M  /zfs
zfs mount osv/zfs
ll /zfs/  # files are shown
umount /zfs
zpool export osv
Ctrl+C in qemu-nbd (I have to try a nicer disconnect)

But when I now try to boot usr.img, I get:
sudo ./scripts/run.py -v -n -V -i build/last/usr.img --novnc
OSv v0.24-16-g4235192
eth0: 192.168.122.210
Failed to load object: /cli/cli.so. Powering off.
That is what I meant by boot failure.

I see. So OSv can boot, but can't find the files on ZFS (including /cli/cli.so).

Strange that run.py with -V doesn't provide a lot more debugging messages before getting to this "Failed to load object".

I'm not ZFS-savvy enough to understand why, after what you did, OSv can no longer find /cli/cli.so.

A wild guess: Is that "zpool export osv" thing at the end actually necessary? Maybe it tells OSv not to mount it next time, because supposedly it was "exported" out of the system?

Justin Cinkelj

Jan 6, 2016, 2:13:03 PM
to Nadav Har'El, Osv Dev
As far as I could see, accessing ZFS from the host changed the ZFS metadata to refer to /dev/nbd0 instead of /dev/vblk0.
After adding some logging here and there, vdev_disk_open() is called with this parameter:
ZFS vdev_disk.c:63 vdev_disk_open vd->vdev_path=/dev/nbd0p2

So the Linux code transparently handled the "import foreign disk" case, and OSv later could not find /dev/nbd0.
If I just try 'hexdump -C /dev/nbd0 | grep -e nbd -e vblk', there are quite a few references to nbd0 (a sort of backup "superblock", I guess).

A shame I have no idea if it is possible to 'rename' nbd0/nbd0p2 back to vblk0/vblk0.1.
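If it helps, one way to inspect what path the pool labels now record - assuming the zdb tool from the same ZFS-on-Linux package is available - might be:

  # dump the vdev labels of the ZFS partition; the "path:" field should show
  # whether they point at /dev/nbd0p2 or still at /dev/vblk0.1
  zdb -l /dev/nbd0p2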

Nadav Har'El

Jan 7, 2016, 2:20:54 AM
to Justin Cinkelj, Osv Dev
On Wed, Jan 6, 2016 at 9:12 PM, Justin Cinkelj <justin....@xlab.si> wrote:
As far as I could see, accessing ZFS from the host changed the ZFS metadata to refer to /dev/nbd0 instead of /dev/vblk0.

As I said, I'm far from being a ZFS expert - maybe Raphael or Avi are reading this, and can offer a more authoritative answer - obviously better than my guesses.
But my *guess* is that just like you had to "zpool import" the pool on the host before mounting it, perhaps you also need to "zpool import" it on OSv to get the disks back to its control?

The normal boot code in fs/vfs/main.cc seems to only "mount" the zfs, but not "zpool import" it. It only calls "zpool import" on other devices (if there are any), not on vblk0.1. I have no idea why.

You can also try to run "run.py -e '-nomount /zpool.so ...'" manually to perhaps run an appropriate zpool command, if really needed. You can see how we use "zpool" in fs/vfs/main.cc, and in tools/mkfs/mkfs.cc - or how you used it on the host.
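For example, something along these lines (a purely hypothetical invocation - the exact arguments that /zpool.so expects here are a guess, not verified):

  ./scripts/run.py -e '-nomount /zpool.so zpool import osv'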

Good luck!
 
After adding some logging here and there, vdev_disk_open() is called with param:
ZFS vdev_disk.c:63 vdev_disk_open vd->vdev_path=/dev/nbd0p2

So the Linux code transparently handled the "import foreign disk" case,

I'm not sure if "transparently" is the right word - you actually explicitly called "zpool import" on Linux, and I'm guessing this looked at all the devices in /dev to find this /dev/nbd0p2 thing.

Justin Cinkelj

Jan 7, 2016, 12:46:48 PM
to Nadav Har'El, Osv Dev
Nadav, thank you for the tip to look at mkfs.cc.
I tried to export/import, print pool status, etc., just before/after the 'zpool create' call.

An export attempt just after 'zpool create' says:
cannot export 'osv': pool is busy
After rerunning the same image (i.e. ZFS was already created), the import part says:
cannot import 'osv': no such pool available

I plan to stop looking at that (and sorry for stealing your time).

The changed code in mkfs.cc (just FYI):
    int run_ret;
    // Probe pool state before creating the pool. On a rerun of the image
    // (ZFS already created), the import attempt below is the one that says
    // "cannot import 'osv': no such pool available".
    fprintf(stderr, "/*---------------------------*/\n");
    run_cmd("/zpool.so", {"zpool", "status"});
    fprintf(stderr, "/*---------------------------*/\n");
    run("/zpool.so", {"zpool", "import", "osv"}, &run_ret);
    fprintf(stderr, "/*---------------------------*/\n");
    run_cmd("/zpool.so", {"zpool", "status"});
    fprintf(stderr, "/*---------------------------*/\n");
    run("/zfs.so", {"zfs", "list"}, &run_ret);
    fprintf(stderr, "/*---------------------------*/\n");

    fprintf(stderr, "/*------*/\n");
    // Create zpool named osv
    run_cmd("/zpool.so", zpool_args);
    fprintf(stderr, "/*------*/\n");

    // Try an export/import cycle right after creation; the export is the
    // step that fails with "cannot export 'osv': pool is busy".
    fprintf(stderr, "/*---------------------------*/\n");
    //run("/zfs.so", {"zfs", "umount", "-a"}, &run_ret);
    //run("/zfs.so", {"zfs", "umount", "osv"}, &run_ret);
    //run("/zfs.so", {"zfs", "umount", "zfs"}, &run_ret);
    run("/zpool.so", {"zpool", "export", "osv"}, &run_ret);
    fprintf(stderr, "/*---------------------------*/\n");
    run("/zpool.so", {"zpool", "import", "osv"}, &run_ret);
    fprintf(stderr, "/*---------------------------*/\n");
    run_cmd("/zpool.so", {"zpool", "status"});
    fprintf(stderr, "/*---------------------------*/\n");
    run("/zfs.so", {"zfs", "list"}, &run_ret);
    fprintf(stderr, "/*---------------------------*/\n");