
ZFS: I/O error - blocks larger than 16777216 are not supported


KIRIYAMA Kazuhiko

Jun 20, 2018, 11:34:54 PM
Hi all,

I previously reported a problem with ZFS boot being disabled [1], and found
that this issue arises from the RAID configuration [2]. So I rebuilt with
RAID5 and re-installed 12.0-CURRENT (r333982), but it failed to boot with:

ZFS: i/o error - all block copies unavailable
ZFS: can't read MOS of pool zroot
gptzfsboot: failed to mount default pool zroot

FreeBSD/x86 boot
ZFS: I/O error - blocks larger than 16777216 are not supported
ZFS: can't find dataset u
Default: zroot/<0x0>:

In this case, the reason is "blocks larger than 16777216 are not supported",
and I guess this means that datasets with a recordsize greater than 8GB are
NOT supported by the FreeBSD boot loader (zpool-features(7)). Is that true?

My zpool features are as follows:

# kldload zfs
# zpool import
pool: zroot
id: 13407092850382881815
state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
the '-f' flag.
see: http://illumos.org/msg/ZFS-8000-EY
config:

zroot ONLINE
mfid0p3 ONLINE
# zpool import -fR /mnt zroot
# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 19.9T 129G 19.7T - 0% 0% 1.00x ONLINE /mnt
# zpool get all zroot
NAME PROPERTY VALUE SOURCE
zroot size 19.9T -
zroot capacity 0% -
zroot altroot /mnt local
zroot health ONLINE -
zroot guid 13407092850382881815 default
zroot version - default
zroot bootfs zroot/ROOT/default local
zroot delegation on default
zroot autoreplace off default
zroot cachefile none local
zroot failmode wait default
zroot listsnapshots off default
zroot autoexpand off default
zroot dedupditto 0 default
zroot dedupratio 1.00x -
zroot free 19.7T -
zroot allocated 129G -
zroot readonly off -
zroot comment - default
zroot expandsize - -
zroot freeing 0 default
zroot fragmentation 0% -
zroot leaked 0 default
zroot feature@async_destroy enabled local
zroot feature@empty_bpobj active local
zroot feature@lz4_compress active local
zroot feature@multi_vdev_crash_dump enabled local
zroot feature@spacemap_histogram active local
zroot feature@enabled_txg active local
zroot feature@hole_birth active local
zroot feature@extensible_dataset enabled local
zroot feature@embedded_data active local
zroot feature@bookmarks enabled local
zroot feature@filesystem_limits enabled local
zroot feature@large_blocks enabled local
zroot feature@sha512 enabled local
zroot feature@skein enabled local
zroot unsupported@com.delphix:device_removal inactive local
zroot unsupported@com.delphix:obsolete_counts inactive local
zroot unsupported@com.delphix:zpool_checkpoint inactive local
#
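
For reference, the per-dataset recordsize could also be checked from the
imported pool to see whether anything exceeds the default 128K (a sketch,
output omitted):

# zfs get -r recordsize zroot     # recordsize of every dataset; >128K would need feature@large_blocks active
# zpool get bootfs zroot          # which dataset the loader is expected to boot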

Regards

[1] https://lists.freebsd.org/pipermail/freebsd-current/2018-March/068886.html
[2] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=151910

---
KIRIYAMA Kazuhiko

Allan Jude

Jun 21, 2018, 1:36:54 AM
I am guessing it means something is corrupt, as 16MB is the maximum size of
a record in ZFS. Also, the 'large_blocks' feature is 'enabled', not 'active',
so this suggests you do not have any records larger than 128KB on your pool.
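
If corruption is the suspicion, one way to check from the system where the
pool was imported (a sketch, assuming the import under /mnt shown earlier):

# zpool scrub zroot        # re-read and verify every allocated block
# zpool status -v zroot    # after the scrub completes, lists any files with unrecoverable errors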

--
Allan Jude


Toomas Soome

Jun 21, 2018, 2:19:15 AM
Yes indeed, the value printed is 1 << 24, which is the current limit; however, I would start with reinstalling gptzfsboot on the freebsd-boot partition.
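
A rough sketch of reinstalling the bootcode with gpart; the disk name (mfid0,
taken from the pool output above) and the freebsd-boot partition index (-i 1)
are assumptions, so confirm them with gpart show first:

# gpart show mfid0                                              # find the freebsd-boot partition index
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfid0   # rewrite the protective MBR and gptzfsboot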

rgds,
toomas

KIRIYAMA Kazuhiko

Jun 21, 2018, 2:49:35 AM
As I mentioned above, [2] says that ZFS on RAID disks has serious bugs except
for mirror configurations. Anyway, I have given up on using ZFS on RAID{5,6}*
until Bug 151910 [2] is fixed.


Toomas Soome

Jun 21, 2018, 3:52:34 AM

If you boot from a USB stick (or CD), press Esc at the boot loader menu and enter lsdev -v. What sector and disk sizes are reported?

The issue [2] is a mix of ancient FreeBSD (v8.1 is mentioned there) and RAID LUNs with a 512B sector size and 15TB!!! total size - are you really sure your BIOS can actually address a 15TB LUN (with a 512B sector size)? Note that the problem with large disks can hide itself until the pool has filled up enough that essential files are stored above the addressable limit… meaning you may have a "perfectly working" setup until, at some point after an update, it suddenly stops working.

Note that for the boot loader we have only INT13h in the BIOS version, and it really is limited. The UEFI version uses the EFI_BLOCK_IO API, which usually handles large sectors and disk sizes better.
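
From a system booted off the USB stick, the sector and media sizes the kernel
sees can also be checked with diskinfo (the device name here is an assumption):

# diskinfo -v mfid0        # reports sectorsize, mediasize in bytes and in sectors, stripesize, etc.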

rgds,
toomas
