
mountroot prompt with error 2 when trying to boot from a single drive in a 2-way mirror


yudi v

Apr 14, 2015, 8:36:09 AM
Hi all,

I was testing a recovery scenario by removing one of the drives in a 2-way
mirror, but the system fails to boot and drops to the mountroot prompt
with error 2. When I reconnect the second drive, it boots fine again.

Any suggestions on what the problem might be?

It's a simple root-on-ZFS setup (9.1, recently upgraded to 10.1) with two
disks in a mirror config. Each disk has 3 partitions: the first has the
boot code, the second has swap, and the third has the OS.

The ZFS pool is set up on the 3rd partition of each of the two disks.
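
For context, a layout like the one described is typically created along
these lines; the device names, sizes, and pool name here are illustrative,
not the exact commands used on this system:

```sh
# Sketch of the partitioning described above (illustrative, run per disk).
gpart create -s gpt ada2
gpart add -t freebsd-boot -s 512k ada2     # p1: boot code
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
gpart add -t freebsd-swap -s 4G ada2       # p2: swap
gpart add -t freebsd-zfs ada2              # p3: OS
# ...same for ada3, then mirror the third partitions:
zpool create osysPool mirror /dev/ada2p3 /dev/ada3p3
```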

--
Kind regards,
Yudi
_______________________________________________
freebsd-...@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questi...@freebsd.org"

Matthew Seaman

Apr 14, 2015, 9:18:10 AM
On 2015/04/14 13:35, yudi v wrote:
> I was testing recovery scenario by removing one of the drives in a 2-way
> mirror, but the system fails to boot and comes up with the mountroot prompt
> with error 2. When I reconnect the second drive, it boots fine again.
>
> Any suggestions on what the problem might be?
>
> it's a simple root-on-ZFS setup (9.1 upgraded to 10.1 recently) with two
> disks in mirror config.
> each disk has 3 partitions, first one has the boot code, second has the
> swap, third has the OS.
>
> and the zfs pool is setup on the 3rd partition of the two disks.

Check the BIOS settings -- there will be a list giving the order of
preference for devices to boot from. Frequently you'll find there is
one slot for 'Harddrive' and you get to select just one of the drives
attached to the system to boot from. In this case, simply telling it to
use the other disk should allow you to boot. Otherwise, if your bios
allows you to specify several hard drives, then reordering the drives in
the preference list might make it work. This last really shouldn't be
necessary, but not all BIOSes are created equal.

Cheers,

Matthew




yudi v

Apr 15, 2015, 1:18:59 AM
Hi,

It's not the BIOS settings; I checked. It picks up the other HDD in the
mirror and gets through the boot code, but then fails to boot into the
ZFS root pool.
The error is:

Trying to mount root from zfs:osysPool/ROOT/default []...

Mounting from zfs:osysPool/ROOT/default failed with error 6.

It seems to be something to do with a GUID mismatch between ada2p3 and
ada3p3, though I'm not sure why it's even comparing them, as they are the
two partitions in the mirror.
Please see the images below for the relevant console messages.
screen1:
https://drive.google.com/file/d/1Q-F-8kF-Nevn5ijvFXLNuvtJOuRn7ztO2Q/view?usp=sharing
screen2:
https://drive.google.com/file/d/1ZGseshS0Uk0cc6Gli_-tywHNXO7sLQ_aVw/view?usp=sharing

I think for this to work, /dev/ada2p3 (which has GUID
2114803205502328891) should be attached, but it ends up attaching
/dev/ada2 with GUID 15791103587254396721 (the GUID for ada3p3). ada3 is
the drive I am disconnecting for this test.

Output from zdb -l /dev/ada2:
=================================================================================

--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
version: 5000
name: 'osysPool'
state: 0
txg: 30644
pool_guid: 3008044207603099329
hostid: 1990654128
hostname: ''
top_guid: 16302517322241353808
guid: 2114803205502328891
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 16302517322241353808
metaslab_array: 33
metaslab_shift: 29
ashift: 9
asize: 70355779584
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 2114803205502328891
path: '/dev/ada2p3'
phys_path: '/dev/ada2p3'
whole_disk: 1
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 15791103587254396721
path: '/dev/ada3p3'
phys_path: '/dev/ada3p3'
whole_disk: 1
create_txg: 4
features_for_read:
--------------------------------------------
LABEL 3
--------------------------------------------
version: 5000
name: 'osysPool'
state: 0
txg: 30644
pool_guid: 3008044207603099329
hostid: 1990654128
hostname: ''
top_guid: 16302517322241353808
guid: 2114803205502328891
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 16302517322241353808
metaslab_array: 33
metaslab_shift: 29
ashift: 9
asize: 70355779584
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 2114803205502328891
path: '/dev/ada2p3'
phys_path: '/dev/ada2p3'
whole_disk: 1
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 15791103587254396721
path: '/dev/ada3p3'
phys_path: '/dev/ada3p3'
whole_disk: 1
create_txg: 4
features_for_read:


Output from zdb -l /dev/ada3:
==============================================================================

--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
version: 5000
name: 'osysPool'
state: 0
txg: 30644
pool_guid: 3008044207603099329
hostid: 1990654128
hostname: ''
top_guid: 16302517322241353808
guid: 15791103587254396721
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 16302517322241353808
metaslab_array: 33
metaslab_shift: 29
ashift: 9
asize: 70355779584
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 2114803205502328891
path: '/dev/ada2p3'
phys_path: '/dev/ada2p3'
whole_disk: 1
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 15791103587254396721
path: '/dev/ada3p3'
phys_path: '/dev/ada3p3'
whole_disk: 1
create_txg: 4
features_for_read:
--------------------------------------------
LABEL 3
--------------------------------------------
version: 5000
name: 'osysPool'
state: 0
txg: 30644
pool_guid: 3008044207603099329
hostid: 1990654128
hostname: ''
top_guid: 16302517322241353808
guid: 15791103587254396721
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 16302517322241353808
metaslab_array: 33
metaslab_shift: 29
ashift: 9
asize: 70355779584
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 2114803205502328891
path: '/dev/ada2p3'
phys_path: '/dev/ada2p3'
whole_disk: 1
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 15791103587254396721
path: '/dev/ada3p3'
phys_path: '/dev/ada3p3'
whole_disk: 1
create_txg: 4
features_for_read:
====================================================================================
Is anything amiss in the above label info for these two drives?
I have used these two drives for testing before, reinstalling the OS and
recreating the pools.
Any suggestions on how to fix this?

Thanks
Yudi


Fabian Keil

Apr 15, 2015, 6:10:25 AM
yudi v <yudi...@gmail.com> wrote:

> It's not the BIOS settings, I checked. It picks up the other HDD in the
> mirror and goes through the boot code and then it fails at booting into zfs
> root pool.
> The error is:
>
> Trying to mount root from zfs:osysPool/ROOT/default []...
>
> Mounting from zfs:osysPool/ROOT/default failed with error 6.
>
> it is something to do with the guid mismatch for ada2p3 and ada3p3, not
> sure why it's even trying to compare them as they are the two partitions in
> the mirror.
> Please see the below images for the relevant console messages.
> screen1:
> https://drive.google.com/file/d/1Q-F-8kF-Nevn5ijvFXLNuvtJOuRn7ztO2Q/view?usp=sharing
> screen2:
> https://drive.google.com/file/d/1ZGseshS0Uk0cc6Gli_-tywHNXO7sLQ_aVw/view?usp=sharing

Please note that these resources aren't accessible without allowing
presumably non-free JavaScript from untrustworthy (YMMV) sources.

> Is anything amiss in the above label info for these two drives?
> I have used these two drives before for testing and reinstalled the os and
> recreated the pools.
> Any suggestions on how to fix this.

The problem could be the result of a known race condition that prevents
the system from booting if the kernel looks for the root pool before its
vdevs are available. The fewer disks there are, the "better" the chances
that ZFS "wins" the race.

vfs.mountroot.timeout is ignored for ZFS, so the kernel only tries once.
For details and a patch to change this, see:
https://lists.freebsd.org/pipermail/freebsd-fs/2015-March/020997.html

As a workaround you can add a UFS root file system on a disk that
doesn't actually exist to vfs.root.mountfrom. It should be the first
one so you don't hit the spa_namespace_lock deadlock reported in:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=198563
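
If I read the workaround right, it would amount to something like this in
/boot/loader.conf, with the UFS device deliberately nonexistent so that
the first mount attempt fails and the ZFS entry is tried afterwards (the
device name below is made up):

```
# /boot/loader.conf -- sketch of the workaround; ada9p9 intentionally
# does not exist, so the kernel falls through to the ZFS entry.
vfs.root.mountfrom="ufs:/dev/ada9p9 zfs:osysPool/ROOT/default"
```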

Fabian

Yudi V

Apr 15, 2015, 11:33:31 AM
Hi Fabian,

First, what is the recommended image hosting site?

I don't think your conclusion is right. If you look at the console
messages, it appears the vdev is created but then destroyed due to a
GUID mismatch.

I will repost the images once I know what service to use.
Thanks!
Yudi

Fabian Keil

Apr 16, 2015, 7:48:01 AM
Yudi V <yudi...@gmail.com> wrote:

> First, what is the recommended image hosting site.

Any site that allows downloading the image with fetch should be fine.
I use my own website, so unfortunately I can't recommend a specific
image hosting site.

> I don't think your conclusion is right.

You're right, among other things I completely missed that you wrote
that the system tries to attach the whole disk instead of a partition.

Could you additionally post the output of "zdb -l /dev/ada2p3"
and "gpart show ada2"?

Fabian

Yudi V

Apr 16, 2015, 1:32:16 PM
I checked some popular image hosting websites but could not find any that
don't use JavaScript; I can send you the images via email.
This is also reproducible on a clean install of 9.3, so I think it's a bug.

Here's the output of zdb -l /dev/ada2p3
--------------------------------------------
LABEL 0
--------------------------------------------
version: 5000
name: 'osysPool'
state: 0
txg: 35882
pool_guid: 3008044207603099329
hostid: 1990654128
hostname: 'test'
top_guid: 16302517322241353808
guid: 2114803205502328891
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 16302517322241353808
metaslab_array: 33
metaslab_shift: 29
ashift: 9
asize: 70355779584
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 2114803205502328891
path: '/dev/ada2p3'
phys_path: '/dev/ada2p3'
whole_disk: 1
DTL: 99
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 15791103587254396721
path: '/dev/ada3p3'
phys_path: '/dev/ada3p3'
whole_disk: 1
DTL: 98
create_txg: 4
features_for_read:
--------------------------------------------
LABEL 1
--------------------------------------------
version: 5000
name: 'osysPool'
state: 0
txg: 35882
pool_guid: 3008044207603099329
hostid: 1990654128
hostname: 'test'
top_guid: 16302517322241353808
guid: 2114803205502328891
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 16302517322241353808
metaslab_array: 33
metaslab_shift: 29
ashift: 9
asize: 70355779584
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 2114803205502328891
path: '/dev/ada2p3'
phys_path: '/dev/ada2p3'
whole_disk: 1
DTL: 99
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 15791103587254396721
path: '/dev/ada3p3'
phys_path: '/dev/ada3p3'
whole_disk: 1
DTL: 98
create_txg: 4
features_for_read:
--------------------------------------------
LABEL 2
--------------------------------------------
version: 5000
name: 'osysPool'
state: 0
txg: 35882
pool_guid: 3008044207603099329
hostid: 1990654128
hostname: 'test'
top_guid: 16302517322241353808
guid: 2114803205502328891
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 16302517322241353808
metaslab_array: 33
metaslab_shift: 29
ashift: 9
asize: 70355779584
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 2114803205502328891
path: '/dev/ada2p3'
phys_path: '/dev/ada2p3'
whole_disk: 1
DTL: 99
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 15791103587254396721
path: '/dev/ada3p3'
phys_path: '/dev/ada3p3'
whole_disk: 1
DTL: 98
create_txg: 4
features_for_read:
--------------------------------------------
LABEL 3
--------------------------------------------
version: 5000
name: 'osysPool'
state: 0
txg: 35882
pool_guid: 3008044207603099329
hostid: 1990654128
hostname: 'test'
top_guid: 16302517322241353808
guid: 2114803205502328891
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 16302517322241353808
metaslab_array: 33
metaslab_shift: 29
ashift: 9
asize: 70355779584
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 2114803205502328891
path: '/dev/ada2p3'
phys_path: '/dev/ada2p3'
whole_disk: 1
DTL: 99
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 15791103587254396721
path: '/dev/ada3p3'
phys_path: '/dev/ada3p3'
whole_disk: 1
DTL: 98
create_txg: 4
features_for_read:


gpart show ada2 output:

=>         34  156301421  ada2  GPT  (74G)
           34       2014        - free -  (1M)
         2048        512     1  freebsd-boot  (256k)
         2560       1536        - free -  (768k)
         4096    8388608     2  freebsd-swap  (4.0G)
      8392704   10485760        - free -  (5.0G)
     18878464  137422848     3  freebsd-zfs  (65G)
    156301312        143        - free -  (71k)




Fabian Keil

Apr 16, 2015, 2:41:13 PM
Yudi V <yudi...@gmail.com> wrote:

> I checked some popular image hosting websites, could not find any that did
> not use javascript. I can send you the images via email.

Sure.

> This is reproducible on a clean install of 9.3 as well. I think it's a bug.

I think the problem could be that two of the labels from ada2p3 are picked
up when looking at ada2 itself. Quoting the gptzfsboot man page:

| After a disk is probed and gptzfsboot determines that the
| whole disk is not a ZFS pool member, the individual partitions
| are probed in their partition table order.

Putting the pool on p2 and using p3 for swap would probably work around
this, but given that there's free space behind p3 already, it's not
obvious to me why this wasn't already sufficient.
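
A quick diagnostic for this, assuming the device names from earlier in
the thread, is to compare what zdb sees on the raw disk versus the
partition:

```sh
# If the whole disk shows any valid labels, gptzfsboot may treat the
# disk itself as a pool member and skip probing the partitions.
zdb -l /dev/ada2     # ideally "failed to unpack" for all four labels
zdb -l /dev/ada2p3   # should show the pool's four labels
```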

> Here's the output of zdb -l /dev/ada2p3
[...]
>
> gpart show ada2 output:
>
> => 34 156301421 ada2 GPT (74G)
> 34 2014 - free - (1M)
> 2048 512 1 freebsd-boot (256k)
> 2560 1536 - free - (768k)
> 4096 8388608 2 freebsd-swap (4.0G)
> 8392704 10485760 - free - (5.0G)
> 18878464 137422848 3 freebsd-zfs (65G)
> 156301312 143 - free - (71k)

For comparison, no labels are found with this layout
(p3, p4 and p5 are also encrypted, though):

[fk@kendra ~]$ gpart show
=>          40  1250263648  ada0  GPT  (596G)
            40         128     1  freebsd-boot  (64K)
           168        1880        - free -  (940K)
          2048      409600     2  freebsd-zfs  (200M)
        411648     8388608     3  freebsd-zfs  (4.0G)
       8800256     8388608     4  freebsd-swap  (4.0G)
      17188864  1233074816     5  freebsd-zfs  (588G)
    1250263680           8        - free -  (4.0K)

[fk@kendra ~]$ zdb -l /dev/ada0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

Fabian

Ricky .

Apr 16, 2015, 10:51:22 PM
Is your pool using gptids for the devices? I tried to reproduce this with 9.1-RELEASE and 10.1-RELEASE using gptids and never ran into this problem.

> Date: Fri, 17 Apr 2015 03:32:07 +1000
> Subject: Re: mountroot prompt with error2, when trying to boot from a single drive in a 2-way mirror
> From: yudi...@gmail.com
> To: freebsd...@fabiankeil.de
> CC: freebsd-...@freebsd.org
>
> I checked some popular image hosting websites, could not find any that did
> not use javascript. I can send you the images via email.
> This is reproducible on a clean install of 9.3 as well. I think it's a bug.
>
> Here's the output of zdb -l /dev/ada2p3
> gpart show ada2 output:
>
> => 34 156301421 ada2 GPT (74G)
> 34 2014 - free - (1M)
> 2048 512 1 freebsd-boot (256k)
> 2560 1536 - free - (768k)
> 4096 8388608 2 freebsd-swap (4.0G)
> 8392704 10485760 - free - (5.0G)
> 18878464 137422848 3 freebsd-zfs (65G)
> 156301312 143 - free - (71k)
>
>
>

Yudi V

Apr 16, 2015, 11:37:04 PM
I cleared the label info on /dev/ada2, but I still get the same error,
and that disk (ada2) has become unavailable.
gpart show does not even list the disk or its partitions; I am guessing
"zpool labelclear -f /dev/ada2" destroyed the partition table.
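
For anyone hitting this later, a sketch of what might have been safer
here, assuming the same device names: clearing labels on the partition
leaves the GPT alone, and gpart can often restore a damaged primary
table from the backup copy at the end of the disk.

```sh
# Clear ZFS labels on the partition, not the whole disk:
zpool labelclear -f /dev/ada2p3
# If the primary GPT was overwritten, try recovering from the backup GPT:
gpart recover ada2
```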


Answer to Ricky's question:

No, I used GEOM names (ada2p3).
I cannot figure out the reason for the GUID mismatch (see the console
image). Any ideas?

On Fri, Apr 17, 2015 at 4:26 AM, Fabian Keil <freebsd...@fabiankeil.de>
wrote:

> Yudi V <yudi...@gmail.com> wrote:
>
> > I checked some popular image hosting websites, could not find any that
> did
> > not use javascript. I can send you the images via email.
>
> Sure.
>
> > This is reproducible on a clean install of 9.3 as well. I think it's a
> bug.
>
> I think the problem could be that two of the labels from ada2p3 are picked
> up when looking at ada2 itself. Quoting the gptzfsboot man page:
>
> | After a disk is probed and gptzfsboot determines that the
> | whole disk is not a ZFS pool member, the individual partitions
> | are probed in their partition table order.
>
> Putting the pool on p2 and using p3 for swap would probably work around
> this, but given that there's free space behind p3 already, it's not
> obvious to me why this wasn't already sufficient.
>
> > Here's the output of zdb -l /dev/ada2p3
> [...]
> >
> > gpart show ada2 output:
> >
> > => 34 156301421 ada2 GPT (74G)
> > 34 2014 - free - (1M)
> > 2048 512 1 freebsd-boot (256k)
> > 2560 1536 - free - (768k)
> > 4096 8388608 2 freebsd-swap (4.0G)
> > 8392704 10485760 - free - (5.0G)
> > 18878464 137422848 3 freebsd-zfs (65G)
> > 156301312 143 - free - (71k)
>

Ricky .

Apr 17, 2015, 12:55:49 AM
From the console images, it would appear that it's searching for the
device (ada2p3). I'm assuming that is the missing device, and that it is
expected to have the GUID ada2p3 had before it went missing. Because ada3
is now ada2, it shows a mismatch.
Did you try unplugging the other one to see if it will boot?
I suggest switching to GPT labels as a workaround.
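
Switching to GPT labels might look roughly like this; the label names are
made up, and for a root pool the export/import step would normally need
to be done from rescue media rather than the running system:

```sh
# Label the ZFS partitions (index 3) on each disk:
gpart modify -i 3 -l zfs0 ada2
gpart modify -i 3 -l zfs1 ada3
# Re-import the pool using the stable /dev/gpt/* names:
zpool export osysPool
zpool import -d /dev/gpt osysPool
```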


Yudi V

Apr 17, 2015, 1:21:01 AM
On Fri, Apr 17, 2015 at 2:54 PM, Ricky . <rick...@hotmail.com> wrote:

> From the console images, it would appear that its searching for device
> (ada2p3). I'm assuming that is the device missing and it is expecting it to
> have the guid of ada2p3 before it was missing. Because ada3 is now ada2 it
> is showing the mismatch.
>
You might be right; I will change these to GPT labels and test. But
first I will have to recreate the drive I destroyed.

Yudi V

Apr 17, 2015, 11:44:47 PM
I changed the pool to use GPT labels and the issue is sorted.
Thank you.