# lucreate -c zfsBE1 -n zfsBE2 -p newpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <zfsBE1>.
ERROR: ZFS pool <newpool> does not support boot environments
#
Now, zpool newpool is a simple ZFS pool: it is not a stripe or a RAID-Z,
and it is on a disk with an SMI label.
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          c2t3d0    ONLINE       0     0     0

errors: No known data errors
#
And it is the current version:
# zpool upgrade newpool
This system is currently running ZFS pool version 10.
Pool 'newpool' is already formatted using the current version.
#
Your newpool uses the entire disk, which has an EFI label, so you need
to relabel it with an SMI label and a valid slice 0. Currently, Solaris
needs to boot from an SMI-labeled disk.
Check the info here, under "Replacing/Relabeling the Root Pool Disk":
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Thanks,
Cindy
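For reference, a rough sketch of the relabel-and-recreate steps, assuming
newpool holds nothing you need to keep (destroying the pool discards its
contents) and using the c2t3d0 device shown above:

# zpool destroy newpool
# format -e c2t3d0
    (run "label", choose "0. SMI label", then use the partition menu to
     give slice 0 the bulk of the disk and label the disk again)
# zpool create newpool c2t3d0s0
# lucreate -c zfsBE1 -n zfsBE2 -p newpool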
Thanks Cindy, that helped. I am now getting the following errors:
ERROR: unable to mount zones:
/.alt.tmp.b-MFc.mnt/zfszone/zone1 must not be group readable.
/.alt.tmp.b-MFc.mnt/zfszone/zone1 must not be group executable.
/.alt.tmp.b-MFc.mnt/zfszone/zone1 must not be world readable.
/.alt.tmp.b-MFc.mnt/zfszone/zone1 must not be world executable.
could not verify zonepath /.alt.tmp.b-MFc.mnt/zfszone/zone1 because of
the above errors.
zoneadm: zone zone1 failed to verify
ERROR: unable to mount zone <zone1> in </.alt.tmp.b-MFc.mnt>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
ERROR: Unable to remount ABE <be2>: cannot make ABE bootable
Making the ABE <be2> bootable FAILED.
ERROR: Unable to make boot environment <be2> bootable.
ERROR: Unable to populate file systems on boot environment <be2>.
ERROR: Cannot make file systems for boot environment <be2>.
# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
mypool                         6.48G  1.76G  92.5K  /mypool
mypool/ROOT                    4.48G  1.76G    18K  /mypool/ROOT
mypool/ROOT/be2                4.48G  1.76G  4.48G  /
mypool/dump                       1G  2.76G    16K  -
mypool/swap                       1G  2.76G    16K  -
pool3                          5.71G  10.8G    19K  /pool3
pool3/billables                  74K  10.8G    18K  /
pool3/billables/usr1             18K  10.8G    18K  /usr1
pool3/billables/usr2             18K  10.8G    18K  /usr2
pool3/billables/usr3             20K  10.8G    20K  /usr3
pool3/dvd                      1.94G  10.8G  1.94G  /dvd
pool3/zone2                    3.77G  10.8G  3.77G  /pool3/zone2
pool3/zone2@081810             2.98M      -  3.76G  -
pool3/zone2@11102010           1.63M      -  3.76G  -
rpool                          6.53G  1.71G    94K  /rpool
rpool/ROOT                     4.53G  1.71G    18K  legacy
rpool/ROOT/s10s_u7wos_08       4.53G  1.71G  4.49G  /
rpool/ROOT/s10s_u7wos_08@be2   46.2M      -  4.47G  -
rpool/ROOT/s10s_u7wos_08-be2       0  1.71G  4.47G  legacy
rpool/dump                     1.00G  1.71G  1.00G  -
rpool/export                     45K  1.71G    20K  /export
rpool/export/home                25K  1.71G    25K  /export/home
rpool/swap                        1G  2.71G    16K  -
zfszone                        3.80G  12.7G    23K  /zfszone
zfszone/billables              36.5M  12.7G    18K  /
zfszone/billables/usr1         35.1M  12.7G  35.1M  /usr1
zfszone/billables/usr2         1.34M  12.7G  1.34M  /usr2
zfszone/billables/usr3           46K  12.7G    46K  /usr3
zfszone/zone1                  3.77G  12.7G  3.76G  /zfszone/zone1
zfszone/zone1@081810           5.51M      -  3.76G  -
#
# zoneadm list -cv
  ID NAME             STATUS     PATH                        BRAND    IP
   0 global           running    /                           native   shared
   - myzone           configured /zfszone                    native   shared
   - zone1            installed  /zfszone/zone1              native   shared
   - zone2            installed  /pool3/zone2                native   shared
   - testzone         configured /testpool/testzfs           native   shared
> Thanks Cindy, that helped, I am getting the following errors:
>
> ERROR: unable to mount zones:
> /.alt.tmp.b-MFc.mnt/zfszone/zone1 must not be group readable.
> /.alt.tmp.b-MFc.mnt/zfszone/zone1 must not be group executable.
> /.alt.tmp.b-MFc.mnt/zfszone/zone1 must not be world readable.
> /.alt.tmp.b-MFc.mnt/zfszone/zone1 must not be world executable.
What are the permissions on /zfszone/zone1?
--
Ian Collins
# ls -al /zfszone/zone1
total 17
drwx------   5 root     root       5 Nov 28 11:46 .
drwx------   3 root     root       3 Aug 18 08:26 ..
drwxr-xr-x  12 root     root      51 Dec  7 17:23 dev
drwxr-xr-x   2 root     root       2 Nov 28 11:39 lu
drwxr-xr-x  21 root     root      22 Aug 18 09:35 root
Odd, do you have the latest LU patches installed?
As you are upgrading, did you install the LU packages from the update 9
media?
--
Ian Collins
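The usual procedure is to remove the installed LU packages and add the
ones from the target release's media before running lucreate; a sketch,
assuming the media is mounted under /cdrom/cdrom0:

# pkgrm SUNWlucfg SUNWluu SUNWlur
# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu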
Instead of using LU to get to the new disk, could you add the second
disk as a mirror of the existing root pool, then just detach the
smaller disk?
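A minimal sketch of that approach, with placeholder device names
(c1t0d0s0 for the current root disk and c0t1d0s0 for the new one; adjust
to the real devices):

# zpool attach rpool c1t0d0s0 c0t1d0s0
# zpool status rpool
    (repeat until the resilver is reported complete)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
    (x86; on SPARC use installboot with the platform's zfs bootblk instead)
# zpool detach rpool c1t0d0s0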
Thanks Ian, if this works I will owe you. No, I have not installed the
10/09 packages; I'll do that and post back.
Upgraded the LU packages and tried again, but I am still getting errors:
# lucreate -c be1 -n be2 -p mypool
Analyzing system configuration.
Comparing source boot environment <be1> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c0t1d0s0> is not a root device for any boot
environment; cannot get BE ID.
Creating configuration for boot environment <be2>.
Source boot environment is <be1>.
Creating boot environment <be2>.
Creating file systems on boot environment <be2>.
Creating <zfs> file system for </> in zone <global> on <mypool/ROOT/be2>.
mv: cannot access /tmp/.liveupgrade.8830.17170/.icf.newmap
df: (/testpool/testzfs) not a block device, directory or mounted
resource
/usr/lib/lu/lumake: test: argument expected
Populating file systems on boot environment <be2>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
zoneadm: zone 'myzone': illegal UUID value specified
Creating snapshot for <rpool/ROOT/s10s_u7wos_08> on <rpool/ROOT/s10s_u7wos_08@be2>.
Creating clone for <rpool/ROOT/s10s_u7wos_08@be2> on <rpool/ROOT/s10s_u7wos_08-be2>.
Creating snapshot for <zfszone/zone1> on <zfszone/zone1@be2>.
Creating clone for <zfszone/zone1@be2> on <zfszone/zone1-be2>.
Creating snapshot for <pool3/zone2> on <pool3/zone2@be2>.
Creating clone for <pool3/zone2@be2> on <pool3/zone2-be2>.
zoneadm: zone 'testzone': illegal UUID value specified
Creating snapshot for <rpool/ROOT/s10s_u7wos_08> on <rpool/ROOT/s10s_u7wos_08@be2>.
ERROR: cannot create snapshot 'rpool/ROOT/s10s_u7wos_08@be2': dataset already exists
ERROR: Unable to snapshot <rpool/ROOT/s10s_u7wos_08> on <rpool/ROOT/s10s_u7wos_08@be2>.
Creating compare databases for boot environment <be2>.
Creating compare database for file system </>.
ERROR: no boot environment is mounted on root device <mypool/ROOT/be2>
ERROR: Unable to unmount ABE <be2> from ICS file </etc/lu/ICF.2>.
ERROR: Unable to copy file systems from boot environment <be1> to BE <be2>.
Odd, I've never seen that, but then I've never tried using LU across
pools. All I can suggest is to follow the other suggestion and use
mirroring to copy your existing BE to the new drive, then upgrade in place.
--
Ian Collins
Steve,
The ZFS admin guide documents the supported LU/zones configurations.
Do you have NFS mounts in these zones?
Thanks,
Cindy
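One quick way to check, e.g. for zone1 (grep the zone's vfstab from the
global zone, or run df inside the zone if it is booted):

# grep nfs /zfszone/zone1/root/etc/vfstab
# zlogin zone1 df -k -F nfs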
I have successfully used LU to move between pools: to and from a USB
drive on my old Solaris Express laptop, in order to resize the fdisk
partitions. I'll likely need to do the beadm(1M) equivalent soon on my
Solaris 11 Express laptop to reinstall Windows 7. I vaguely recall
similar weirdness with LU.
You've ruled out the OP running the wrong LU bits. Is the OP running LU
in multi-user?
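(The current run level can be confirmed with, e.g.:

# who -r
)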
I just tried again on an old S10 test system:
# zpool create -f foo c2t2d0s0
# cat /etc/release
Oracle Solaris 10 9/10 s10x_u9wos_14a X86
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
Assembled 11 August 2010
# pkginfo -l SUNWlur | egrep 'VERSION|PSTAMP'
VERSION: 11.10,REV=2005.01.09.21.46
PSTAMP: sds-42-patch-x20100518135236
# lucreate -n foo -p foo
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsS10                     yes      yes    yes       no     -
foo                        yes      no     no        yes    -
# zpool status
  pool: foo
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        foo         ONLINE       0     0     0
          c2t2d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
John
groe...@acm.org
Cindy,
Thanks for responding, I've learned a lot from your posts. I do not have
NFS mounts in these zones.
# pkginfo -l SUNWlur | egrep 'VERSION|PSTAMP'
VERSION: 11.10,REV=2005.01.10.00.03
PSTAMP: sds-42-patch20100518144508
#
The system was in multi-user mode while trying lucreate; I tried it with
both zones running and with them halted.
Shut down and detach those zones, snapshot them for safekeeping, and
try lucreate again.
John
groe...@acm.org
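A sketch of those steps for the two installed zones, using hypothetical
snapshot names (the zone datasets are the ones shown in the zfs list
output above):

# zoneadm -z zone1 halt
# zoneadm -z zone1 detach
# zfs snapshot zfszone/zone1@pre-lucreate
# zoneadm -z zone2 halt
# zoneadm -z zone2 detach
# zfs snapshot pool3/zone2@pre-lucreate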
It should be possible to reattach the zones with -u (upgrade) to the
upgraded BE if you are really stuck.
--
Ian Collins
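For example, after booting the upgraded BE (-u updates the detached
zones' packages to match the new global zone):

# zoneadm -z zone1 attach -u
# zoneadm -z zone2 attach -u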
Thanks, gentlemen, I appreciate your advice. I detached the zones; the
latest complaint from lucreate is about space. Let me try to find a
larger drive and start again.
Good luck. LU is great when it works, but opaque when it doesn't!
The new IPS BE management tools from OpenSolaris (designed around ZFS
from the beginning) are a big improvement.
--
Ian Collins
I've had problems with LU when local zones are configured with lofs
(not inherit-pkg-dir) mounts. Detaching the zones was the work-around.
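A zone's configured fs resources (including lofs mounts) can be checked
with zonecfg, e.g.:

# zonecfg -z zone1 info fs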
Still does not work. The good news is that this is a test system which I
can rebuild.
I appreciate the help given by everybody. Thank you.