can not boot zone


John Lyman

Jul 28, 2010, 2:28:54 PM
to Puppet Users
On Solaris 10 x86 u7 I get the following error when a zone resource is
created:

err: //Node[foo]/Zones::Instance[test]/Zone[test]/ensure: change from
absent to running failed: Could not boot zone: Execution of
'/usr/sbin/zoneadm -z test boot' returned 1:
zoneadm: zone 'test': These file-systems are mounted on subdirectories of /export/zones/test/root:
zoneadm: zone 'test':   /export/zones/test/root/var/sadm/install/.door
zoneadm: zone 'test': call to zoneadmd failed

This seems to be related to http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855364.

If I manually run "pkgadm sync -R /export/zones/test/root -q", and
then do another puppet run, puppet can boot the zone. I was wondering
if there was some way to have puppet run pkgadm sync if the boot
fails. Normally, this would be a simple exec, but I'm not sure how to
go about it. The exec would have to depend on the boot failing, which
in turn would have to trigger puppet to retry booting the zone. I
think the only way I can do this is to modify the zone provider. Does
anyone have any ideas about how to do this in a module?
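
Something along these lines is roughly what I had in mind, though as
written it would only help on a second run, after a first boot attempt
has already failed and left the zone installed (untested sketch; the
onlyif guard is just my guess at detecting that state, and the zone name
and path are the ones from my setup above):

exec { "pkgadm-sync-test":
    # Untested sketch: sync the package database before Puppet retries
    # the boot.  The guard keeps it from firing unless the zone is
    # sitting in the installed state (i.e. a previous boot attempt
    # failed and the zone never reached running).
    command => "/usr/sbin/pkgadm sync -R /export/zones/test/root -q",
    onlyif  => '/bin/sh -c "/usr/sbin/zoneadm -z test list -p | /usr/bin/grep :installed:"',
    before  => Zone["test"],
}

But that still leaves the first run failing, which is why I suspect the
zone provider itself would need to retry.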

deet

Jul 28, 2010, 3:31:10 PM
to Puppet Users

> zoneadm: zone 'test':   /export/zones/test/root/var/sadm/install/.door
> zoneadm: zone 'test': call to zoneadmd failed

John,
I'm trying to understand the meaning of the actual error message.
Can you tell me if there is anything special about your "/export" or
"/export/zones" filesystems, such as them being nested zfs filesystems
or nfs mounts or something?

Also, can you share the resulting zonecfg? The only other references I
see to error messages like this (without looking in the code) are cases
where users delegate a zfs dataset to the non-global zone, in which case
the delegated dataset's zfs filesystem should have the zfs property
"zoned" set to on. Look at some of the hits in sunsolve on "zfs set
zoned".

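For reference, the delegated-dataset pattern I'm thinking of looks
roughly like this (a generic illustration only; "rootpool/delegated" is
a made-up dataset name, not something from your config):

# Generic illustration, not John's config: hand a dataset to the zone
# and mark it zoned, so it is managed from inside the zone rather than
# from the global zone.
zonecfg -z test "add dataset; set name=rootpool/delegated; end"
zfs set zoned=on rootpool/delegated
zfs get zoned rootpool/delegated
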
I use puppet to create all of my zones and have not run into the error
you've described, so I'd like to see if there is something unique to
your zone root fs or config that's causing the issue. I'm running on
Solaris 10 update 8 with patch levels of 200910 and 201003.

Before approaching the solution with the pkgadm invocation, it would
be interesting to see if it's something specific to your zone setup.
I'd be happy to test out your config on Sol 10 update 8 if you'd like.

Thanks. Derek.



John Lyman

Jul 29, 2010, 9:55:12 AM
to Puppet Users
Derek,

I am using nested zfs filesystems:
>zfs list -o name,zoned,mountpoint
NAME                        ZONED  MOUNTPOINT
rootpool                      off  /rootpool
rootpool/ROOT                 off  legacy
rootpool/ROOT/s10x_u7         off  /
rootpool/ROOT/s10x_u7/var     off  /var
rootpool/dump                   -  -
rootpool/export               off  /export
rootpool/export/home          off  /export/home
rootpool/export/zones         off  /export/zones
rootpool/export/zones/test    off  /export/zones/test
rootpool/swap                   -  -

Here is my zonecfg:
>zonecfg -z test info
zonename: test
zonepath: /export/zones/test
brand: native
autoboot: true
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
net:
        address: 192.168.1.100
        physical: aggr10001
        defrouter not specified

I don't get the error when the zonepath is on a ufs filesystem.
Originally I was thinking that the error only occurred on newer releases
of Solaris, but it probably has more to do with ufs vs. zfs.

John

deet

Jul 29, 2010, 12:13:02 PM
to Puppet Users


> I am using nested zfs filesystems:
> >zfs list -o name,zoned,mountpoint

I used the following manifest to create a zone on a nested zfs
filesystem, and it worked:

node default {
    zone { "nested":
        realhostname => "nested",
        autoboot     => "true",
        path         => "/nested/mount/zones/nested",
        ip           => ["e1000g0:10.1.16.240"],
        sysidcfg     => "zones/sysidcfg",
    }
}

puppet ./zone-works.pp
notice: //Node[default]/Zone[nested]/ensure: created

zoneadm -z nested list -v
  ID NAME     STATUS   PATH                         BRAND   IP
  10 nested   running  /nested/mount/zones/nested   native  shared

zfs list | grep nested
rpool/nested                      3.87G  85.1G    23K  /nested
rpool/nested/mount                3.87G  85.1G    23K  /nested/mount
rpool/nested/mount/zones          3.87G  85.1G    23K  /nested/mount/zones
rpool/nested/mount/zones/nested   3.87G  85.1G  3.87G  /nested/mount/zones/nested

The zoned setting appears to only matter for a dataset delegated to a
non-global zone. I would suggest you try to spin up the same zone on a
non-nested zfs filesystem to see if that works; a rough test manifest is
sketched below. I've used zones on all versions of Solaris 10 and have
not encountered the error you're hitting, but I've never used nested
mounts and I've only used puppet to spin up zones on update 8 nodes. I'm
thinking there may be something related to the nested mounts and your
Solaris patch level.
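
Something like this, modeled on my working example above, is the kind of
test I mean (hypothetical, not your real config; I'm assuming a single,
non-nested dataset mounted at /zonetest and reusing your aggr10001
interface, so adjust the address if the original test zone is still up):

node default {
    # Hypothetical test manifest: the zonepath lives on one flat zfs
    # dataset (e.g. rootpool/zonetest mounted at /zonetest) instead of
    # the nested rootpool/export/zones/test chain.
    zone { "testflat":
        autoboot => "true",
        path     => "/zonetest/testflat",
        ip       => ["aggr10001:192.168.1.100"],
    }
}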

HTH. Derek.