>
> I am using nested zfs filesystems:
> zfs list -o name,zoned,mountpoint
>
I used the following manifest to create a zone on a nested zfs
filesystem, and it worked:

node default {
  zone { "nested":
    realhostname => "nested",
    autoboot     => "true",
    path         => "/nested/mount/zones/nested",
    ip           => ["e1000g0:10.1.16.240"],
    sysidcfg     => "zones/sysidcfg",
  }
}
puppet ./zone-works.pp
notice: //Node[default]/Zone[nested]/ensure: created

zoneadm -z nested list -v
  ID NAME    STATUS   PATH                        BRAND   IP
  10 nested  running  /nested/mount/zones/nested  native  shared
zfs list | grep nested
rpool/nested                     3.87G  85.1G    23K  /nested
rpool/nested/mount               3.87G  85.1G    23K  /nested/mount
rpool/nested/mount/zones         3.87G  85.1G    23K  /nested/mount/zones
rpool/nested/mount/zones/nested  3.87G  85.1G  3.87G  /nested/mount/zones/nested
The zoned setting appears to only matter for a dataset delegated to a
non-global zone. I would suggest you try to spin up the same zone on
a non-nested zfs filesystem to see if that works. I've used zones on
all versions of Solaris 10 and have not encountered the error you're
hitting, but I've never used nested mounts, and I've only used Puppet to
spin up zones on update 8 nodes. I'm thinking there may be something
related to the nested mounts and your Solaris patch level?
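If you want to check both of those things, something like the following
should do it (a rough sketch -- "rpool/flatzones" and its mountpoint are
made-up names for the test, not anything from your setup):

# Confirm zoned=off for every dataset in the nested tree;
# zoned should only be "on" for datasets delegated into a non-global zone
zfs get -r zoned rpool/nested

# Create a single, non-nested filesystem to host a test zone
zfs create -o mountpoint=/flatzones rpool/flatzones

# zoneadm requires the zonepath's parent to be mode 700
chmod 700 /flatzones

Then point the zone resource's "path" at a directory under /flatzones and
re-run the manifest; if that installs cleanly, the nesting is the suspect.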
HTH. Derek.