
11.3 SPARC Using Recovery Archive to Migrate to new H/W


Unknown

Apr 15, 2017, 2:44:20 PM
Hi, the more I read the more I'm getting confused so help welcome.

Recovery archive taken, I created an AI service on the old H/W for
installing the new H/W, having downloaded the latest ISO:


# installadm create-service -n T4-1 -s /filer/sol-11_3-ai-sparc.iso

# installadm list
Service Name   State  Arch   Type  Alias  No.  Clients  Profiles  Manifests
-------------  -----  -----  ----  -----  ---  -------  --------  ---------
T4-1           on     sparc  iso   no     1    0        0         1
default-sparc  on     sparc  iso   yes    0    0        0         1

# installadm list -m
Service Name   Manifest Name  Status   Criteria
-------------  -------------  -------  --------
T4-1           orig_default   default  none
default-sparc  orig_default   default  none

Brought-up the orig_default manifest in the wizard for editing but...

1. I don't see how to set software type to ARCHIVE rather than IPS
2. I don't see as many disks as on new H/W

Learning manifest xml *now* is going to be a steep learning curve
if ai is the best way to migrate.

Plan-B: downloaded the text installer ISO and, after faffing with RKVM,
did a bare-metal install and set up the network so it can see the filer.

I have two of these to setup, not hundreds. What's the best approach?

a) Get to grips with AI: edit the manifest for the right rpool disk
layout; edit the XML to source s/w from the recovery archive taken;

b) On the new H/W, add more disk, create a pool dataset across the
remaining disks (to become the new root pool); create a new zone
from the archive (with its child zones). Somehow locate the new
zone in the to-be-new-rpool. Somehow activate the zone in the new
bigger pool as new global zone.

Thanks

YTC#1

Apr 15, 2017, 4:30:13 PM
On 15/04/2017 19:41, Unknown wrote:
> Hi, the more I read the more I'm getting confused so help welcome.
>

Can we start with what you are trying to do ?
It is not clear to me from the below.

You have a system, you have taken a UAR (unified archive) and are using
that to install to a new system ?

You are now trying to install the UAR ?

The "wizard" bit has lost me, I don't use it.

Have you got a working AI config ?

When I was using UARs to P2V stuff last year, I found it easier to write
a JetUAR module (JumpStart Enterprise Toolkit) for the JetAI extension.

If you are familiar with JumpStart and JET you are welcome to the module
--
Bruce Porter
"The internet is a huge and diverse community but mainly friendly"
http://ytc1.blogspot.co.uk/
There *is* an alternative! http://www.openoffice.org/

Unknown

Apr 15, 2017, 5:29:16 PM
On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:

Thx YTC, (I'll top-post as you did). It's in the subject. Migrating
to new H/W. Yes I've taken a recovery UAR and am installing it on new H/W.

AI is plan-a. The AIM Wizard supposedly simplifies (well it does say
"creating" rather than my usage, "customising") the default manifest.
OK I won't use it and try and manually edit the XML. I wasn't any
more familiar with the previous JumpStart. Back to the obj: Once the
archive of old global with its one S10 zone is running on the new H/W
(in the desired new mirrored disk configuration), the goals are:

1) to move /u01 out of the rpool and into a new data pool on a set of
additional disks

2) convert the branded S10 zone into a S10 kernel zone

So the OP was asking whether the approach should be (plan-a) use AI
to install the archive on the new h/w and, in the process of
installation, perform the necessary configuration on-the-fly; or (plan-b)
with the new H/W bare metal install booted (using a single disk),
create a new pool across the other disks in the desired mirrored
configuration, create a new zone from the archive as a detached zone
outside of the current basic rpool on the single disk, get it right
then eventually set it as the replacement rpool, boot from it.
Remove the bare metal install disk and its basic rpool and keep safe
somewhere.

Plan-b (if it could work) has the advantage that you can see what
you're doing and get the new rpool and its zones right before
promoting it as the new rpool. Plan-a seems to involve more working
in the dark; constantly doing a boot - net, waiting for it to install
and boot; logging-in to see what's been created, and if isn't right,
having to go back to editing the manifest on the old H/W acting as
AI server, and going through the boot - net cycle over again.

YTC#1

Apr 16, 2017, 4:34:56 AM
On 15/04/2017 22:26, Unknown wrote:
> On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:
>
> Thx YTC, (I'll top-post as you did). It's in the subject. Migrating

I didn't, really; I started after line one. But comments/queries didn't
feel like they fitted in line.

I should have snipped :-(

> to new H/W. Yes I've taken a recovery UAR and am installing it on new H/W.
>
> AI is plan-a. The AIM Wizard supposedly simplifies (well it does say
> "creating" rather than my usage, "customising") the default manifest.

So, you don't have a working AI installation ?
I'd suggest this as a stage 1

> OK I won't use it and try and manually edit the XML. I wasn't any
Sometimes I do, as it is easier (IMO) than using the aimanifest command

> more familiar with the previous JumpStart. Back to the obj: Once the
> archive of old global with its one S10 zone is running on the new H/W

A word of warning, not sure if it has been fixed, but last time I did
this (11.2) archiveadm was including the dumpvol (the docs say it
doesn't), so I shrunk it 1st and grew it at target.

> (in the desired new mirrored disk configuration), the goals are:
>
> 1) to move /u01 out of the rpool and into a new data pool on a set of
> additional disks

I was going to suggest just building a new server and moving the zone,
but as you have broken rule 1 of zones (run nothing in the GZ ), I'll
not bother :-) .

>
> 2) convert the branded S10 zone into a S10 kernel zone

You do mean a S11 kernel zone here ?

>
> So the OP was asking whether the approach should be (plan-a) use AI
> to install the archive on the new h/w and, in the process of
> installation, perform the necessary configuration on-the-fly; or (plan-b)

If it is a complex system, with lots of configuration that is historic
and not easily reproduced, then yes. This is the correct option.

1) Configure a working AI

I think this guide covers it.
https://blogs.oracle.com/cmt/entry/setting_up_a_local_ai

Also check out Steffen's blog
https://blogs.oracle.com/stw/entry/using_solaris_11_repository_and

2) Take your UAR and tweak the AI XML to use it.
boot net:dhcp - install
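
From memory, the software section of the manifest ends up something like
this (hostname/path made up, and <name> is whatever name archiveadm
recorded for your system; check the UAR docs before trusting me):

  <software type="ARCHIVE">
    <source>
      <file uri="http://aiserver/archives/system.uar"/>
    </source>
    <software_data action="install">
      <name>global</name>
    </software_data>
  </software>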

> with the new H/W bare metal install booted (using a single disk),
I'd always mirror root if it is a local disk.

> create a new pool across the other disks in the desired mirrored
> configuration, create a new zone from the archive as a detached zone

If I was doing this, I would not use archiveadm
I would detach the zone, zfs send/receive the dataset and just attach it
at the target server.
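
Roughly this, from memory (zone name and dataset layout made up, check
zoneadm(1M) and zfs(1M)):

# zoneadm -z myzone halt
# zoneadm -z myzone detach
# zfs snapshot -r rpool/zones/myzone@migrate
# zfs send -R rpool/zones/myzone@migrate | ssh newhost zfs recv -u zones/myzone

then on the target:

# zonecfg -z myzone create -a /zones/myzone
# zoneadm -z myzone attach -u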

> outside of the current basic rpool on the single disk, get it right
> then eventually set it as the replacement rpool, boot from it.

Lost me here, what was wrong with the original rpool ?

> Remove the bare metal install disk and its basic rpool and keep safe
> somewhere.
>
> Plan-b (if it could work) has the advantage that you can see what
> you're doing and get the new rpool and its zones right before

One of us is confused about something.

If I had new hardware, I would install the OS (using an automated tool).
This would create a mirrored rpool
I would create the additional pools (data, zones, whatever) and build
the zone(s) on that.

I would do this all in one step, hands free.

boot net:dhcp - install
(assuming sparc)


> promoting it as the new rpool. Plan-a seems to involve more working
> in the dark; constantly doing a boot - net, waiting for it to install
> and boot; logging-in to see what's been created, and if isn't right,
> having to go back to editing the manifest on the old H/W acting as
> AI server, and going through the boot - net cycle over again.

This sounds like you have the "wrong end of the stick" as we say in the UK.

Unknown

Apr 16, 2017, 6:49:55 AM
On Sun, 16 Apr 2017 09:34:53 +0100, YTC#1 wrote:

> On 15/04/2017 22:26, Unknown wrote:
>> On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:
>>
> So, you don't have a working AI installation ? I'd suggest this as
> a stage 1

Thx YTC, ai is up afaik. (Lots of errors setting it up but now resolved).
It's just that I haven't bothered trying to boot from it since the
default manifest is pants.

>> OK I won't use it and try and manually edit the XML. I wasn't any
> Sometimes I do, as it is easier (IMO) than using the aimanifest command

This is where I'm struggling as its all new and the docs make it worse:
https://docs.oracle.com/cd/E53394_01/html/E54756/gpugb.html
says: "You can get into the interactive editor mode either by using
the –e option on the command line." There's no "or" in that sentence.

> A word of warning, not sure if it has been fixed, but last time I did
> this (11.2) archiveadm was including the dumpvol (the docs say it
> doesn't), so I shrunk it 1st and grew it at target.

I remember having to mess with swap. Sorting all that out post-
install is no big deal though.

>> 1) to move /u01 out of the rpool and into a new data pool on a set of
>> additional disks
>
> I was going to suggest just building a new server and moving the zone,
> but as you have broken rule 1 of zones (run nothing in the GZ ), I'll
> not bother :-) .

No, do! I wish I'd caught wind of that golden rule before now. More
than happy to adopt best practice now. Every previous H/W migration had
needed a time-consuming re-build (many bad practices like /usr/local/..).

Obeying rule 1 will help, won't it? AI a minimal global with the disk
config and separate rpool and data pool as needed, then attach what was
formerly the GZ from the archive into a new zone but pared-down somewhat.

>> 2) convert the branded S10 zone into a S10 kernel zone
>
> You do mean a S11 kernel zone here ?

I mis-spoke. I meant a S10 zone nested within a S11 KZ.
>
>
> If it is a complex system, with lots of configuration that is historic
> and not easily reproduced, then yes. This [AI] is the correct option.
>
> 1) Configure a working AI
>
> I think guide covers it.
> https://blogs.oracle.com/cmt/entry/setting_up_a_local_ai

Hadn't come-across Stefan's blog. Many thanks; will read.
I *had* come-across Steffen's blog but he'd lost me in the steps he'd
taken to modify the AIM. The old H/W now hosting AI, is tight on disk,
so when setting-up AI I'd issued:

# installadm set-server -d /tmp

and confused myself over which of the directories I should work in
40% : Cleaning up netboot directory: /etc/netboot/T4-1
40% : Cleaning up service directory: /var/ai/service/T4-1
40% : Cleaning up target directory: /tmp/T4-1

> 2) Take your UAR, tweak AI XML to use it. boot net:dhcp - install

You make it, plan-a, sound sooo simple ;-)

[Plan-B]
>> with the new H/W bare metal install booted (using a single disk),
> I'd always mirror root if it is a local disk.

On the old H/W, rpool is mirrored. The text installer still gives
no option so BMI gives you a single disk.

> If doing this, I would not use archiveadm I would detach the zone,
> zfs send/receive the dataset and attach it at the target server.

But doesn't detaching require the zone to be halted? And, as I
didn't know of, and didn't observe, rule 1, it's the old GZ. Tricky?!

>> outside of the current basic rpool on the single disk, get it right
>> then eventually set it as the replacement rpool, boot from it.
>
> Lost me here, what was wrong with the original rpool ?
What I was saying was that since the text installer gave me a BMI
on a single local disk, just use it, and the [unmirrored] rpool on it,
to work from while building the desired new to-be rpool on four other
disks and restore the recovery archive into that.

> One of us is confused about something.
Mea culpa.

> If I had new hardware, I would install the OS (using an automated tool).
> This would create a mirrored rpool
> I would create the additional pools (data, zones, whatever) and build
> the zone(s) on that.
>
> I would do this all in one step, hands free.
>
> boot net:dhcp - install
> (assuming sparc)

And no-doubt re-zone, leaving a minimal GZ, the old GZ in a new KZ
and the old S10 BZ as a nested S10 BZ within a new KZ, again all in
one fell swoop. I've never used Jumpstart or AI. Being too ambitious
on my first Hello World AI would be setting myself up for failure.

[Plan-A seemingly more a "big bang" than incremental approach]
> This sounds like you have the "wrong end of the stick" as we say in
> the UK.

As Ruth said to Brian about Lillian.

YTC#1

Apr 16, 2017, 7:57:10 AM
On 16/04/2017 11:46, Unknown wrote:
> On Sun, 16 Apr 2017 09:34:53 +0100, YTC#1 wrote:
>
>> On 15/04/2017 22:26, Unknown wrote:
>>> On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:
>>>
>> So, you don't have a working AI installation ? I'd suggest this as
>> a stage 1
>
> Thx YTC, ai is up afaik. (Lots of errors setting it up but now resolved).
> It's just that I haven't bothered trying to boot from it since the
> default manifest is pants.

Please bother, there is no point chasing issues down the line if the
failure is at the start.

TBH, I would not have used the current server, or if I had I would have
used a zone on it.

I use VirtualBox with an AI VM setup, so a bit more portable.

>
>>> OK I won't use it and try and manually edit the XML. I wasn't any
>> Sometimes I do, as it is easier (IMO) than using the aimanifest command
>
> This is where I'm struggling as its all new and the docs make it worse:
> https://docs.oracle.com/cd/E53394_01/html/E54756/gpugb.html
> says: "You can get into the interactive editor mode either by using
> the –e option on the command line." There's no "or" in that sentence.

Don't get me started on the usability of AI .... It works well with
OpsCenter, but that is another story.

>
>> A word of warning, not sure if it has been fixed, but last time I did
>> this (11.2) archiveadm was including the dumpvol (the docs say it
>> doesn't), so I shrunk it 1st and grew it at target.
>
> I remember having to mess with swap. Sorting all that out post-
> install is no big deal though.

Agreed

>
>>> 1) to move /u01 out of the rpool and into a new data pool on a set of
>>> additional disks
>>
>> I was going to suggest just building a new server and moving the zone,
>> but as you have broken rule 1 of zones (run nothing in the GZ ), I'll
>> not bother :-) .
>
> No, do! I wish I'd caught wind of that golden rule before now. More
> than happy to adopt best practice now. Every previous H/W migration had
> needed a time-consuming re-build (many bad practices like /usr/local/..).
>
> Obeying rule 1 will help, won't it? AI a minimal global with the disk

Yes, we tried to get people to use this approach with S10, and since S11
it has been an even better (and easier) idea.

> config and separate rpool and data pool as needed, then attach what was
> formerly the GZ from the archive into a new zone but pared-down somewhat.
>
>>> 2) convert the branded S10 zone into a S10 kernel zone
>>
>> You do mean a S11 kernel zone here ?
>
> I mis-spoke. I meant a S10 zone nested within a S11 KZ.

Why do I feel frightened by this ? :-)

TBH, I've never tried that, I think I have seen it suggested, but never
had a reason to do it.

Why would you want to do it, as opposed to have an S10 brand running in
zone ?

>>
>>
>> If it is a complex system, with lots of configuration that is historic
>> and not easily reproduced, then yes. This [AI] is the correct option.
>>
>> 1) Configure a working AI
>>
>> I think guide covers it.
>> https://blogs.oracle.com/cmt/entry/setting_up_a_local_ai
>
> Hadn't come-across Stefan's blog. Many thanks; will read.
>>
>> Also check out Steffen's blog
>> https://blogs.oracle.com/stw/entry/using_solaris_11_repository_and
>
> I *had* come-across Steffen's blog but he'd lost me in the steps he'd
> taken to modify the AIM. The old H/W now hosting AI, is tight on disk,
> so when setting-up AI I'd issued:
>
> # installadm set-server -d /tmp
>
> and confused myself over which of the directories I should work in
> 40% : Cleaning up netboot directory: /etc/netboot/T4-1
> 40% : Cleaning up service directory: /var/ai/service/T4-1
> 40% : Cleaning up target directory: /tmp/T4-1
>
>> 2) Take your UAR, tweak AI XML to use it. boot net:dhcp - install
>
> You make it, plan-a, sound sooo simple ;-)

It is, as far as I am concerned.

But then I would use my JET setup with AI extensions to do it all.

>
> [Plan-B]
>>> with the new H/W bare metal install booted (using a single disk),
>> I'd always mirror root if it is a local disk.
>
> On the old H/W, rpool is mirrored. The text installer still gives
> no option so BMI gives you a single disk.

Oh yer, PITA that.
If I do a non-JET build I always have to remember to mirror.
(I'm sure someone will come along and prod me as to what I/we have missed)

>
>> If doing this, I would not use archiveadm I would detach the zone,
>> zfs send/receive the dataset and attach it at the target server.
>
> But doesn't detaching require the zone to be halted? And, as I
> didn't know of, and didn't observe, rule 1, it's the old GZ. Tricky?!

Ah, I misunderstood your config.

You are converting a GZ to an NGZ ? I misread the post as you had a GZ
with an NGZ.

Fine, create the UAR

>
>>> outside of the current basic rpool on the single disk, get it right
>>> then eventually set it as the replacement rpool, boot from it.
>>
>> Lost me here, what was wrong with the original rpool ?
> What I was saying was that since the text installer gave me a BMI
> on a single local disk, just use it, and the [unmirrored] rpool on it,
> to work from while building the desired new to-be rpool on four other
> disks and restore the recovery archive into that.

Ah, hang on. Think I am getting it here.
You are able to install (using AI) the UAR *straight* onto the HW, no
fannying around with multiple rpools.

The UAR becomes the source software.

Process is this.
AI boots the OS into memory (no disks)
AI looks at the XML
AI creates your rpool on the disks indicated
AI unwraps your UAR onto the hardware
AI reboots and you have a server.

OK, that is a global to global migration

So, you want to P2V the GZ into another server as a NGZ

Fine, config your new server GZ, either by a full AI hands free or a
hands on install.

This gives you a GZ on rpool
Add some disks and create a zones pool
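
(For the pool itself it is just something like this, disk names made up:

# zpool create -m /zones zones mirror c0tXXXXd0 c0tYYYYd0

then point the zonepath at /zones/<zonename> when you configure the zone.)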

Follow this guide
https://docs.oracle.com/cd/E53394_01/html/E54752/gpohk.html


>
>> One of us is confused about something.
> Mea culpa.
>
>> If I had new hardware, I would install the OS (using an automated tool).
>> This would create a mirrored rpool
>> I would create the additional pools (data, zones, whatever) and build
>> the zone(s) on that.
>>
>> I would do this all in one step, hands free.
>>
>> boot net:dhcp - install
>> (assuming sparc)
>
> And no-doubt re-zone, leaving a minimal GZ, the old GZ in a new KZ
> and the old S10 BZ as a nested S10 BZ within a new KZ, again all in
> one fell swoop. I've never used Jumpstart or AI. Being too ambitious
> on my first Hello World AI would be setting myself up for failure.

You also have a branded S10 NGZ on the server ?

I would defo do it this way

https://docs.oracle.com/cd/E53394_01/html/E54752/gpolc.html#scrolltoc

Yes, you have an outage, but not a long one.

Think of it as a snap shot, declare a change freeze and when you are
ready switch the app over.

I still don't see why you want to use a KZ ? The point of a KZ is to
separate and allow multiple S11 kernels on the same platform.

You have a S10 branded zone, leave it like that.

>
> [Plan-A seemingly more a "big bang" than incremental approach]
>> This sounds like you have the "wrong end of the stick" as we say in
>> the UK.
>
> As Ruth said to Brian about Lillian.
>



Unknown

Apr 16, 2017, 5:41:41 PM
On Sun, 16 Apr 2017 12:57:07 +0100, YTC#1 wrote:

> On 16/04/2017 11:46, Unknown wrote:
>> On Sun, 16 Apr 2017 09:34:53 +0100, YTC#1 wrote:
>>
>>> On 15/04/2017 22:26, Unknown wrote:
>>>> On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:
>>>>
>>> So, you don't have a working AI installation ? I'd suggest this as a
>>> stage 1
>>
>> Thx YTC, ai is up afaik. (Lots of errors setting it up but now
>> resolved). It's just that I haven't bothered trying to boot from it
>> since the default manifest is pants.
>
> Please bother, there is no point chasing issues down the line if the
> failure is at the start.

First of all, many thanks YTC for all the help and encouragement you've
given me today. I knuckled-down, took-in those blogs and the docs and
am almost there. The archive is v.big so takes hours to get to the
point where it bailed (see below).

[Editing AIMs with AIM CLI]
> Don't get me started on the usability of AI .... It works well with
> OpsCenter, but that is another story.

I don't particularly want to opine on this one as it will only revive
an argument/debate that's already been had (and lost), viz: Desktop.
FWIW, I edited the manifests and profiles in NEdit 5.5 (2004).
Something *that* old, like 13 years old, couldn't possibly be still
good to eat; couldn't possibly be a productivity improvement over CLI..

[Remediating past bad-practice "use" of a monolithic GZ by AI'ing
a minimal GZ and unpacking out into separate NGZs]
> Yes, we tried to get people to use this approach with S10, and since
> S11 it has been an even better (and easier) idea.
>
>>>> 2) convert the branded S10 zone into a S10 kernel zone
>> I mis-spoke. I meant a S10 zone nested within a S11 KZ.
>
> Why do I feel frightened by this ? :-)

The docs actually say it can't be done if I read them correctly.

> TBH, I've never tried that, I think I have seen it suggested,
> but never had a reason to do it.
>
> Why would you want to do it, as opposed to have an S10 brand
> running in zone ?

I could bore you with my reasons, but really I can take a hint ;)
"Not recommended". Message received. I'll just use a S10 BZ.


[Plan-B]
> Ah, I misunderstood your config.
> You are converting a GZ to an NGZ ? I misread the post as you
> had a GZ with an NGZ. Fine, create the UAR

TBH I changed my mind along the way based on what you had said.
Originally Plan-B would have been GZ -> GZ. However, given Rule #1
that morphed to GZ -> NGZ.

So, back to Plan-A (AI). Chuffed I got the disks auto-configged:

root@solaris:~# zpool status
...
        NAME                       STATE     READ WRITE CKSUM
        rpool                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000CCA03C7197C8d0  ONLINE       0     0     0
            c0t5000CCA03C70904Cd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000CCA03C716BA4d0  ONLINE       0     0     0
            c0t5000CCA03C70AF54d0  ONLINE       0     0     0

However, after reading the entire UAR for two hours it bailed:

19:49:56 Completed transfer of stream: 'c2da57d7-0d41-48a6-a052-fdefa6795101-0.zfs' from http://hostname/2017-04-15.uar
...
19:49:58 Archive transfer completed
...
19:50:13 96% boot-archive completed.
19:50:14 Setting boot title prefix from manifest value:
'desktop-be-43-recovery'
19:50:14 Error occurred during execution of 'boot-configuration'
checkpoint.
19:50:15 Failed Checkpoints:
19:50:15 boot-configuration
19:50:15 operation not supported on this type of pool
19:50:15 Automated Installation Failed. See install log at
/system/volatile/install_log
Automated Installation failed

> Ah, hang on. Think I am getting it here. You are able to install
> (using AI) the UAR *straight* onto the HW, no fannying around
> with multiple rpools. The UAR becomes the source software.

Actually, P2P is what failed above; GZ+NGZ -> GZ+NGZ

> Process is this.
> AI boots the OS into memory (no disks) AI looks at the XML
> AI creates your rpool on the disks indicated AI unwraps your UAR
> onto the hardware AI reboots and you have a server.
>
> OK, that is a global to global migration. So, you want to P2V
> the GZ into another server as a NGZ. Fine, config your new
> server GZ, either by a full AI hands free or hands on install.
>
> This gives you a GZ on rpool
> Add some disks and create a zones pool
>
> Follow this guide
> https://docs.oracle.com/cd/E53394_01/html/E54752/gpohk.html

Right. So as not to break Rule #1, I should P2V (like I had to, to
create the S10 BZ). Thanks. Looks like I must take separate
GZ and S10 BNGZ archives for two P2Vs and that I can't use the
single entire UAR (GZ + S10 BNGZ)?

> You also have a branded S10 NGZ on the server ?

Indeedy.
Thanks, will knuckle-down.

> Yes, you have an outage, but not a long one.
>
> Think of it as a snapshot, declare a change freeze and when you
> are ready switch the app over.

No, I'm happy for the advice and to turn what was a pretty amateurish
setup into something half decent, including pretty basic
things like separate dev/test/live environs/zones that I never
had before. This NG is the closest I get to looking over the
shoulder of a pro.

> I still don't see why you want to use a KZ? The point of a KZ
> is to separate and allow multiple S11 kernels on the same platform.

(I'm probably wrong on this too). KZ gets more virtualisation with
little performance hit and without the setup complexity of LDoms.

> You have a S10 branded zone, leave it like that.

Good advice which I'm going to take. :)

>> [Plan-A seemingly more a "big bang" than incremental approach]
>>> This sounds like you have the "wrong end of the stick" as we say in
>>> the UK.
>>
>> As Ruth said to Brian about Lillian.

Shame. Was hoping you were an Archers fan ;).

Unknown

Apr 17, 2017, 4:58:33 AM
Good morning all,

On Sun, 16 Apr 2017 12:57:07 +0100, YTC#1 wrote:

> On 16/04/2017 11:46, Unknown wrote:
>> On Sun, 16 Apr 2017 09:34:53 +0100, YTC#1 wrote:
>>> On 15/04/2017 22:26, Unknown wrote:
>>>> On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:

> Follow this guide
> https://docs.oracle.com/cd/E53394_01/html/E54752/gpohk.html

Will some kind soul help me understand the 2017 docs?

Ch. 7. "Scanning the Source System With zonep2vchk" shows, at
Step 5, a zonecfg script file generated on the SOURCE system and
saved to "s11-zone.config"

Ch. 7. "How to Configure the Zone on the Target System" describes
in the introductory paragraph the config file being named
"s11-zone-config.uar" and goes-on to illustrate use of the command
"# zonecfg create -a s11-zone-config.uar" on the TARGET that
derives a config from the stored UAR archive.

I'm confused. I'd expected on the TARGET system to read-in the
zonecfg generated on the SOURCE system as well as the UAR saved
from the SOURCE system with explanation as to resolving clashes.

zonecfg script generated by zonep2vchk at Step 5 "s11-zone.config"
isn't subsequently used/referred-to! What am I missing?

TIA

YTC#1

Apr 17, 2017, 5:26:53 AM
On 16/04/2017 22:38, Unknown wrote:
> On Sun, 16 Apr 2017 12:57:07 +0100, YTC#1 wrote:
>
>> On 16/04/2017 11:46, Unknown wrote:
>>> On Sun, 16 Apr 2017 09:34:53 +0100, YTC#1 wrote:
>>>
>>>> On 15/04/2017 22:26, Unknown wrote:
>>>>> On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:
>>>>>
>>>> So, you don't have a working AI installation ? I'd suggest this as a
>>>> stage 1
>>>
>>> Thx YTC, ai is up afaik. (Lots of errors setting it up but now
>>> resolved). It's just that I haven't bothered trying to boot from it
>>> since the default manifest is pants.
>>
>> Please bother, there is no point chasing issues down the line if the
>> failure is at the start.
>
> First of all, many thanks YTC for all the help and encouragement you've
> given me today. I knuckled-down, took-in those blogs and the docs and

It was raining, and too chilly to be working on the motorbike in the
garage :-)

> am almost there. The archive is v.big so takes hours to get to the
> point where it bailed (see below).

Yep, UAR being bare metal usually gets a bit large, especially if it
tries to add swap and dump vols :-)

>
> [Editing AIMs with AIM CLI]
>> Don't get me started on the usability of AI .... It works well with
>> OpsCenter, but that is another story.
>
> I don't particularly want to opine on this one as it will only revive
> an argument/debate that's already been had (and lost), viz: Desktop.
> FWIW, I edited the manifests and profiles in NEdit 5.5 (2004).
> Something *that* old, like 13 years old, couldn't possibly be still
> good to eat; couldn't possibly be a productivity improvement over CLI..

I like CLI, to the point where I can wrap scripts around it :-). I just
find the AI CLI stuff a bit unwieldy and have to get my head back around
it each time I use it (when there has been a few months between use)

>
> [Remediating past bad-practice "use" of a monolithic GZ by AI'ing
> a minimal GZ and unpacking out into separate NGZs]
>> Yes, we tried to get people to use this approach with S10, and since
>> S11 it has been the an even better (and easier) idea.
>>
>>>>> 2) convert the branded S10 zone into a S10 kernel zone
>>> I mis-spoke. I meant a S10 zone nested within a S11 KZ.
>>
>> Why do I feel frightened by this ? :-)
>
> The docs actually say it can't be done if I read them correctly.

A quick glance, and I failed to find any reference to it. And I still
can't see why anyone would want to, as the S10 brand is already separated
from the S11 OS.

I'm sure Casper will be along to explain how, and maybe why it would be
a good idea (or not).

>
>> TBH, I've never tried that, I think I have seen it suggested,
>> but never had a reason to do it.
>>
>> Why would you want to do it, as opposed to have an S10 brand
>> running in zone ?
>
> I could bore you with my reasons, but really I can take a hint ;)
> "Not recommended". Message received. I'll just use a S10 BZ.
>
>
> [Plan-B]
>> Ah, I misunderstood your config.
>> You are converting a GZ to an NGZ ? I misread the post as you
>> had a GZ with an NGZ. Fine, create the UAR
>
> TBH I changed my mind along the way based on what you had said.
> Originally Plan-B would have been GZ -> GZ. However, given Rule #1
> that morphed to GZ -> NGZ.
>

As the GZ appears to contain apps, then yes, a good idea IMO

> So, back to Plan-A (AI). Chuffed I got the disks auto-configged:
>
> root@solaris:~# zpool status
> ...
>         NAME                       STATE     READ WRITE CKSUM
>         rpool                      ONLINE       0     0     0
>           mirror-0                 ONLINE       0     0     0
>             c0t5000CCA03C7197C8d0  ONLINE       0     0     0
>             c0t5000CCA03C70904Cd0  ONLINE       0     0     0
>           mirror-1                 ONLINE       0     0     0
>             c0t5000CCA03C716BA4d0  ONLINE       0     0     0
>             c0t5000CCA03C70AF54d0  ONLINE       0     0     0
>
> However, after reading the entire UAR for two hours it bailed:
>

What hardware ?

4 way mirror ?
A normal two-way mirror looks like this:

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 2h59m with 0 errors on Sat Dec 5 14:59:28 2015

config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c5t0d0s0  ONLINE       0     0     0
            c1t2d0s0  ONLINE       0     0     0

errors: No known data errors

> 19:49:56 Completed transfer of stream: 'c2da57d7-0d41-48a6-a052-fdefa6795101-0.zfs' from http://hostname/2017-04-15.uar
> ...
> 19:49:58 Archive transfer completed
> ...
> 19:50:13 96% boot-archive completed.
> 19:50:14 Setting boot title prefix from manifest value:
> 'desktop-be-43-recovery'
> 19:50:14 Error occurred during execution of 'boot-configuration'
> checkpoint.
> 19:50:15 Failed Checkpoints:
> 19:50:15 boot-configuration
> 19:50:15 operation not supported on this type of pool

I reckon that is down to the rpool config above; let it install on a
single disk and add the mirror later
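
i.e. once it is up, something like:

# zpool attach rpool c0t5000CCA03C7197C8d0 c0t5000CCA03C70904Cd0

(first disk being the one it installed to, second the new mirror half)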


> 19:50:15 Automated Installation Failed. See install log at
> /system/volatile/install_log
> Automated Installation failed
>
>> Ah, hang on. Think I am getting it here. You are able to install
>> (using AI) the UAR *straight* onto the HW, no fannying around
>> with multiple rpools. The UAR becomes the source software.
>
> Actually, P2P is what failed above; GZ+NGZ -> GZ+NGZ
>
>> Process is this.
>> AI boots the OS into memory (no disks) AI looks at the XML
>> AI creates your rpool on the disks indicated AI unwraps your UAR
>> onto the hardware AI reboots and you have a server.
>>
>> OK, that is a global to global migration. So, you want to P2V
>> the GZ into another server as a NGZ. Fine, config your new
>> server GZ, either by a full AI hands free or hands on install.
>>
>> This gives you a GZ on rpool
>> Add some disks and create a zones pool
>>
>> Follow this guide
>> https://docs.oracle.com/cd/E53394_01/html/E54752/gpohk.html
>
> Right. So as not to break Rule #1, I should P2V (like I had to, to
> create the S10 BZ). Thanks. Looks like I must take separate

To P2V the GZ -> NGZ, and then the S10NGZ to the new server, yes.

> GZ and S10 BNGZ archives for two P2Vs and that I can't use the
> single entire UAR (GZ + S10 BNGZ)?

There is an option to exclude zones, and another to do zones only.
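
From memory it is something like this (check archiveadm(1M), I may have
the letters backwards):

# archiveadm create -z s10zone /filer/s10zone.uar
# archiveadm create -Z s10zone /filer/global-only.uar

-z archives only the named zone(s), -Z archives the system excluding them.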


>
>> You also have a branded S10 NGZ on the server ?
>
> Indeedy.
>
>> I would defo do it this way
>>
>> https://docs.oracle.com/cd/E53394_01/html/E54752/gpolc.html#scrolltoc
> Thanks, will knuckle-down.
>
>> Yes, you have an outage, but not a long one.
>>
>> Think of it as a snapshot, declare a change freeze and when you
>> are ready switch the app over.
>
> No, I'm happy for the advice and to turn what was a pretty amateurish
> setup into something half decent, including pretty basic
> things like separate dev/test/live environs/zones that I never
> had before. This NG is the closest I get to looking over the
> shoulder of a pro.

It is (often) my day job :-). I do however often have "different" ways
of doing things, and not everyone always agrees with me :-)

>
>> I still don't see why you want to use a KZ? The point of a KZ
>> is to separate and allow multiple S11 kernels on the same platform.
>
> (I'm probably wrong on this too). KZ gets more virtualisation with
> little performance hit and without the setup complexity of LDoms.

I have not extensively played with KZ, lack of kit and lack of use case.
I view it as LDoms for x86 :-)

To me the main use would be when you have applications that are
dependent on a specific Solaris 11 version, so that when the GZ has an
upgrade it is left alone.

I like LDoms, I don't see them as complex, bit fiddly initially, but
once you have set up the CDom you are away (and again OpsCenter works
well with them)

>
>> You have a S10 branded zone, leave it like that.
>
> Good advice which I'm going to take. :)
>
>>> [Plan-A seemingly more a "big bang" than incremental approach]
>>>> This sounds like you have the "wrong end of the stick" as we say in
>>>> the UK.
>>>
>>> As Ruth said to Brian about Lillian.
>
> Shame. Was hoping you were an Archers fan ;).

You had lost me with that. :-)

So, you are UK based then ?

I'm up between Manc and Liverpool

Casper H.S. Dik

Apr 17, 2017, 5:31:10 AM
YTC#1 <b...@ytc1-spambin.co.uk> writes:

>>>>>> 2) convert the branded S10 zone into a S10 kernel zone
>>>> I mis-spoke. I meant a S10 zone nested within a S11 KZ.
>>>
>>> Why do I feel frightened by this ? :-)
>>
>> The docs actually say it can't be done if I read them correctly.

>A quick glance, and I failed to find any reference to it. And I still
>can't see why anyone would want to, as the S10 brand is already separated
>from the S11 OS.


>I'm sure Casper will be along to explain how, and maybe why it would be
>a good idea (or not).

You can create a Solaris 10 branded zone in a Solaris 11 *kernel* zone.

Much software will work when deployed natively under Solaris 11 and
certain things do NOT work in Solaris 10 branded zones (specifically,
any device drivers which aren't supported in Solaris 11 or device
drivers which do not work in non-global zones).

Casper

YTC#1

Apr 17, 2017, 5:38:26 AM
On 17/04/2017 09:55, Unknown wrote:
> Good morning all,
>
> On Sun, 16 Apr 2017 12:57:07 +0100, YTC#1 wrote:
>
>> On 16/04/2017 11:46, Unknown wrote:
>>> On Sun, 16 Apr 2017 09:34:53 +0100, YTC#1 wrote:
>>>> On 15/04/2017 22:26, Unknown wrote:
>>>>> On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:
>
>> Follow this guide
>> https://docs.oracle.com/cd/E53394_01/html/E54752/gpohk.html
>
> Will some kind soul help me understand the 2017 docs?
>
> Ch. 7. "Scanning the Source System With zonep2vchk" shows, at
> Step 5, a zonecfg script file generated on the SOURCE system and
> saved to "s11-zone.config"
>
> Ch. 7. "How to Configure the Zone on the Target System" describes
> in the introductory paragraph the config file being named
> "s11-zone-config.uar" and goes-on to illustrate use of the command
> "# zonecfg create -a s11-zone-config.uar" on the TARGET that
> derives a config from the stored UAR archive.
>
> I'm confused. I'd expected on the TARGET system to read-in the
> zonecfg generated on the SOURCE system as well as the UAR saved
> from the SOURCE system with explanation as to resolving clashes.
>

The output from zonep2vchk -c is meant to be read in on a GZ with zonecfg
-z <zone> -f <file>
(after you have edited it)

This is to create a zone on the target system before running zoneadm install
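
i.e. something like (zone name and paths made up):

on the source:  # zonep2vchk -c > /net/filer/s11-zone.config
                ... edit s11-zone.config ...
on the target:  # zonecfg -z s11-zone -f /net/filer/s11-zone.config
                # zoneadm -z s11-zone install -a /net/filer/s11-zone.uar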



> zonecfg script generated by zonep2vchk at Step 5 "s11-zone.config"
> isn't subsequently used/referred-to! What am I missing?

The UAR is a separate entity, used for unwrapping the zone; it contains
zonecfg info. It is a *separate* process.
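
That route looks more like (again, names made up):

# zonecfg -z s11-zone create -a /net/filer/s11-zone.uar
# zoneadm -z s11-zone install -a /net/filer/s11-zone.uar

i.e. the config is derived from the archive rather than from the
zonep2vchk output.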

The doc could be clearer as it initially implies that they are both used.


>
> TIA

YTC#1

Apr 17, 2017, 5:55:59 AM
On 17/04/2017 10:31, Casper H.S. Dik wrote:
> YTC#1 <b...@ytc1-spambin.co.uk> writes:
>
>>>>>>> 2) convert the branded S10 zone into a S10 kernel zone
>>>>> I mis-spoke. I meant a S10 zone nested within a S11 KZ.
>>>>
>>>> Why do I feel frightened by this ? :-)
>>>
>>> The docs actually say it can't be done if I read them correctly.
>
>> A quick glance, and I failed to find any reference to it. And I still
>> can't see why anyone would want to, as the S10 brand is already separated
>> from the S11 OS.
>
>
>> I'm sure Casper will be along to explain how, and maybe why it would be
>> a good idea (or not).
>
> You can create a Solaris 10 branded zone in a Solaris 11 *kernel* zone.

Ta :-)

>
> Much software will work when deployed natively under Solaris 11 and
> certain things do NOT work in Solaris 10 bramded zones (specifically,
> any device drivers which aren't supported in Solaris 11 or device
> drivers which do now work in non-global zones.

But as the S10 zone is still a branded zone, inside a S11 KZ, surely
that is the case ?

Or am I missing something here ?

With the above use case I would go for an LDom setup, but from what you
are saying it is workable under KZ on x86 ?

>
> Casper

Unknown

Apr 17, 2017, 6:52:09 AM
'Morning YTC,

On Mon, 17 Apr 2017 10:38:23 +0100, YTC#1 wrote:
> On 17/04/2017 09:55, Unknown wrote:

> The output from zonep2vchk -c is meant to be read in on a GZ with
> zonecfg -z <zone> -f <file>
> (after you have edited it)
>
> This is to create a zone on the target system before running zoneadm
> install
>
> The UAR is a separate entity, used for unwrapping the zone; it
> contains zonecfg info. It is a *separate* process.
>
> The doc could be clearer as it initially implies that they are both
> used.

Well, the doc referring to the UAR as the "configuration file" certainly
confused me.

Thanks, more knuckling-down now required to get this working in what's
left of the BH.

K/R

Unknown

Apr 17, 2017, 6:52:20 AM
Good morning!

On Mon, 17 Apr 2017 10:26:51 +0100, YTC#1 wrote:

> On 16/04/2017 22:38, Unknown wrote:
>> On Sun, 16 Apr 2017 12:57:07 +0100, YTC#1 wrote:
>>
>>> On 16/04/2017 11:46, Unknown wrote:
>>>> On Sun, 16 Apr 2017 09:34:53 +0100, YTC#1 wrote:
>>>>
>>>>> On 15/04/2017 22:26, Unknown wrote:
>>>>>> On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:
>>>>>>
[Bank holiday in England]
> It was raining, and too chilly to be working on the motorbike in
> the garage :-)

Yes, compared with last weekend's sultry weather. I put-off doing
some rack jobs in the office for the same reason.

[What's the point of a S10 KZ even if it can be done by nesting
a S10 BZ in a S11 KZ]
> A quick glance, and I failed to find any reference to it. And I
> still can't see why anyone [Ed: in their right mind] would want to,
> as the S10 brand is already separated from the S11 OS.

At the risk of inviting incredulity, ridicule and scorn, I've
always wanted to resurrect the half-dozen licences of
Apple's MAE2 that in a moment of sheer folly I bought for an
infeasibly-large amount of money a decade and a half ago. It ran
fine on an old S8 Netra but I can't remember if it did on a subsequent
S10 V210.

> I'm sure Casper will be along to explain how, and maybe why it
> would be a good idea (or not).

Indeed the gentleman says you can run an S10 BZ in a S11 KZ.
MAE2 optionally installs AppleTalk kernel drivers and
requires /dev/random. Hopeless case? Possibly.

> What hardware ?

2 x T5120 -> 2 x T4-1

[Target Disk Config]
> 4 way mirror ?

The disks are only 300G so what I was after is what man zpool calls:
"two root vdevs, each a mirror of two disks". Put in an SR years
ago to do that on the tiny 160G drives on the 5120s and was told it
couldn't be done. Content that it can be done now.

>> 19:50:15 Failed Checkpoints:
>> 19:50:15 boot-configuration
>> 19:50:15 operation not supported on this type of pool
>
> I reckon that is down to the rpool config above; let it install on a
> single disk and add the mirror later

That slightly defeats the args you originally gave in favour of AI
(Plan-A). I thought that at the above stage, it had done all the
physical reconfig and was just transferring content from the UAR
into the pool. It shouldn't care how the pool's constructed.

>> Right. So as not to break Rule #1, I should P2V (like I had to to
>> create the S10 BZ). Thanks. Looks like I must take separate
>
> To P2V the GZ -> NGZ, and then the S10NGZ to the new server, yes.
>
> There is an option to exclude zones, and another to do zones only.

Yes found them, and took the individual zone archives overnight and
early this morning (and also invoked the options to explicitly
exclude dump and swap just in case it left them in).

> So, you are UK based then ?
> I'm up between Manc and Liverpool

You're a Southerner then. ;>. Was involved in a DCT project at a
major ISP whose "Northern" DC is in Knowsley and a non-IT related
project for a major bank with a processing centre in Manc. Hotelled
and got to know the cities better. Every British city has it's own
attractions and distinct culture. Of course, one has to go further
north to cross the threshold into true civilisation. ;>

Unknown

Apr 17, 2017, 7:35:01 AM
On Mon, 17 Apr 2017 09:31:06 +0000, Casper H.S. Dik wrote:

> You can create a Solaris 10 branded zone in a Solaris 11 *kernel* zone.
>
> Much software will work when deployed natively under Solaris 11 and
> certain things do NOT work in Solaris 10 branded zones (specifically,
> any device drivers which aren't supported in Solaris 11 or device
> drivers which do not work in non-global zones).
>
> Casper

Thank you Casper.

Minitab (for which I paid good money) stopped working after an SRU
update or patch to the S10 BZ. By that time T5120 had already lagged
in F/W and, seeing that KZs had been introduced, but only for later
SPARCs, I had seized on that as possible salvation. In the event it
turned out to be breakage in the SunOS 4 Binary Compatibility Package.
So Minitab resurrected, the motivation for KZ has mostly gone.

YTC#1

Apr 17, 2017, 7:49:22 AM
Yerbut, I'm still missing why that is better/different from running an
S10 BZ in the S11 GZ.

(Yes, I am admitting to not being 100% up on KZ and their use cases :-) )

> MAE2 optionally installs AppleTalk kernel drivers and
> requires /dev/random. Hopeless case? Possibly.
>
>> What hardware ?
>
> 2 x T5120 -> 2 x T4-1

Got a spare T4 ? I can't do KZ on my 5120 :-(

>
> [Target Disk Config]
>> 4 way mirror ?
>
> The disks are only 300G so what I was after is what man zpool calls:
> "two root vdevs, each a mirror of two disks". Put in an SR years
> ago to do that on the tiny 160G drives on the 5120s and was told it
> couldn't be done. Content that it can be done now.
>
>>> 19:50:15 Failed Checkpoints:
>>> 19:50:15 boot-configuration
>>> 19:50:15 operation not supported on this type of pool
>>
>> I reckon that is down to the rpool config above; let it install on a
>> single disk and add the mirror later
>
> That slightly defeats the args you originally gave in favour of AI
Ah, yes, this is because AI wants to do everything in a single boot.

I come from the world of JumpStart and am happy with the multiple boot
config approach.

When I use AI, as I said I use JET to drive it, that handles multiple
boot cycles for me and I can do any post rpool stuff later.

But I still treat it as a one step process, as all I do is type "boot
net:dhcp - install" and go and do something else :-)

> (Plan-A). I thought that at the above stage, it had done all the
> physical reconfig and was just transferring content from the UAR
> into the pool. It shouldn't care how the pool's constructed.

The rpool rules and a XML snippet are here
https://docs.oracle.com/cd/E53394_01/html/E54801/gitgn.html
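
For the two-pair layout you got, the target section should be roughly
this (your disk names; from memory, so check it against that page):

  <target>
    <disk in_zpool="rpool" in_vdev="mirror-0" whole_disk="true">
      <disk_name name="c0t5000CCA03C7197C8d0" name_type="ctd"/>
    </disk>
    <disk in_zpool="rpool" in_vdev="mirror-0" whole_disk="true">
      <disk_name name="c0t5000CCA03C70904Cd0" name_type="ctd"/>
    </disk>
    (two more disk elements like those for mirror-1)
    <logical>
      <zpool name="rpool" is_root="true">
        <vdev name="mirror-0" redundancy="mirror"/>
        <vdev name="mirror-1" redundancy="mirror"/>
      </zpool>
    </logical>
  </target>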

>
>>> Right. So as not to break Rule #1, I should P2V (like I had to to
>>> create the S10 BZ). Thanks. Looks like I must take separate
>>
>> To P2V the GZ -> NGZ, and then the S10NGZ to the new server, yes.
>>
>> There is an option to exclude zones, and another to do zones only.
>
> Yes found them, and took the individual zone archives overnight and
> early this morning (and also invoked the options to explicitly
> exclude dump and swap just in case it left them in).

Slow, isn't it?

flash archive (S10 and earlier) was faster
>
>> So, you are UK based then ?
>> I'm up between Manc and Liverpool
>
> You're a Southerner then. ;>. Was involved in a DCT project at a
> major ISP whose "Northern" DC is in Knowsley and a non-IT related

I've worked there in my pre and post Sun days. And while I was at Sun :-)

> project for a major bank with a processing centre in Manc. Hotelled

If it is in Manc, that will be the Co-op
If it is on the outskirts, then probably Barclays.

> and got to know the cities better. Every British city has it's own
> attractions and distinct culture. Of course, one has to go further
> north to cross the threshold into true civilisation. ;>
>

I can't argue with that. I still do some work up there and have many
friends and colleagues. Only last month I was helping out fitting masts
for the Pentlands community broadband.

cindy.sw...@gmail.com

Apr 17, 2017, 11:01:08 AM
I can't tell if you are making progress or not but I've seen this message: "operation not supported on this type of pool" when a root pool has gzip or lz4 compression enabled.
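
A quick way to rule it out on the pool AI built is:

# zfs get -r compression rpool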

Thanks, Cindy

Casper H.S. Dik

Apr 18, 2017, 4:26:30 AM
YTC#1 <b...@ytc1-spambin.co.uk> writes:

>On 17/04/2017 10:31, Casper H.S. Dik wrote:
>> Much software will work when deployed natively under Solaris 11 and
>> certain things do NOT work in Solaris 10 branded zones (specifically,
>> any device drivers which aren't supported in Solaris 11 or device
>> drivers which do not work in non-global zones).

>But as the S10 zone is still a branded zone, inside a S11 KZ, surely
>that is the case ?

But kernel drivers would need to be installed in the global zone
(that is the kernel zone itself)

>Or am I missing something here ?

>With the above use case I would go for an LDom setup, but from what you
>are saying it is workable under KZ on x86 ?

Not sure what you mean here? The best compatibility with Solaris 10
would be a native Solaris 10 install with the Solaris 10 kernel;
whether this is a LDom or bare metal wouldn't matter.

Solaris 10 branded zone runs on top of a Solaris 11 kernel; it then
remaps a number of system calls but things like /proc would look more
like Solaris 11 /proc (which would include cmdline, environ and
execname). But since Solaris 11 /proc is compatible, that doesn't
give any issues.

Still, if the code runs under Solaris 11 you'd be better off running
it directly under Solaris 11.

Casper

Casper H.S. Dik

Apr 18, 2017, 8:34:59 AM
Unknown <nos...@example.com> writes:

>[What's the point of a S10 KZ even if it can be done by nesting
>a S10 BZ in a S11 KZ]

Device drivers (pseudo device drivers, actual device drivers likely
can't find anything under a KZ)

>Indeed the gentleman says you can run an S10 BZ in a S11 KZ.
>MAE2 optionally installs AppleTalk kernel drivers and
>requires /dev/random. Hopeless case? Possibly.

/dev/random is available in Solaris 10.

Casper

Casper H.S. Dik

Apr 18, 2017, 8:37:29 AM
Yea, unfortunately the BCP (binary compatibility bits) are not always
properly tested. I think this already happened before I joined Sun,
when they changed the compiler (the BCP bits required a K&R cpp and
not a more modern one).

Casper

YTC#1

Apr 18, 2017, 4:30:04 PM
On 18/04/2017 09:26, Casper H.S. Dik wrote:
> YTC#1 <b...@ytc1-spambin.co.uk> writes:
>
>> On 17/04/2017 10:31, Casper H.S. Dik wrote:
>>> Much software will work when deployed natively under Solaris 11 and
>>> certain things do NOT work in Solaris 10 branded zones (specifically,
>>> any device drivers which aren't supported in Solaris 11 or device
>>> drivers which do not work in non-global zones).
>
>> But as the S10 zone is still a branded zone, inside a S11 KZ, surely
>> that is the case ?
>
> But kernel drivers would need to be installed in the global zone
> (that is the kernel zone itself)
>
>> Or am I missing something here ?
>
>> With the above use case I would go for an LDom setup, but from what you
>> are saying it is workable under KZ on x86 ?
>
> Not sure what you mean here? The best compatibility with Solaris 10

I am just querying why anyone would want to put a S10 branded zone inside
a S11 kernel zone; I am still failing to see any benefits from it.

As I said, if on a sun4v system, I would use an LDom and go native if
there were any driver issues.

> would be a native Solaris 10 install with the Solaris 10 kernel;
> whether this is a LDom or bare metal wouldn't matter.
>
> Solaris 10 branded zone runs on top of a Solaris 11 kernel; it then
> remaps a number of system calls but things like /proc would look more
> like Solaris 11 /proc (which would include cmdline, environ and
> execname). But since Solaris 11 /proc is compatible, that doesn't
> give any issues.
>
> Still, if the code runs under Solaris 11 you'd be better off running
> it directly under Solaris 11.
>
> Casper
>



Casper H.S. Dik

Apr 19, 2017, 3:15:37 AM
YTC#1 <b...@ytc1-spambin.co.uk> writes:

>I am just querying why anyone would want to put a S10 branded zone inside
>a S11 kernel zone; I am still failing to see any benefits from it.

I see your point; if you run Solaris 11 why add an additional layer
and why not run the Solaris 10 branded zone as part of the global zone.

Casper

YTC#1

Apr 19, 2017, 8:08:38 AM
Yep, sometimes just because you can does not mean you should :-)

Unknown

Apr 28, 2017, 6:55:53 PM
On Mon, 17 Apr 2017 08:01:04 -0700, cindy.swearingen wrote:

> On Saturday, April 15, 2017 at 12:44:20 PM UTC-6, Unknown wrote:
[...]
>
> I can't tell if you are making progress or not but I've seen this
> message: "operation not supported on this type of pool" when a root
> pool has gzip or lz4 compression enabled.
>
> Thanks, Cindy

Hi Cindy, I have a chance to circle back to work on this, this bank-
holiday weekend. To reply to your observation, thanks, but I'd
enabled neither compression, de-duplication nor encryption. I had
attempted to deploy, through ai, a recovery unified archive taken
on a T5120 to clone the global zone and the single NGZ, a branded
S10 zone, onto a T4-1. Excerpted, this is what happened:-

{0} ok boot net - install
Sun Apr 16 18:07:23 wanboot progress: miniroot: Read 267034 of 267034 kB
(100%)
Sun Apr 16 18:07:23 wanboot info: miniroot: Download complete
SunOS Release 5.11 Version 11.3 64-bit
Copyright (c) 1983, 2015, Oracle and/or its affiliates. All rights
reserved.
Remounting root read/write
Probing for device nodes ...
Preparing network image for use
Downloading solaris.zlib
...
Done mounting image
Configuring devices.
Hostname: solaris
Service discovery phase initiated
Service name to look up: T4-1
Service discovery over multicast DNS failed
Service T4-1 located at T5120:5555 will be used
Service discovery finished successfully
Process of obtaining install manifest initiated
Using the install manifest obtained via service discovery
...
Automated Installation started
...
18:10:09 9% target-selection completed.
18:10:10 10% ai-configuration completed.
18:10:10 10% var-share-dataset completed.
18:10:17 10% target-instantiation completed.
18:10:17 10% Beginning archive transfer
18:10:17 Commencing transfer of stream: c2da57d7-0d41-48a6-a052-fdefa6795101-0.zfs to rpool
...
19:49:56 Completed transfer of stream: 'c2da57d7-0d41-48a6-a052-fdefa6795101-0.zfs' from http://galaxy/galaxy_entire_2017-04-15.uar
19:49:58 Archive transfer completed
19:50:01 89% generated-transfer-760-1 completed.
19:50:01 90% apply-pkg-variant completed.
19:50:01 90% update-dump-adm completed.
19:50:01 90% setup-swap completed.
19:50:05 90% device-config completed.
19:50:10 91% apply-sysconfig completed.
19:50:10 91% transfer-zpool-cache completed.
19:50:13 96% boot-archive completed.
19:50:14 Setting boot title prefix from manifest value: 'desktop-be-43-recovery'
19:50:14 Error occurred during execution of 'boot-configuration'
checkpoint.
19:50:14 100% None
19:50:15 Failed Checkpoints:
19:50:15
19:50:15 boot-configuration
19:50:15
19:50:15 Checkpoint execution error:
19:50:15
19:50:15 operation not supported on this type of pool
19:50:15
19:50:15 Automated Installation Failed. See install log at /system/volatile/install_log


Regarding what progress I've made: I'm still in the mini-root boot
but have now successfully r/w mounted the zfs datasets. I have yet
to finalise the configuration then try rebooting (given ai's failure).

However, I learned here that replicating the T5120's GZ onto the T4-1
was a bad idea. So, what started as a P2P cloning morphed into a P2V,
i.e. having what had formerly been the GZ on the T5120 as a NGZ on the
T4-1 and installing a fresh, uncluttered GZ on the T4-1. That's turned
into a complete "start again", but using the archived, monolithic old
T5120 GZ as a starting point. (The S10 BZ can V2V 1-to-1 unchanged).

So, the new question is in two parts: i) what's a sensible minimum
zoning? and ii) what's the best way to configure the zones based on
the recovery UAR that I'd taken, or from the root pool from that UAR
that's now on-disk? Regarding i): the H/W migration is an opportunity
for separate dev/test/live. I think that implies test and live being
paired zones e.g. testdb + testwebserver, four zones so far; test and
live db zones having no public IP and all four zones using small-server
builds. The S10 zone has to be carried across, so that's now five zones
including the to-be-kept-uncluttered GZ on the T4-1. To keep to a minimum
number of zones, dev can be a single zone in which there's a db and
webserver and it can run desktop, so it's a no-brainer that the old
T5120 GZ virtualised as-is can serve as dev.

That leaves me several issues. The two db zones (test and live), being
internal-only, can't install and update packages from Oracle. So it looks
like I now suddenly also have to get up to speed with setting up one's
own repository; something I haven't previously needed to do.
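
From a first skim of the docs, I think the shape of it is this (paths and
repo host made up, untested):

# zfs create rpool/export/repo
# pkgrepo create /export/repo
# pkgrecv -s https://pkg.oracle.com/solaris/support/ -d /export/repo entire
# svccfg -s application/pkg/server setprop pkg/inst_root=/export/repo
# svcadm refresh application/pkg/server
# svcadm enable application/pkg/server

and then on the internal-only zones:

# pkg set-publisher -G '*' -g http://repohost/ solaris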

Secondly, from a pen test p.o.v., have I made a nonsense of live (and
test) consisting of two zones, DBs being non-publicly-facing-only, if
dev runs desktop-incorporation, has both webserver and db in the same
zone, and that general-purpose dev zone inevitably has a
publicly-facing i/f?

Suddenly I've gone from simply upgrading a legacy box to architecting
a minimum zoning and thereby addressing scheme. So it's beginning to
feel like a DIY job I regret having started.

Unknown

Apr 30, 2017, 3:14:51 PM
On Mon, 17 Apr 2017 12:49:21 +0100, YTC#1 wrote:
> On 17/04/2017 11:49, Unknown wrote:
>> Good morning!
>> On Mon, 17 Apr 2017 10:26:51 +0100, YTC#1 wrote:
>>> On 16/04/2017 22:38, Unknown wrote:
>>>> On Sun, 16 Apr 2017 12:57:07 +0100, YTC#1 wrote:
>>>>> On 16/04/2017 11:46, Unknown wrote:
>>>>>> On Sun, 16 Apr 2017 09:34:53 +0100, YTC#1 wrote:
>>>>>>> On 15/04/2017 22:26, Unknown wrote:
>>>>>>>> On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:

Hi YTC, bank hol again and time to circle back and pick-up where I got to.

[S10 KZ]
As I mentioned, I was clutching at straws when Minitab on S10 broke. I
didn't need to throw-away the 5120s for the promise of KZs on the T4-1.
All that was needed was to fix BCP.

>> MAE2 optionally installs AppleTalk kernel drivers and requires
>> /dev/random. Hopeless case? Possibly.
>>
>>> What hardware ?

MAE2 originally ran on an S8 Netra X1 bought new in 2002 from Esteem
for the best part of a grand. Can't recall if it ran on the V120 under S10.

>> 2 x T5120 -> 2 x T4-1
>
> Got a spare T4 ? I can't do KZ on my 5120 :-(

I do have a cold spare (open box). Contracts people said I soon wouldn't
be able to renew on 5120.

[Taking single-zone UARs)
> Slow isn't it.

You can say that again.
>
> flash archive (S10 and earlier) was faster
>>

[Haunts]
>>> I'm up between Manc and Liverpool
>>
>> project for a major bank with a processing centre in Manc. Hotelled
>
> If it is in Manc, that will be the Co-op If it is on the outskirts, then
> probably Barclays.

I couldn't possibly comment ;-)

> I can't argue with that. I still do some work up there and have many
> friends and collegues. Only last month I was helping out fitting masts
> for the Pentalnds community broadband.

Went to a vendors' day at a council hoping to sell into their city
broadband grand plans: the city turns over their street furniture for
installation of masts in return for rent, and suppliers provide a
municipal service. Couldn't see the numbers stacking up to help digital
exclusion though.

Hope you (and indeed others) will be around tomorrow when I try again.

Unknown

Apr 30, 2017, 3:25:32 PM
On Tue, 18 Apr 2017 12:34:56 +0000, Casper H.S. Dik wrote:
> Unknown <nos...@example.com> writes:
>>Indeed the gentleman says you can run an S10 BZ in a S11 KZ. MAE2
>>optionally installs AppleTalk kernel drivers and requires
>>/dev/random. Hopeless case? Possibly.
>
> /dev/random is available in Solaris 10.

Well, you (or anyone else for that matter) are welcome to a copy of it
if you fancy trying to get it going and reporting back on solving it.

(Apple gave skeleton keys to licensed Customers upon obsoleting it.)

YTC#1

May 1, 2017, 7:51:51 AM
On 30/04/2017 20:11, Unknown wrote:
> On Mon, 17 Apr 2017 12:49:21 +0100, YTC#1 wrote:
>> On 17/04/2017 11:49, Unknown wrote:
>>> Good morning!
>>> On Mon, 17 Apr 2017 10:26:51 +0100, YTC#1 wrote:
>>>> On 16/04/2017 22:38, Unknown wrote:
>>>>> On Sun, 16 Apr 2017 12:57:07 +0100, YTC#1 wrote:
>>>>>> On 16/04/2017 11:46, Unknown wrote:
>>>>>>> On Sun, 16 Apr 2017 09:34:53 +0100, YTC#1 wrote:
>>>>>>>> On 15/04/2017 22:26, Unknown wrote:
>>>>>>>>> On Sat, 15 Apr 2017 21:30:11 +0100, YTC#1 wrote:
>
> Hi YTC, bank hol again and time to circle back and pick-up where I got to.

I won't be responding much I'm afraid, I am prepping the motorbikes for
a 6 month trip to Mongolia (and back)
<snip>
>
> [Haunts]
>>>> I'm up between Manc and Liverpool
>>>
>>> project for a major bank with a processing centre in Manc. Hotelled
>>
>> If it is in Manc, that will be the Co-op If it is on the outskirts, then
>> probably Barclays.
>
> I couldn't possibly comment ;-)

Out them I say ! :-)

>
>> I can't argue with that. I still do some work up there and have many
>> friends and colleagues. Only last month I was helping out fitting masts
>> for the Pentlands community broadband.
>
> Went to a vendors' day at a council hoping to sell into their city
> broadband grand plans: the city turns over their street furniture for
> installation of masts in return for rent, and suppliers provide a
> municipal service. Couldn't see the numbers stacking up to help digital
> exclusion though.
>
> Hope you (and indeed others) will be around tomorrow when I try again.
>

Unlikely (see above)