
Solaris zones and containers


Jay Braun

Feb 4, 2020, 11:55:49 AM

I recently began to work with a project whose software runs on Solaris. The software is normally deployed on a Solaris Zone. At the application level, I see several ways in which container technology could benefit the project. Can multiple Solaris containers execute on a single Solaris zone?

I apologize if this is a no-brainer question, but I'm coming from an environment where we ran Docker containers on an AWS Linux VM, and the distinction between the VM and the container was very clear.

None

Feb 4, 2020, 9:39:49 PM

A Solaris Zone is a container, so how would you run containers within a
container? I think your real answer is to run multiple zones.

YTC#1

Feb 5, 2020, 4:55:04 AM

On 04/02/2020 16:55, Jay Braun wrote:
> I recently began to work with a project whose software runs on Solaris. The software is normally deployed on a Solaris Zone. At the application level, I see several ways in which container technology could benefit the project. Can multiple Solaris containers execute on a single Solaris zone?
>
> I apologize if this is a no-brainer question, but I'm coming from an environment where we ran Docker containers on an AWS Linux VM, and the distinction between the VM and the container was very clear.
>
As pointed out, an understanding of zones/containers is needed. And
maybe more info about what you are doing would help.

When you install Solaris, the OS at that point is known as the "Global
Zone"; it will often be called the GZ.

Each GZ can have many, many Non-Global Zones (NGZs, or just zones).
IIRC 8192 was the stated limit, but I believe it is well beyond that in
practice :-)
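
For illustration, a minimal sketch of creating an NGZ from the GZ (the
zone name web01 and the zonepath are made up; this assumes standard
Solaris 11 zonecfg/zoneadm syntax):

    # zonecfg -z web01
    zonecfg:web01> create
    zonecfg:web01> set zonepath=/zones/web01
    zonecfg:web01> set autoboot=true
    zonecfg:web01> verify
    zonecfg:web01> commit
    zonecfg:web01> exit
    # zoneadm -z web01 install
    # zoneadm -z web01 boot
    # zlogin -C web01

On Solaris 11 the install pulls packages from the GZ's configured
publishers, and zlogin -C attaches to the zone console for the
first-boot configuration.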

Then there are Kernel Zones (KZ) in S11.2 onwards... really cool.

A zone is often called a container; there was a lot of naming and
renaming back in the early days, but the phrase "a container is a zone
with resource allocations" helps with understanding.
https://docs.oracle.com/cd/E19044-01/sol.containers/817-1592/6mhahuotn/index.html
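
As a hedged sketch of the "resource allocations" part (same hypothetical
web01 zone; capped-cpu and capped-memory are standard zonecfg resources):

    # zonecfg -z web01
    zonecfg:web01> add capped-cpu
    zonecfg:web01:capped-cpu> set ncpus=2
    zonecfg:web01:capped-cpu> end
    zonecfg:web01> add capped-memory
    zonecfg:web01:capped-memory> set physical=4g
    zonecfg:web01:capped-memory> end
    zonecfg:web01> commit

With limits like these applied, the zone is what the older docs would
have called a container.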

Next we have "branded" zones. This is where we can run Solaris 8/9 zones
inside Solaris 10, or Solaris 10 zones on Solaris 11.
So an understanding of what you want and where you are will help.
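
For example, a Solaris 10 branded zone on a Solaris 11 host might be set
up roughly like this (the template name and archive path here are
assumptions for illustration; the archive is a flash/cpio image taken
from an existing Solaris 10 system, and -u runs sys-unconfig on it so it
can be reconfigured at first boot):

    # zonecfg -z legacy10
    zonecfg:legacy10> create -t SYSsolaris10
    zonecfg:legacy10> set zonepath=/zones/legacy10
    zonecfg:legacy10> commit
    zonecfg:legacy10> exit
    # zoneadm -z legacy10 install -u -a /net/images/s10-system.flar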

In S10 there are two types of zone (other than brands): whole root or sparse root.
I'll let you read up on that one.

Generally you can think of a zone as being a server in its own right,
just without (always) having direct access to its resources.
* So install all your apps etc. in the zone.
* Don't allow users access to the GZ.
* A GZ can have separate admins to an NGZ.

Patching is performed from the GZ (yes, I know some of it can be done
from the NGZ, and from a KZ), but as the NGZ has kernel dependencies on
the GZ it makes sense.
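
On Solaris 11 with IPS that looks something like the sketch below;
solaris-branded NGZs are linked images of the GZ image, so they are kept
in step with it (a simplified illustration, not the full procedure):

    # pkg update
    # zoneadm list -cv

After the update you boot into the new boot environment and the zones
come up at the matching level.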

Welcome to how the world should be :-)
(I'm sure Linux will catch up one day).

--
Bruce Porter
"The internet is a huge and diverse community but mainly friendly"
http://ytc1.blogspot.co.uk/
There *is* an alternative! https://www.libreoffice.org/

Grant Taylor

Feb 9, 2020, 4:24:01 PM

On 2/5/20 2:55 AM, YTC#1 wrote:
> Generally you can think of a zone as being a server in its own right,
> just not (always) direct access to it's resources.
> * So install all your apps etc in the zone.
> * Don't allow users access to the GZ.
> * A GZ can have separate admins to an NGZ

I agree that zones can be thought of as a server in their own right.
But I don't think the same can be said about containers. At least not
containers from outside of the Solaris Zone world.

Containers, as I understand them, are supposed to be /just/ the
application and its dependencies.  The rest of the OS isn't there.

I feel like this is a stark contrast to Zones, which, as you say, are
effectively a full server ~> OS in their own right.

> Patching is performed from the GZ (yes I know it some can be done
> from the NGZ, and KZ) but as the NGZ has kernel dependencies on the
> GZ it makes sense.

This also differs from containers outside of Solaris. Containers
outside of the Solaris world are blown away and replaced. They aren't
patched.

> Welcome to how the world should be :-)
> (I'm sure Linux will catch up one day).

Please elaborate on what you mean by these two statements.

I believe that Linux is capable of doing probably 80% (or more) of what
Solaris Zones can do. It's just that few people do it. But, I'd like
to know more specifically what you're referring to.



--
Grant. . . .
unix || die

YTC#1

Feb 10, 2020, 4:08:25 AM

On 09/02/2020 21:24, Grant Taylor wrote:
> On 2/5/20 2:55 AM, YTC#1 wrote:
>> Generally you can think of a zone as being a server in its own right,
>> just not (always) direct access to it's resources.
>> * So install all your apps etc in the zone.
>> * Don't allow users access to the GZ.
>> * A GZ can have separate admins to an NGZ
>
> I agree that zones can be thought of as a server in their own right. But
> I don't think the same can be said about containers.  At least not
> containers from outside of the Solaris Zone world.

I was only referring to Solaris zones/containers. The words are used to
mean the same thing.

>
> Containers, as I understand them, are supposed to be /just/ the
> application and it's dependencies.  The rest of the OS isn't there.

In the non Solaris world, maybe.

>
> I feel like this is a stark contrast to Zones, which, as you say, are
> effectively a full server ~> OS in their own right.

As above, I was pointing out that the words are used to mean the same
thing. Back when they came out the usage swung one way or another
depending on who was talking, and the two phrases are still used occasionally.

>
>> Patching is performed from the GZ (yes I know it some can be done from
>> the NGZ, and KZ) but as the NGZ has kernel dependencies on the GZ it
>> makes sense.
>
> This also differs from containers outside of Solaris.  Containers
> outside of the Solaris world are blown away and replaced.  They aren't
> patched.

Fine, but this is Solaris and it was a Solaris query.
However, zones can be treated in the same way providing you use a decent
installation tool.

>
>> Welcome to how the world should be :-)
>> (I'm sure Linux will catch up one day).
>
> Please elaborate on what you mean by these two statements.

Is it not obvious?
Solaris zones are still seen as being way ahead of Linux containers.
There was a short period of time when Docker was meant to appear on
Solaris and work with containers, but that never came to pass :-(

>
> I believe that Linux is capable of doing probably 80% (or more) of what
> Solaris Zones can do.  It's just that few people do it.  But, I'd like
> to know more specifically what you're referring to.

The OS separation, partitioning and isolation of resources, for one thing.
Being able to run branded zones, for another.

And have you seen kernel zones?

Fair enough, I am Solaris through and through, and can be a touch
biased. I find the isolation a zone provides preferable to the way I
understand Linux containers to work.

John D Groenveld

Feb 10, 2020, 8:41:58 AM

In article <r1pt8r$6td$1...@tncsrv09.home.tnetconsulting.net>,
Grant Taylor <gta...@tnetconsulting.net> wrote:
>This also differs from containers outside of Solaris. Containers
>outside of the Solaris world are blown away and replaced. They aren't
>patched.

The Linux containers that I have used are the distro OS sans kernel.
They are patched from the host or from within the container.

John
groe...@acm.org

John D Groenveld

Feb 10, 2020, 8:50:10 AM

In article <r1r6ht$brk$1...@dont-email.me>, YTC#1 <b...@ytc1.co.uk> wrote:
>The OS separation, partitioning and isolation of resources, for one thing.
>Being able to run branded zones for another.

I find the management of lx branded zones to be easier than that of
Linux containers as far as resource controls and networking go.
IMO Crossbow VNICs are more intuitive than the Linux vnet/virbr
counterparts.
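
As a sketch of the Crossbow side of that, adding a VNIC to an
already-configured zone (link and zone names are hypothetical; maxbw is
the built-in bandwidth-cap link property):

    # dladm create-vnic -l net0 vnic0
    # dladm set-linkprop -p maxbw=200M vnic0
    # zonecfg -z lxz1
    zonecfg:lxz1> set ip-type=exclusive
    zonecfg:lxz1> add net
    zonecfg:lxz1:net> set physical=vnic0
    zonecfg:lxz1:net> end
    zonecfg:lxz1> commit

The rest of an lx zone's configuration (brand, image source, etc.) is
platform-specific, so it is left out here.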

John
groe...@acm.org

YTC#1

Feb 10, 2020, 11:16:18 AM

TBH, I have not used the LX brand since it was dropped from Solaris :-(

John D Groenveld

Feb 10, 2020, 12:07:02 PM

In article <r1rvkd$vkf$1...@dont-email.me>, YTC#1 <b...@ytc1.co.uk> wrote:
>TBH, I have not used the LX brand since it was dropped from Solaris :-(

I have had occasion to use them on illumos-based OmniOS.
<URL:https://omniosce.org/info/lxzones.html>

Many thanks to Joyent for resurrecting them after Oracle abandoned them.
John
groe...@acm.org

Chris Ridd

Feb 10, 2020, 1:18:42 PM

It is unfortunate they haven't kept up to date wrt Linux kernel changes.
Joyent seem to prefer using bhyve zones instead of lx nowadays, which is
a shame.

--
Chris

John D Groenveld

Feb 10, 2020, 4:51:49 PM

In article <r1s6pr$d1k$1...@dont-email.me>, Chris Ridd <chri...@mac.com> wrote:
>It is unfortunate they haven't kept up to date wrt Linux kernel changes.
>Joyent seem to prefer using bhve zones instead of lx nowadays, which is
>a shame.

I have had good success running Linux and FreeBSD on bhyve branded
zones.
<URL:https://omniosce.org/info/bhyve_kvm_brand.html>
But I haven't benchmarked lx zone vs bhyve with a CentOS 7 Linux guest.

BTW FreeBSD now has CentOS 8 Linux-compatible jails.
<URL:https://www.freebsd.org/news/status/report-2019-10-2019-12.html>

John
groe...@acm.org

John D Groenveld

Feb 10, 2020, 5:01:20 PM

In article <r1r6ht$brk$1...@dont-email.me>, YTC#1 <b...@ytc1.co.uk> wrote:
>And have you seen kernel zones ?

What are the use cases for kernel zones?

John
groe...@acm.org

YTC#1

Feb 11, 2020, 4:28:48 AM

On 10/02/2020 22:01, John D Groenveld wrote:
> In article <r1r6ht$brk$1...@dont-email.me>, YTC#1 <b...@ytc1.co.uk> wrote:
>> And have you seen kernel zones ?
>
> What are the use cases for kernel zones?
>

I've had a couple of occasions where I just can't get an application to
work in S11.4, even after unfreezing obsolete stuff. On x86 it means I
can lock some resources to a zone and have it stuck at S11.3 while the
GZ is at S11.4.
(And in one case I have a zone at S11.3 SRU23 because I couldn't be
bothered upgrading Apache for a simple in-house Twiki :-) )

They are more akin to LDoms than branded zones.
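
Setting one up is short, roughly like this (the zone name is
hypothetical; this assumes the SYSsolaris-kz template from S11.2 onwards):

    # zonecfg -z kz1 create -t SYSsolaris-kz
    # zoneadm -z kz1 install
    # zoneadm -z kz1 boot
    # zlogin -C kz1

The KZ then runs its own kernel, which is what lets it sit at S11.3
while the GZ runs S11.4.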

Oh, and then there are immutable zones. So what if someone hacks in ?
They can't write stuff :-)
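
A hedged sketch of turning that on (web01 is hypothetical; the profile
names are the standard zonecfg ones):

    # zonecfg -z web01 set file-mac-profile=fixed-configuration
    # zoneadm -z web01 reboot

fixed-configuration still lets things like /var take logs, while strict
locks down effectively everything.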

Casper H.S. Dik

Feb 11, 2020, 9:42:46 AM

YTC#1 <b...@ytc1.co.uk> writes:

>On 10/02/2020 22:01, John D Groenveld wrote:
>> In article <r1r6ht$brk$1...@dont-email.me>, YTC#1 <b...@ytc1.co.uk> wrote:
>>> And have you seen kernel zones ?
>>
>> What are the use cases for kernel zones?
>>

>I've had a couple of occasions where I just can't get an application to
>work in S11.4, even after unfreezing obsolete stuff. On x86 it means I
>can lock some resources to a zone and have it stuck at S11.3 while the
>GZ is at S11.4
>(And in 1 case I have a zone at S11.3SRU23 because I couldn't be
>bothered upgrading Apache for a simple in house use Twiki :-) )

>They are more akin to LDoms than branded zones.

>Oh, and then there are immutable zones. So what if someone hacks in ?
>They can't write stuff :-)

Works for the global zone also (and thus for kernel zones)

Casper

Chris Ridd

Feb 11, 2020, 12:35:16 PM

On 10/02/2020 21:51, John D Groenveld wrote:
> In article <r1s6pr$d1k$1...@dont-email.me>, Chris Ridd <chri...@mac.com> wrote:
>> It is unfortunate they haven't kept up to date wrt Linux kernel changes.
>> Joyent seem to prefer using bhve zones instead of lx nowadays, which is
>> a shame.
>
> I have had good success running Linux and FreeBSD on bhyve branded
> zones.
> <URL:https://omniosce.org/info/bhyve_kvm_brand.html>
> But I haven't benchmarked lx zone vs bhyve with Centos7 Linux guest.

I'm sure they work well. They're just heavyweight compared to an lx zone.

--
Chris

YTC#1

Feb 12, 2020, 4:17:46 AM

Yeah, just don't make /var/tmp immutable, and forget you have done it :-)

Casper H.S. Dik

Feb 12, 2020, 8:10:41 AM

YTC#1 <b...@ytc1.co.uk> writes:

>On 11/02/2020 14:42, Casper H.S. Dik wrote:
>> YTC#1 <b...@ytc1.co.uk> writes:
>>
>>> On 10/02/2020 22:01, John D Groenveld wrote:
>>>> In article <r1r6ht$brk$1...@dont-email.me>, YTC#1 <b...@ytc1.co.uk> wrote:
>>>>> And have you seen kernel zones ?
>>>>
>>>> What are the use cases for kernel zones?
>>>>
>>
>>> I've had a couple of occasions where I just can't get an application to
>>> work in S11.4, even after unfreezing obsolete stuff. On x86 it means I
>>> can lock some resources to a zone and have it stuck at S11.3 while the
>>> GZ is at S11.4
>>> (And in 1 case I have a zone at S11.3SRU23 because I couldn't be
>>> bothered upgrading Apache for a simple in house use Twiki :-) )
>>
>>> They are more akin to LDoms than branded zones.
>>
>>> Oh, and then there are immutable zones. So what if someone hacks in ?
>>> They can't write stuff :-)
>>
>> Works for the global zone also (and thus for kernel zones)
>>

>Yeah, just don't make /var/tmp immutable, and forget you have done it :-)


That is, I think, only true in the "strict" profile.

Of course, we did make sure that libc and some other applications
do not use /var/tmp when it is not writable.

Casper

tim.wort

Feb 12, 2020, 11:26:28 AM

On Tuesday, February 4, 2020 at 11:55:49 AM UTC-5, Jay Braun wrote:
> I recently began to work with a project whose software runs on Solaris. The software is normally deployed on a Solaris Zone. At the application level, I see several ways in which container technology could benefit the project. Can multiple Solaris containers execute on a single Solaris zone?
>
> I apologize if this is a no-brainer question, but I'm coming from an environment where we ran Docker containers on an AWS Linux VM, and the distinction between the VM and the container was very clear.


This is a comment on the complete chain of replies; some of the information is dubious, so I want to clarify (hopefully) a few things. Just for background, I have followed this list for quite a few years but I seldom comment.
I am a contract instructor for both Sun and Oracle, with a lot of experience around Solaris.

The term "Containers" originally meant something to which you can apply resource controls. Sun Marketing (snarky comment here), in their wisdom, called Solaris 10 Zones containers, but the term originated with an unbundled Sun product (I think the name may have been different) called Resource Management. This product introduced the first "container", called "Projects".

This bundle was integrated into Solaris with the release of Solaris 9. As a little piece of trivia, you cannot log into Solaris without having a project, but that is a longer discussion.

So Marketing did correct this by dropping the term "Containers" with the release of Solaris 11, but the damage was done and here we are. :)

A quick (simplified) description of Zones: separate userland environments for workloads, with features I won't list because you are already aware of them. And, of course, you can apply resource controls to zones.

Docker containers in Linux are more aligned with Projects in Solaris; by no means the same thing, but the concept and purpose are similar (in my mind; I won't argue this point, and if you disagree, that's fine).

The statement that Kernel Zones are similar to an LDOM is correct. A KZ requires a lot more resources, particularly memory, than regular zones. However, you can have regular Zones in a Kernel Zone.

So, I would recommend Zones for applications (workloads) that you might have run with Docker in Linux. Zones have drawbacks, but that too is a much longer discussion.

In GENERAL you can size for zones by looking at the overall resources available against the resources required by the workloads, and then place the workloads in zones; you need not worry about the overhead of the zone itself, as it is minimal. You should try to balance resource utilization, so that not all of the high-I/O, high-memory, or high-CPU workloads are on the same server (Global Zone).

I wouldn't recommend Projects if you have never used them; they are not hard to use, but for some reason the learning curve seems to be a bit steep, YMMV.
You can use projects within a Zone.
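
A minimal sketch of a project with a resource control attached (the
project name, the rctl value, and the start command are made up for
illustration):

    # projadd -c "twiki workload" -K "project.max-lwps=(privileged,500,deny)" twikiproj
    # newtask -p twikiproj /usr/local/bin/start-twiki
    # prctl -n project.max-lwps -i project twikiproj

newtask starts the workload in the project, and prctl shows (or adjusts)
the controls on the running project.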

One last comment, lx zones were a proof of concept and not really intended for production use, but were pretty cool... IMO.

Grant Taylor

Feb 12, 2020, 11:35:02 AM

On 2/10/20 2:08 AM, YTC#1 wrote:
> In the non Solaris world, maybe.

Is chroot still a thing in the Solaris world now that zones are common?

> As above, I was pointing out that the words are used to mean the
> same thing. Back when they came out the usage swung one way or
> another depending who was talking, and the two phrases are still
> used occasionally.

Am I understanding you correctly that in the Solaris parlance, zone ≈
container. Thus Solaris meaning ≠ non-Solaris meaning?

> Fine, but this is Solaris and it was a Solaris query. However,
> zones can be treated in the same way providing you use a decent
> installation tool.

Technology can be used a lot of different ways.

How common is it to blow an NGZ away and ""deploy a new version of it vs
patching (upgrading) said NGZ?

> Is it not obvious ?

No. Hence my question.

> Solaris zones are still seen as being way ahead of Linux containers.

Please elaborate on /why/ Solaris zones are seen as being way ahead of
Linux containers. I'm specifically interested in /what/ is different
and /how/ that is significant.

> There was a shot period of time when docker was mean to appear on
> Solaris, and work with containers. But that failed to pass :-(

Interesting, and somewhat unsurprising given how Docker seems to want to
be everywhere.  My opinion of Docker notwithstanding.

> The OS separation,

Unfortunately, that's too generic for me to get any value out of.

> partitioning and isolation of resources, for one thing.

I believe that it's possible to use cgroups to restrict which resources
a ""container (in non-Solaris parlance) has access to.  I believe
there are even ways to control processor affinity to ensure that two
""containers can't interfere with each other.  I believe that similar
can be done with other resources.
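
Something like this raw cgroup v2 sketch, for instance (it assumes
cgroup v2 is mounted at /sys/fs/cgroup; the group name and the limits
are arbitrary):

    # echo "+cpu +cpuset +memory" > /sys/fs/cgroup/cgroup.subtree_control
    # mkdir /sys/fs/cgroup/demo
    # echo "0-1" > /sys/fs/cgroup/demo/cpuset.cpus
    # echo "50000 100000" > /sys/fs/cgroup/demo/cpu.max
    # echo "1G" > /sys/fs/cgroup/demo/memory.max
    # echo $$ > /sys/fs/cgroup/demo/cgroup.procs

That pins the shell (and its children) to CPUs 0-1, caps it at 50% of
one CPU, and limits it to 1 GiB of memory.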

> Being able to run branded zones for another.

I know it's a different methodology, but I suspect that User Mode Linux
— which allows running different kernels, older or newer — can provide
similar functionality to branded zones. I expect that this can be
extended to allow running CentOS 6 w/ a 4.x kernel on an Ubuntu host
running a 5.x kernel. (Or vice versa.)

Will it be as easy, or pretty as branded zones, no. Is similar
functionality possible, probably.
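
For what it's worth, booting a UML instance is a single user-space
command, something like the line below (the root filesystem image name
is made up; ubd0= and mem= are standard UML parameters, and ./linux is a
kernel built with ARCH=um):

    $ ./linux ubd0=centos6-rootfs.img mem=512M umid=uml-test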

> And have you seen kernel zones ?

I believe that a kernel zone would be quite similar to a UML kernel
running a different Linux distribution than the host.

> Fair enough, I am a Solaris through and through, and can be a touch
> biased.

I have no problem with biases as long as people are aware of the bias
and still willing to have polite discussions. :-)

I know that I'm biased towards Linux, but I'm trying to keep an open
mind and learn about other things. I have respect for Solaris and SPARC
hardware. Despite the last Solaris environment I was in being
administered like it was the late '90s. I see Solaris LDOMs as being
similar in concept to AIX LPARs, particularly with service domains being
analogs to VIOs, especially when there are multiple redundant service
domains / VIOs. I believe there is a LOT of capability there. I wish
more people took advantage of it.

> I find the concept of the isolation of a zone more likeable to the
> way I understand linux containers to work.

Can I ask that you elaborate on what you think each side of that
statement means?

Thank you for taking the time to reply.

Grant Taylor

Feb 12, 2020, 11:46:32 AM

On 2/10/20 6:50 AM, John D Groenveld wrote:
> I find the management lx branded zones to be easier than Linux
> containers as far as resource controls and networking.

Fair enough.

I think that container management is a steaming pile in a lot of places.
Part of that is probably because the term "container" means so many
different things.

> IMO Crossbow VNICs are more intuitive than the Linux vnet/virbr
> counterpart.

Fair.

I'd encourage you to look at the macvlan virtual interface. It sounds
more similar to Crossbow, save for bandwidth control. (Even the
bandwidth control can likely be done with QoS support.)
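
A small sketch of macvlan handed to a network namespace (interface,
namespace name and address are hypothetical):

    # ip link add link eth0 name mv0 type macvlan mode bridge
    # ip netns add web-ns
    # ip link set mv0 netns web-ns
    # ip netns exec web-ns ip addr add 192.0.2.10/24 dev mv0
    # ip netns exec web-ns ip link set mv0 up

Bandwidth capping would have to come from tc/QoS on top, as noted above.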

Grant Taylor

Feb 12, 2020, 11:59:00 AM

On 2/12/20 9:26 AM, tim.wort wrote:
> The term "Containers" original meaning is: something to which you can
> apply resource controls. Sun Marketing (snarky comment here) in there
> wisdom called Solaris 10 Zones containers but the term originated with
> a un-bundled Sun product (I think the name may have been different)
> was call Resource Management. This product introduced the first
> "container" called "Projects".

Is it fair to extrapolate that the container / resource control
technology was applied to zones?  Thus starts the telephone game where a
containerized zone devolves into a container?

> This bundle was integrated into Solaris with the release of Solaris
> 9, as a little trivia, you can not log into Solaris without having
> a project but that is a longer discussion.

Aside: Was the ~/.plan file automatically updated? }:-)

> So Marketing did correct this by dropping the term "Containers" with
> the release of Solaris 11 but the damage was done and here we are. :)
>
> A quick (simplified) description of Zones is separate user land
> environments for workloads. With features I won't list but you already
> are aware of those. And, of course, you can apply resource controls
> to zones.

Solaris Zones seem quite similar to Linux namespaces (pick and choose
which namespace as you see fit) to me.
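
For comparison, a zone-ish pick-and-choose with plain util-linux looks
roughly like this (needs root; the flags are the standard unshare ones):

    # unshare --fork --pid --mount-proc --uts --net --ipc /bin/bash

The shell then has its own PID table, hostname, and (empty) network
stack, which is about the closest namespace analogue to a zone's
isolation.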

> Dockers in Linux are more aligned with Projects is Solaris, by no means
> the same thing but conceptually and purpose are similar (in my mind,
> I won't argue this point, if you disagree, that's fine)

:-/

I'm curious to learn more about Projects. Or at least I was prior to
the comparison of Docker, which I dislike.

> The statement that Kernel Zones are similar to a LDOM is correct,
> KZ requires a lot more resources, particularly memory, than regular
> zones. However you can have regular Zones in a Kernel Zone.

To me, LDOM is a hardware partition concept. I'm guessing that the KZ
is a software partition concept, which runs inside of an LDOM. Is that
correct?

> So, I would recommend Zones for application (workloads) that you
> might have used with Dockers in Linux, Zones have drawbacks but that
> too is a much longer discussion.

I'd be curious to participate in ~> learn from that discussion.

> In GENERAL you can size for zones by looking at the overall resources
> available against the resources required by the workloads and then
> place them in zones, you need not worry about the overhead of the
> zone, it is minimal. You should try to balance resource utilization,
> so not all high IO, or high memory or CPU utilization workloads are
> on the same server. (Global Zone)

I believe that Linux ""containers are quite similar.

> I wouldn't recommend Project if you have never used them, they are
> not hard to use but for some reason the learning curve seems to be
> a bit steep, YMMV. You can use projects within a Zone.

vi(m)'s learning curve comes to mind.

> One last comment, lx zones were a proof of concept and not really
> intended for production use, but were pretty cool... IMO.

ACK

I could see how commercial support of things on lx branded zones could
become quite problematic.

YTC#1

Feb 12, 2020, 12:16:09 PM

On 12/02/2020 13:10, Casper H.S. Dik wrote:
> YTC#1 <b...@ytc1.co.uk> writes:
>
>> On 11/02/2020 14:42, Casper H.S. Dik wrote:
>>> YTC#1 <b...@ytc1.co.uk> writes:
>>>
>>>> On 10/02/2020 22:01, John D Groenveld wrote:
>>>>> In article <r1r6ht$brk$1...@dont-email.me>, YTC#1 <b...@ytc1.co.uk> wrote:
>>>>>> And have you seen kernel zones ?
>>>>>
>>>>> What are the use cases for kernel zones?
>>>>>
>>>
>>>> I've had a couple of occasions where I just can't get an application to
>>>> work in S11.4, even after unfreezing obsolete stuff. On x86 it means I
>>>> can lock some resources to a zone and have it stuck at S11.3 while the
>>>> GZ is at S11.4
>>>> (And in 1 case I have a zone at S11.3SRU23 because I couldn't be
>>>> bothered upgrading Apache for a simple in house use Twiki :-) )
>>>
>>>> They are more akin to LDoms than branded zones.
>>>
>>>> Oh, and then there are immutable zones. So what if someone hacks in ?
>>>> They can't write stuff :-)
>>>
>>> Works for the global zone also (and thus for kernel zones)
>>>
>
>> Yeah, just don't make /var/tmp immutable, and forget you have done it :-)
>
>
> That is, I think, only true in the "strict" profile.

IIRC, it was something I did on my desktop many years ago when we could
first do these things, and then forgot about it until I had an issue....
you had to remind me :-)

>
> Of course, we did make sure that libc and some other applications
> should not use /var/tmp when it is not writable.
>
> Casper
>



YTC#1

Feb 12, 2020, 12:34:54 PM

On 12/02/2020 16:35, Grant Taylor wrote:
> On 2/10/20 2:08 AM, YTC#1 wrote:
>> In the non Solaris world, maybe.
>
> Is chroot still a thing in the Solaris world now that zones are common?

You can still do it.... though I have not run it for over 10 years ...

>
>> As above, I was pointing out that the words are used to mean the same
>> thing. Back when they came out the usage swung one way or another
>> depending who was talking, and the two phrases are still used
>> occasionally.
>
> Am I understanding you correctly that in the Solaris parlance, zone ≈
> container.  Thus Solaris meaning ≠ non-Solaris meaning?

When zones were released the two names were being used synonymously. Hence
the statement about "containers being zones with resource <yadda>".

Now, generally, we just call them zones.

>
>> Fine, but this is Solaris and it was a Solaris query.  However, zones
>> can be treated in the same way providing you use a decent installation
>> tool.
>
> Technology can be used a lot of different ways.
>
> How common is it to blow a NGZ a way and ""deploy a new version of it vs
> patching (upgrading) said NGZ?

You would only do that for a non-OS application, as the patching is
still dependent upon the GZ (except for KZs and branded zones).

But yes, there is no reason why you cannot blow the entire zone away to
replace/renew an app. But it will come down to how the developers
want to achieve the required target.

This could be performed by Ops Center (or one of many other orchestrators).
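
In zoneadm terms a rebuild is short; a rough sketch (zone names are
hypothetical):

    # zoneadm -z web01 halt
    # zoneadm -z web01 uninstall -F
    # zoneadm -z web01 install

or, to stamp out a copy of a prepared "golden" zone (the target must
already be configured and the source halted):

    # zoneadm -z web02 clone goldzone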

>
>> Is it not obvious ?
>
> No.  Hence my question.
Fair enough

>
>> Solaris zones are still seen as being way ahead of Linux containers.
>
> Please elaborate on /why/ Solaris zones are seen as being way ahead of
> Linux containers.  I'm specifically interested in /what/ is different
> and /how/ that is significant.

Can we come back to that?
It may be easier for you to have a little google, given my bias :-)

I'd start here
https://www.oracle.com/technical-resources/articles/it-infrastructure/admin-zones-containers-virtualization.html

>
>> There was a shot period of time when docker was mean to appear on
>> Solaris, and work with containers. But that failed to pass :-(
>
> Interesting, and somewhat unsurprising given how Docker seems to want to
> be everywhere.  My opinion of Docker not withstanding.

I have no opinion on docker

>
>> The OS separation,
>
> Unfortunately, that's too generic for me to get any value out of
>
>> partitioning and isolation of resources, for one thing.
>
> I believe that it's possible to use cgroups to restrict which resources
> that a ""container (in non-Solaris parlance) has access too.  I believe
> there are even ways to control processor affinity to ensure that two
> ""containers can't interfere with each other.  I believe that similar
> can be done with other resources.

A processor, or a thread, can be assigned to a zone.
With a KZ, memory can be assigned and not used by other zones.
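
For example, dedicating whole CPUs to a zone is just another zonecfg
resource (zone name hypothetical; dedicated-cpu accepts a fixed count or
a range):

    # zonecfg -z db01
    zonecfg:db01> add dedicated-cpu
    zonecfg:db01:dedicated-cpu> set ncpus=2-4
    zonecfg:db01:dedicated-cpu> end
    zonecfg:db01> commit
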
>
>> Being able to run branded zones for another.
>
> I know it's a different methodology, but I suspect that User Mode Linux
> — which allows running different kernels, older or newer — can provide
> similar functionality to branded zones.  I expect that this can be
> extended to allow running CentOS 6 w/ a 4.x kernel on an Ubuntu host
> running a 5.x kernel.  (Or vice versa.)
>
> Will it be as easy, or pretty as branded zones, no.  Is similar
> functionality possible, probably.

I've not looked into that, so can't say.
(Quick read; useful site:
http://www.informit.com/articles/article.aspx?p=481866&seqNum=2)
Ah, UML is a process and not in the OS.

>
>> And have you seen kernel zones ?
>
> I believe that a kernel zone would be quite similar to a UML kernel
> running a different Linux distribution than the host.
Looks like it, but I believe the Solaris approach may be more performant.

>
>> Fair enough, I am a Solaris through and through, and can be a touch
>> biased.
>
> I have no problem with biases as long as people are aware of the bias
> and still willing to have polite discussions.  :-)

I have been known to swear ....

>
> I know that I'm biased towards Linux, but I'm trying to keep an open
> mind and learn about other things.  I have respect for Solaris and SPARC
> hardware.  Despite the last Solaris environment I was in being
> administered like it was the late '90s.  I see Solaris LDOMs as being
> similar in concept to AIX LPARs, particularly with service domains being
Yep, that is about right.
It gets interesting when you get to the bigger kit (M7/M8) and PDoms, so
that you have physical partitioning as well.

> analogs to VIOs, especially when there are multiple redundant service
> domains / VIOs.  I believe there is a LOT of capability there.  I wish
> more people took advantage of it.
>
>> I find the concept of the isolation of a zone more likeable to the way
>> I understand linux containers to work.
>
> Can I ask that you elaborate on what you think each side of that
> statement means?

It is my understanding that there is less isolation in Linux containers

YTC#1

Feb 13, 2020, 2:20:18 PM

On 12/02/2020 16:59, Grant Taylor wrote:
<snip>

>> The statement that Kernel Zones are similar to a LDOM is correct, KZ
>> requires a lot more resources, particularly memory, than regular
>> zones. However you can have regular Zones in a Kernel Zone.
>
> To me, LDOM is a hardware partition concept.  I'm guessing that the KZ
> is a software partition concept, which runs inside of an LDOM.  Is that
> correct?

Yes, a KZ can run inside an LDom, and then you can run NGZs inside the
KZ. So the OP was correct that you can run zones in zones in zones :-)

The biggest difference in a useful sense between LDom and KZ is that KZ
can be run on x86 hardware.

From the docs ('cos no point in rewriting stuff)
---8<
The administrative and structural content of a kernel zone is entirely
independent of the global zone. For example, a kernel zone does not
share software packaging with the global zone, or kernel zone host.
Package updates on the kernel zone host are not linked images and do not
affect kernel zones. Similarly, packaging commands such as pkg update
are fully functional from inside a kernel zone.
---8<
And
---8<
System processes are handled in the kernel zone's separate process ID
table and are not shared with the global zone. Resource management in
kernel zones is also different. Resource controls such as max-processes
are not available when configuring a kernel zone.
---8<

Another thing I like about zones is the ability to migrate them
(warm/cold/live), with the use of a single command.
(As well as home grown methods)

Not 100% sure that it is so straightforward with Linux?
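
As a sketch of what "a single command" means here (host and zone names
are hypothetical; the first form assumes an S11.3+ kernel zone being
live-migrated, the second is the older cold-move route for a native zone
once its zonepath has been moved across):

    # zoneadm -z kz1 migrate ssh://desthost

    # zoneadm -z web01 detach
    # zoneadm -z web01 attach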