
How to Patch and Manage Zones on Sun Cluster Solaris 10 Servers? ***BEGINNER QUESTIONS***


underh20

Feb 8, 2011, 8:29:57 PM
To Whom It May Concern,


I just inherited two Solaris 10 (release 09/10) servers which are
running Sun Cluster 3.3. Each server hosts a Sun Cluster node, i.e.,
one server is "cluster-1" and the other is "cluster-2".
"cluster-1" is online with two local zones, namely classical and
baroque. "cluster-2" is online with two local zones, namely romantic
and modern. Does anyone know the steps, or where to find technical
docs, for the following tasks?

Thanks again for your kind assistance. Bill

- Apply Solaris kernel patches to the global zone and local zones
on each of the two servers (nodes) in the Sun Cluster; reboot each
node after patching

- Move local zone "classical" and all its resources from online
cluster node "cluster-1" to offline cluster node "cluster-2"

- Move local zone "romantic" and all its resources from online
cluster node "cluster-2" to offline cluster node "cluster-1"

- Add 50 GB to the ZFS file system /data in local zone "classical"
on cluster-1 and expand it

- Add 100 GB of swap space to local zone "romantic" on cluster-2

# clrg status

=== Cluster Resource Groups ===

Group Name    Node Name    Suspended    Status
----------    ---------    ---------    ------
classical     cluster-1    No           Online
              cluster-2    No           Offline

baroque       cluster-1    No           Online
              cluster-2    No           Offline

romantic      cluster-1    No           Offline
              cluster-2    No           Online

modern        cluster-1    No           Offline
              cluster-2    No           Online

nelson

Feb 9, 2011, 11:14:42 PM
I'm assuming these are failover zones, is that correct?

1 - We use the zone upgrade-on-attach feature with Live Upgrade for
this sort of thing. Basically, switch the resource groups to another
node first. There's a document out on the web about the various
methods, and since you're already on 3.3, check doc 821-1256 for the
best fit for you. Here's roughly what we do, from the top of my head
(so I might be missing a step or two):
* clrg switch -n <node> <group> (or clnode evacuate <node>)
* edit /etc/zones/index so each zone's state reads "installed"
instead of "configured", and comment the entries out (there's a
document out there that says to detach the zone, but we couldn't get
that one to work)
* create an ABE with lucreate
* upgrade the ABE with luupgrade
* activate it with luactivate and init 6
* uncomment the /etc/zones/index entries and attach the zones with
upgrade on attach (see the sketch below)
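
For reference, here's roughly what that flow looks like end to end.
This is only a sketch: the BE name "patchBE", the patch directory
/var/tmp/patches, and the zone paths are placeholders I made up, so
check doc 821-1256 for the exact procedure before running any of it.

# evacuate resource groups and device groups from this node
clnode evacuate cluster-1

# in /etc/zones/index, set each local zone to "installed" and
# comment its entry out, e.g. change
#   classical:configured:/zones/classical
# to
#   # classical:installed:/zones/classical

# create an alternate boot environment and apply the patches to it
lucreate -n patchBE
luupgrade -t -n patchBE -s /var/tmp/patches <patch_id> [<patch_id> ...]

# activate the patched BE and reboot into it
luactivate patchBE
init 6

# after the reboot, uncomment the index entries and attach the
# zones with upgrade on attach
zoneadm -z classical attach -u
zoneadm -z baroque attach -u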

2 - I'm not sure I follow. If you want it offline, it won't be online
on any node (so just offline the resource group, a la 'clrg offline
classical'). If you want to switch it to node cluster-2 and online it
there, just 'clrg switch -n cluster-2 classical'.

3 - As above, either 'clrg offline romantic' or 'clrg switch -n
cluster-1 romantic'.
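
Putting 2 and 3 together for the layout in your clrg status output,
the moves you described would look something like this (group and
node names taken straight from your post):

# move "classical" from cluster-1 to cluster-2
clrg switch -n cluster-2 classical

# move "romantic" from cluster-2 to cluster-1
clrg switch -n cluster-1 romantic

# confirm both groups landed where you expect
clrg status classical romantic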

4 - I'm guessing you're adding a LUN to a zpool to expand it. If so,
'zpool add <poolname> <disk>' will make the space available to the
pool; the rest depends on whether or not you're using reservations or
quotas on the dataset. Also make sure the disk is visible on all
nodes (cldev is pretty handy there).
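
As a sketch, assuming the pool behind /data is named "data-pool" and
the new LUN shows up as c2t3d0 (both names are placeholders):

# first make sure every node can see the new disk
cldev list -v

# grow the pool by adding the new device
zpool add data-pool c2t3d0

# if a quota is capping the dataset, check it and raise or clear it
zfs get quota data-pool/data
zfs set quota=none data-pool/data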

5 - Is it using capped-memory in the zone config? If so, remember to
change it on all nodes.
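
If it is, the swap cap lives in the capped-memory resource, so a
quick zonecfg sketch for the 100g from your question would be
(repeat on every node that can host the zone):

zonecfg -z romantic
zonecfg:romantic> select capped-memory
zonecfg:romantic:capped-memory> set swap=100g
zonecfg:romantic:capped-memory> end
zonecfg:romantic> commit
zonecfg:romantic> exit

The new cap takes effect on the next zone boot; if you can't reboot,
prctl can adjust zone.max-swap on the running zone.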
