
S10 LU with SVM (RAID-1) volumes


Youri Podchosov

Jan 2, 2006, 3:28:37 PM

I just did an S10 -> S10U1 LU (SPARC), and the procedure for creating a
new SVM RAID-1 based BE described in
http://docs.sun.com/app/docs/doc/817-5505/6mkv5m1lq?a=view didn't work.

The original BE (S10 FCS) has /, swap and a couple of other (shared
between BEs) filesystems, all mirrored with SVM. The new BE (S10 U1) is
supposed to have the very same disk layout. According to the book, the
mirrored root device for the new BE is supposed to be created with
something like

lucreate -A 'mydescription' \
-m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t0d0s0,d1:attach \
-m /:/dev/dsk/c0t1d0s0,d2:attach -n another_disk

That didn't work: lucreate complains that it knows nothing about
/dev/md/dsk/d10 (which, indeed, doesn't exist yet) instead of creating
it as instructed. Only after the new mirror, d10, is fully configured by
hand (without newfs, of course) and the example above is reduced to
just

lucreate -A 'mydescription' \
-m /:/dev/md/dsk/d10:ufs -n another_disk

things work. LU certainly understands the SVM configuration (it
correctly determines the physical boot devices for both the old and new
BE, etc.), but the promised ability to perform simple volume management
operations seems to be missing.
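
For completeness, "fully configured by hand" means roughly the
following (a sketch reusing the submirror names d1 and d2 from the
example above; adjust device names to your own layout):

# one-way concats to serve as submirrors
metainit -f d1 1 1 c0t0d0s0
metainit -f d2 1 1 c0t1d0s0
# one-way mirror on d1, then attach the second half
metainit d10 -m d1
metattach d10 d2

Only then does the reduced lucreate accept /dev/md/dsk/d10 like any
other slice.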

It's been repeatedly stated in many documents, with command line
examples that look consistent, that current LU properly supports [not
too sophisticated] SVM configurations. So why didn't it work for me?

--
/ynp

Thomas Maier-Komor

Jan 2, 2006, 5:55:05 PM

I can't tell you why it didn't work for you, because there is too
little information.

But I can tell you what I usually do:

I usually have a layout like this:
two physical disks, e.g. c1t0 and c1t1 (or, even better, c2t0 on a
second controller); on each disk there is a slice for swap (slice 0),
root (slice 1), and SVM metadata (slice 7).

Then SVM has a submirror for each root and swap slice, and a mirror
each for swap and root.

e.g. d0 consists of d100 (c1t0d0s0) and d110 (c1t1d0s0)
d1 consists of d101 (c1t0d0s1) and d111 (c1t1d0s1)
c1t0d0s7 and c1t1d0s7 contain the metadata
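
For reference, that layout can be created with something like this (a
from-memory sketch, assuming the slices already exist):

$ metadb -a -f -c 2 c1t0d0s7 c1t1d0s7   # state database replicas
$ metainit -f d100 1 1 c1t0d0s0         # swap submirrors
$ metainit -f d110 1 1 c1t1d0s0
$ metainit -f d101 1 1 c1t0d0s1         # root submirrors
$ metainit -f d111 1 1 c1t1d0s1
$ metainit d0 -m d100                   # one-way mirrors
$ metainit d1 -m d101
$ metaroot d1                # point vfstab/system at the root mirror
$ metattach d0 d110                     # attach the second halves
$ metattach d1 d111          # (after a reboot, for the root mirror)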

for a live upgrade I break the mirror and preserve its data:

#### I guess the following line is the one most important to you:
#### you must name the mirror and its submirrors (at least one)
#### to get a valid setup.
lucreate -n new_be -m /:d2:ufs,mirror -m /:d111:detach,attach,preserve

Then I upgrade new_be and boot it.
If everything is fine, I delete the old boot environment:

$ ludelete old_be

clear the old mirror:
$ metaclear d1

and attach its submirror to the new root mirror:
$ metattach d2 d101
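
Afterwards, something like this confirms the resync is running and the
BE list is sane:

$ metastat d2
$ lustatus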

Works for me.

HTH,
Tom

Youri Podchosov

Jan 2, 2006, 6:32:47 PM

Tom,

thanks for your response. My understanding, however, is that your
scenario is just one of the possible ways to deal with mirror-to-mirror
upgrade. It's an advantage if you want to avoid the step of copying the
entire old BE: everything's already there when a detached submirror is
used as a basis for the new BE.

I'm talking about another scenario, where the target (new BE) mirrored
root device is to be built from scratch as part of the new BE creation
procedure. [Think of an upgrade from non-SVM BE to SVM-RAID-1 based BE:
you don't have a detachable submirror as a starting point in such a case.]

As to the insufficient information, what more needs to be told? The old
BE is completely patched, including the "laundry list" from InfoDoc
72099. The problem remains that lucreate wants the target mirror, which
(according to all the documentation) it is supposed to create, to exist
already. As soon as the mirror is made outside of the LU process, it
can be used by lucreate like any other regular slice. Does this behavior
somehow depend on the particular SPARC model, or on the physical disk/HA
configuration? Did it work for anybody as described? That's what's most
interesting to me.

Or am I not getting something essential here?

--
/ynp

Thomas Maier-Komor

Jan 2, 2006, 7:05:58 PM

Youri Podchosov wrote:
> [...]
> Or am I not getting something essential here?

Yes, the tagged lines, I would say.
The command I mentioned also works if you have a three-way mirror and
no boot environment set up. That is pretty similar to having no boot
environment and a mirror on the new boot environment.

But to make a long story short, look at example 8 of lucreate(1M). I
think this is very close to what you want...

Tom

Youri Podchosov

Jan 2, 2006, 8:17:20 PM

Well, example 8 in the lucreate(1M) man page is exactly what I wanted,
exactly what I quoted in my original message, and exactly what did not
work for me. Part 1 of it,

Example 8: Using Solaris Volume Manager Volumes

The command shown below does the following:

1. Creates the mirror d10 and establishes this mirror as
the receptacle for the root file system.

was exactly my problem: lucreate showed no intention to *create* the
requested mirror; it wanted it to already exist.

Anyway, to make this already-too-long story even shorter: I managed to
do all I needed, although in a slightly different way, and I can live
with it: not by the book, but it worked.

Thanks!

--
/ynp

Thomas Maier-Komor

Jan 2, 2006, 9:50:03 PM

Youri Podchosov wrote:
> [...] I managed to do all I needed, although in a slightly different
> way, and I can live with it: not by the book, but it worked.

But that's what the book ought to be for. I suppose you don't have any
log of what happened? Somebody might care... I only once had a problem
with Live Upgrade, when it refused to create a root mirror on slice 0.
I can't remember what the problem was back then.

Tom

Youri Podchosov

Jan 3, 2006, 1:23:18 AM

Why, I do have a log:


LiveUpgrade -- Sun Jan 1 12:37:29 EST 2006
+ 1> /opt/local/install/upgrade2.log
+ lucreate -c s10fcs -n s10u1 -A Solaris 10 Update 1 -m
/:/dev/md/dsk/d3:mirror,ufs -m /:/dev/dsk/c0t0d0s3,d103:attach -m
/:/dev/dsk/c0t2d0s3,d123:attach
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
ERROR: device </dev/md/dsk/d3> does not exist
ERROR: device </dev/md/dsk/d3> is not available for use with mount point </>
ERROR: cannot create new boot environment using file systems as configured
ERROR: please review all file system configuration options
ERROR: cannot create new boot environment using options provided


--
/ynp

Thomas Maier-Komor

Jan 3, 2006, 8:07:22 AM

Youri Podchosov wrote:
> [...]
> ERROR: device </dev/md/dsk/d3> does not exist
> ERROR: cannot create new boot environment using options provided

In my eyes this should have worked...
