For example:
# zpool replace mypool /DISK_ARRAY1/disk1 /DISK_ARRAY2/disk1
That will work, but "mypool" has close to 100 SAN disks. I suppose I
could write a script that matches each disk in DISK_ARRAY1 to its
counterpart in DISK_ARRAY2 and does the "zpool replace" for each
matching device.
However, I was hoping ZFS would allow me to create another ZFS pool
named, for example, "mypool2", mirror "mypool" with "mypool2", and once
the resilvering process was complete, break the mirror and rename
"mypool2" to "mypool".
The solution I am searching for would ideally not require any downtime
for "mypool" (i.e., no zpool export/import).
I am running Solaris 10 8/07.
The disk devices on DISK_ARRAY2 will be equal to or greater in size
than the disks in DISK_ARRAY1.
Current SAN Disks:
/DISK_ARRAY1/disk1
/DISK_ARRAY1/disk2
/DISK_ARRAY1/disk3
…
/DISK_ARRAY1/disk99
/DISK_ARRAY1/disk100
New SAN Disks:
/DISK_ARRAY2/disk1
/DISK_ARRAY2/disk2
/DISK_ARRAY2/disk3
…
/DISK_ARRAY2/disk99
/DISK_ARRAY2/disk100
TODAY
=======
# zpool status -v mypool
pool: mypool
state: ONLINE
scrub:
config:
NAME                     STATE     READ WRITE CKSUM
mypool                   ONLINE       0     0     0
  /DISK_ARRAY1/disk1     ONLINE       0     0     0
  /DISK_ARRAY1/disk2     ONLINE       0     0     0
  /DISK_ARRAY1/disk3     ONLINE       0     0     0
  ...
  /DISK_ARRAY1/disk99    ONLINE       0     0     0
  /DISK_ARRAY1/disk100   ONLINE       0     0     0
errors: No known data errors
HOPING FOR
===========
# zpool status -v mypool
pool: mypool
state: ONLINE
scrub:
config:
NAME                     STATE     READ WRITE CKSUM
mypool                   ONLINE       0     0     0
  /DISK_ARRAY2/disk1     ONLINE       0     0     0
  /DISK_ARRAY2/disk2     ONLINE       0     0     0
  /DISK_ARRAY2/disk3     ONLINE       0     0     0
  ...
  /DISK_ARRAY2/disk99    ONLINE       0     0     0
  /DISK_ARRAY2/disk100   ONLINE       0     0     0
errors: No known data errors
Just attach each disk on ARRAY_2 as a mirror of the corresponding disk
in ARRAY_1; once the resilver has completed, detach the ARRAY_1 disks
from the pool.
I think you can do this with the zpool attach command.
zpool attach mypool ARRAY_1/disk1 ARRAY_2/disk1
This should add ARRAY_2/disk1 as a mirror of ARRAY_1/disk1.
Then zpool detach mypool ARRAY_1/disk1 ...
You may have an issue, though, with the larger disks being added as
mirrors: I don't think the extra space will be available to the pool
until all the disks in the pool are at the greater capacity. I'm not
sure on this, however.
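Untested, but with the actual device paths from your post it would be
something like:

# zpool attach mypool /DISK_ARRAY1/disk1 /DISK_ARRAY2/disk1
(wait for the resilver to finish)
# zpool detach mypool /DISK_ARRAY1/disk1

then repeat for disk2 through disk100, or wrap it in a loop.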
When you use zpool attach/detach to attach larger disks, the pool
space is increased when the smaller disks are detached.
If you don't want to migrate the pool from ARRAY1 to ARRAY2 by using
either zpool replace or zpool attach disk-by-disk, then you could
build a new mypool2 with ARRAY2 and copy the data over.
After you are satisfied that the new pool is finished, destroy the old
pool.
If possible, you could rename the new pool by exporting it and
importing it with the mypool name.
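Roughly, something like this (the dataset names below are only
placeholders for whatever your layout looks like):

# zpool create mypool2 /DISK_ARRAY2/disk1 /DISK_ARRAY2/disk2 ... /DISK_ARRAY2/disk100
(add mirror or raidz groups here if you want ZFS-level redundancy)
# zfs snapshot mypool/data@migrate
# zfs send mypool/data@migrate | zfs receive mypool2/data
(repeat the send/receive for each file system; I don't believe the
8/07 release has a recursive send option)
# zpool destroy mypool
# zpool export mypool2
# zpool import mypool2 mypool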
Cindy
I was going to suggest creating a new pool and then send/recv the data
into it; however, creating mirrors with the new disks and then
detaching the old ones is how I would do it. I suppose it's up to the
admin; such is the power of zfs.
Thanks for the info on the larger disks, Cindy; I didn't know that.
That sounds like a disaster waiting to happen. Always give ZFS control
of the pool's redundancy. If you lose any of those devices, you lose
the pool and all its data.
> comprised of several disks in DISK_ARRAY1 to other comparable SAN disks
> in DISK_ARRAY2. I understand the usage of "zpool replace" and would
> like to avoid having to individually replace each disk from
> DISK_ARRAY1 to DISK_ARRAY2.
>
> For example:
>
> # zpool replace mypool /DISK_ARRAY1/disk1 /DISK_ARRAY2/disk1
>
Why don't you create a new pool on the new devices (with redundancy!)
and use zfs send/receive to replicate the data over?
You can easily rename the pools by exporting them and re-importing them
with a new name.
zpool export newpool
zpool import newpool mypool
--
Ian Collins
# mkfile 64m /var/tmp/foo1 /var/tmp/foo2 /var/tmp/foo3
# zpool create foobar /var/tmp/foo1 /var/tmp/foo2 /var/tmp/foo3
# mkfile 128m /var/tmp/bar1 /var/tmp/bar2 /var/tmp/bar3
# zpool attach foobar /var/tmp/foo1 /var/tmp/bar1
# zpool attach foobar /var/tmp/foo2 /var/tmp/bar2
# zpool attach foobar /var/tmp/foo3 /var/tmp/bar3
# zpool detach foobar /var/tmp/foo1
# zpool detach foobar /var/tmp/foo2
# zpool detach foobar /var/tmp/foo3
I don't see the additional space in zpool iostat until after I
export and import foobar:
# zpool export foobar
# zpool import -d /var/tmp foobar
Happy hacking!
John
groe...@acm.org
Hi Ian, thanks for taking the time to respond to my post. The
"mypool" ZFS pool is made up of 100+ RAID-5 protected devices from the
SAN. In my mind, there was no need to provide additional mirroring on
the host side if the devices I was using to build the pool were
already protected at the SAN.
What is the benefit of using "zpool attach" over "zpool replace"? The
way I understand it, "zpool replace" mirrors the old device to the new
one, and once it's resilvered, the old device automatically detaches
itself from the pool. Am I misunderstanding the concept?
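In other words, I'm picturing these two as ending up equivalent (using
my device paths):

# zpool replace mypool /DISK_ARRAY1/disk1 /DISK_ARRAY2/disk1

versus

# zpool attach mypool /DISK_ARRAY1/disk1 /DISK_ARRAY2/disk1
(wait for the resilver)
# zpool detach mypool /DISK_ARRAY1/disk1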
Thanks,
webjuan
Thanks Cindy. I think I will build a simple script that will list all
the SAN devices that make up the "mypool" ZFS pool and individually
replace the disks using "zpool replace" with the SAN devices in
DISK_ARRAY2. This beats doing all of them by hand. I tested it on a
lab server with 10 disks and it works fine; best of all, "mypool"
always remains online.
Thanks,
webjuan
Good info. I will have to test this out. Thanks John.
webjuan
I have seen metadata corruption at the zfs layer which can result in
pool loss. If zfs detects a problem and needs to rewrite something in
the pool, it HAS to have another zfs device within the pool to
replicate from.
Please do not get caught out by this as I have.
Hi John,
Which Solaris release is this?
I retested the attach/detach operation with disks on an upcoming
Solaris 10 release, but have also confirmed this behavior on previous
Solaris 10 releases.
An existing bug does prevent the correct behavior with zpool replace,
and you do have to import/export to see the expanded space.
See the output below.
Cindy
c1t226000C0FFA001ABd19 = 9 GB
c0t0d0 = 72 GB
# zpool create pool c1t226000C0FFA001ABd19
# zpool list pool
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
pool   8.75G    87K  8.75G    0%  ONLINE  -
# zpool attach pool c1t226000C0FFA001ABd19 c0t0d0
# zpool list pool
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
pool   8.75G    87K  8.75G    0%  ONLINE  -
# zpool detach pool c1t226000C0FFA001ABd19
# zpool list pool
NAME    SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
pool   68.3G   104K  68.3G    0%  ONLINE  -
That could turn into a very expensive mistake. If *anything* goes wrong
(random bit flip, noisy cable, switch hiccup), you're screwed.
At least use raidz (with no more than 8 devices in a vdev).
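For example, the first couple of groups would look something like this
(the grouping is purely illustrative):

# zpool create mypool2 \
    raidz /DISK_ARRAY2/disk1 /DISK_ARRAY2/disk2 /DISK_ARRAY2/disk3 /DISK_ARRAY2/disk4 \
          /DISK_ARRAY2/disk5 /DISK_ARRAY2/disk6 /DISK_ARRAY2/disk7 /DISK_ARRAY2/disk8 \
    raidz /DISK_ARRAY2/disk9 /DISK_ARRAY2/disk10 /DISK_ARRAY2/disk11 /DISK_ARRAY2/disk12 \
          /DISK_ARRAY2/disk13 /DISK_ARRAY2/disk14 /DISK_ARRAY2/disk15 /DISK_ARRAY2/disk16

and so on, keeping each raidz group at 8 devices or fewer.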
--
Ian Collins
Whoops, I'm running Nevada build 122 on my laptop.
> I retested the attach/detach operation with disks on an upcoming
> Solaris 10 release but have also confirmed this behavior on previous
> Solaris 10 releases.
I retested on a fully patched Solaris 10 system and it works as
expected.
John
groe...@acm.org
Starting in Nevada build 117, you have to set the autoexpand property
to see the expanded space. This way, you can control whether a LUN
is expanded or not.
Cindy
c2t2d0 = 68 GB
c0t1d0 = 136 GB
# zpool create pool c2t2d0
# zpool list pool
NAME   SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
pool    68G    99K  68.0G    0%  ONLINE  -
# zpool attach pool c2t2d0 c0t1d0
# zpool list pool
NAME   SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
pool    68G   110K  68.0G    0%  ONLINE  -
# zpool detach pool c2t2d0
# zpool list pool
NAME   SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
pool    68G  91.5K  68.0G    0%  ONLINE  -
# zpool set autoexpand=on pool
# zpool list pool
NAME   SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
pool   136G   116K   136G    0%  ONLINE  -