I have a disk configuration like this:
hdisk1 None None
hdisk2 None None
hdisk3 None None
hdisk4 None None
...
and so on
...
and
hdiskpower1 PVID vgname1
hdiskpower2 PVID vgname1
...
and so on
...
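That kind of listing can be pulled with lspv (physical volume name, PVID,
volume group), e.g.:

lspv                        # all physical volumes: name, PVID, volume group
lspv | grep hdiskpower      # only the PowerPath pseudo devices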
With powermt I can see that every hdiskpower contains 4 hdisks - OK.
Now I add 16 new LUNs, and after emc_cfgmgr and powermt config I see 16
new hdisks and 4 new hdiskpower devices - OK.
hdiskpower3 PVID None
hdiskpower4 PVID None
hdiskpower5 PVID None
hdiskpower6 PVID None
Every filesystem that I want to increase is spread over two hdiskpower
devices, for example:
lvname fsname 404 PPs hdiskpower1
lvname fsname 066 PPs hdiskpower2
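That breakdown can be checked with the usual LVM queries (lvname and
vgname1 as above):

lsvg -l vgname1             # every LV in the volume group, with size and mount point
lslv -l lvname              # which hdiskpower devices the LV's PPs sit on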
After increasing the filesystems, the "old" hdiskpower devices should be
removed, which means I want to migrate the part of a filesystem that
currently sits on hdiskpower2 to the new hdiskpower3. So I plan to do the
following:
extendvg vgname1 hdiskpower3
migratepv hdiskpower2 hdiskpower3
check if hdiskpower2 is empty after migrating
if so, reducevg vgname1 hdiskpower2
and remove hdiskpower2 with the PowerPath tools
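A minimal sketch of the LVM side of this, with the names from above
(the PowerPath removal itself is the part I'm not sure about):

extendvg vgname1 hdiskpower3        # add the new pseudo device to the VG
migratepv hdiskpower2 hdiskpower3   # move every PP off the old device
lspv -l hdiskpower2                 # should list no logical volumes any more
reducevg vgname1 hdiskpower2        # take the now-empty disk out of the VG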
Can this be done this way?
Thx in advance
Friedhelm
> extendvg vgname1 hdiskpower3
> migratepv hdiskpower2 hdiskpower3
> check if hdiskpower2 is empty after migrating
> if so, reducevg vgname1 hdiskpower2
>
> and remove hdiskpower2 with the PowerPath tools
>
> Can this be done this way?
Yes, this will work quite nicely. Before you remove hdiskpower2, don't forget
to make a note of which hdisks make up hdiskpower2. After you remove
hdiskpower2, you should remove those hdisks too.
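For example, something like this keeps a record of the mapping (the file
name is just a suggestion):

powermt display dev=hdiskpower2 > /tmp/hdiskpower2.paths   # underlying hdisks/paths
lsdev -Cc disk >> /tmp/hdiskpower2.paths                   # full disk inventory, for good measure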
--
Jurjen Oskam
Savage's Law of Expediency:
You want it bad, you'll get it bad.
It means, more or less:
First remove the hdiskpower device and the hdisks underneath it with rmdev
BEFORE you remove the LUN from the storage group.
Otherwise a simple lspv might crash your system ...
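So roughly in this order (the device names below are only examples):

rmdev -dl hdiskpower2       # remove the pseudo device on the host first
rmdev -dl hdisk5            # ... then each hdisk underneath it
# only after that: unmap the LUN from the storage group on the array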
cheers
Hajo
> Normally no need to use the EMC tools. An rmdev works fine.
More often than not this will work fine. But if you ever have any
problems with EMC devices, EMC will tell you that you should have
followed their documented procedures. Following their procedures is
much easier than the problem resolution after the fact.
Friedhelm Neyer wrote:
Think twice about that. Depending on which version of PowerPath you use, you
risk ODM corruption (or at least having EMC claim ODM corruption when
something barfs and the resulting cleanup requires a reboot). The best
practice, per most of the EMC docs spanning the versions we've had to
mess with, is:
inq | tee /tmp/inq.out # nice to just have this list beforehand
powermt remove dev=<power device to remove>
rmdev -dl <power device to remove>
rmdev -dl <each hdisk for the power device to remove>
-r
>> Normally no need to use the EMC tools. An rmdev works fine. But you
>> should read
>
> Think twice about that. Depending on which version of PowerPath you use, you
> risk ODM corruption (or at least having EMC claim ODM corruption when
> something barfs and the resulting cleanup requires a reboot). The best
> practice, per most of the EMC docs spanning the versions we've had to
> mess with, is:
>
> inq | tee /tmp/inq.out # nice to just have this list beforehand
> powermt remove dev=<power device to remove>
> rmdev -dl <power device to remove>
> rmdev -dl <each hdisk for the power device to remove>
There are several Powerlink articles which say you can remove PowerPath
devices with rmdev, without using powermt remove. I didn't check any other
documentation (such as whitepapers, etc.), but I expect rmdev pops up there
too.
Note that the documentation does state that you need to do a "powermt save"
after a PowerPath reconfiguration.
So I've done the following steps:
powermt remove dev=<hdiskpowerxx>
rmdev -Rdl <every hdiskpower which should be removed>
rmdev -Rdl <every underlying hdisk>
powermt save
... and everything runs fine.
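A few checks afterwards, just to be sure everything is really gone
(vgname1 as above):

powermt display dev=all     # the removed pseudo devices should no longer show up
lspv                        # no stale hdisk or hdiskpower entries left
lsvg -p vgname1             # the VG now only contains the disks it should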