My first idea was to remove hdisk1 from the ODM and
use the unallocated space on hdisk0 to re-create some vital filesystems
from backups, but I cannot remove the mount points of the filesystems
left on the crashed disk:
0516-306 getlvodm: Unable to find user2vg in the Device
Configuration Database
0516-912 rmlv: Unable to remove logical volume user2a.
I also have problems changing the status of the filesystems and
of the paging space located on the crashed disk so that they are
forgotten at the next startup:
0516-010 lqueryvg: Volume group must be varied on; use varyonvg
command.
0516-010 lquerylv: Volume group must be varied on; use varyonvg
command.
0516-704 chlv: Unable to change logical volume user2a.
Has anyone encountered such a situation? What can I do when
I cannot define filesystems with the same names on other
disks, cannot remove them, and cannot make the system forget
them at the next reboot?
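For reference, this is the forced cleanup one would normally try first on a dead disk (a sketch only, using the `user2vg` and `hdisk1` names from above; the commands are AIX-only, so they are echoed rather than executed here, and in this situation they fail with exactly the errors shown because the ODM entries are inconsistent):

```shell
# AIX-only commands, echoed rather than executed; on a consistent
# system these would remove the failed disk's definitions.
for cmd in \
    'varyonvg -f user2vg' \
    'reducevg -d -f user2vg hdisk1' \
    'exportvg user2vg'; do
    echo "would run: $cmd"
done
```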
Thanks in advance
Roman Kanala
kan...@sc2a.unige.ch
Jan Muench wrote in response to my previous query:
>at first glance it looks like the VG was removed from the ODM
>before the PV.
Yes exactly.
> To fix it, you should take a look at this paper from IBM:
> Rebuilding a Volume Group's Customized Device Database
...
PV=/dev/ipldevice
VG=rootvg

# Delete the ODM entries for every logical volume in the VG.
lqueryvg -Lp $PV | awk '{ print $2 }' | while read LVname; do
    odmdelete -q "name = $LVname" -o CuAt
    odmdelete -q "name = $LVname" -o CuDv
    odmdelete -q "value3 = $LVname" -o CuDvDr
    odmdelete -q "dependency = $LVname" -o CuDep
done

# Delete the ODM entries for the volume group itself.
odmdelete -q "name = $VG" -o CuAt
odmdelete -q "parent = $VG" -o CuDv
odmdelete -q "name = $VG" -o CuDv
odmdelete -q "name = $VG" -o CuDep
odmdelete -q "dependency = $VG" -o CuDep

# rootvg is keyed in CuDvDr by its major number (10); other VGs by name.
if [ "$VG" = rootvg ]; then
    odmdelete -q "value1 = 10" -o CuDvDr
else
    odmdelete -q "value1 = $VG" -o CuDvDr
fi
odmdelete -q "value3 = $VG" -o CuDvDr

# Re-import the VG from disk and rebuild consistent ODM entries.
importvg -y $VG $PV   # ignore lvaryoffvg errors
varyonvg $VG
synclvodm -v $VG
savebase
> [ TechDocs Ref: 90605223414650 Publish Date: Feb. 01, 2000
> 4FAX Ref: 2418 ]
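The loop in that script hinges on `awk '{ print $2 }'` picking the LV name out of `lqueryvg -Lp` output, which lists one logical volume per line (an LV id, then the name). A quick stand-alone check of that extraction, with made-up sample lines since the exact output format is only assumed:

```shell
# Simulated `lqueryvg -Lp` output (the LV ids are invented):
printf '%s\n' \
    '00c8b12e00004c000000 hd5 1' \
    '00c8b12e00004c000001 hd6 1' |
awk '{ print $2 }'
# prints:
# hd5
# hd6
```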
Thank you very much, it worked. Without this it would have been
impossible to get rid of the hdisk1 LV and VG records in the ODM,
because the disk itself was dead.
Now I have a new hdisk1 and everything is running.
Roman Kanala