
PVID not showing, PV not coming up


Goni

Aug 18, 2010, 5:47:45 PM
Hello Everyone :)
I am facing a strange issue: I have a VG that was supposed to be on 1
PV, but it is looking for 2 PVIDs.
===================
mpa018:/>lqueryvg -Atp hdiskpower0
Max LVs: 256
PP Size: 27
Free PPs: 647
LV count: 3
PV count: 2
Total VGDAs: 2
Conc Allowed: 0
MAX PPs per PV 1016
MAX PVs: 32
Quorum (disk): 1
Quorum (dd): 1
Auto Varyon ?: 1
Conc Autovaryo 0
Varied on Conc 0
Logical: 00c2e1de00004c00000001297abc9cb3.1 lv_sweapps 1
00c2e1de00004c00000001297abc9cb3.2 lv_websrvr 1
00c2e1de00004c00000001297abc9cb3.3 loglv00 1
Physical: 00c2e1de9e8218dc 0 4
00c4aa859f7af139 2 0
Total PPs: 808
LTG size: 128
HOT SPARE: 0
AUTO SYNC: 0
VG PERMISSION: 0
SNAPSHOT VG: 0
IS_PRIMARY VG: 0
PSNFSTPP: 4352
VARYON MODE: 0
VG Type: 0
Max PPs: 32512

===================
See PV count=2, and both PVIDs are shown under "Physical:". That part
is fine, but for the other disk I have, hdiskpower1 (which I believe
is the missing disk of this VG), I am not able to see or set any PVID.
chdev -l hdiskpower1 -a pv=yes reports that the PVID is changed, but
lspv still shows none.
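
One way I know to check whether a PVID is really on the disk (and not
just in the ODM) is to read the disk header directly. If I remember
correctly, something like this works (using hdiskpower1 here):
===================
# PVID as recorded in the ODM (may be stale)
odmget -q "name = hdiskpower1 and attribute = pvid" CuAt

# PVID as written in the disk header at offset 0x80
lquerypv -h /dev/rhdiskpower1 80 10
===================
If the two disagree, the ODM copy is stale.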

===================
mpa018:/>importvg -y appsvg -f hdiskpower0
PV Status: hdiskpower0 00c4aa859f7af139 PVACTIVE
00c2e1de9e8218dc NONAME
varyonvg: Volume group appsvg is varied on.
0516-510 synclvodm: Physical volume not found for physical volume
identifier 00c2e1de9e8218dc0000000000000000.
0516-548 synclvodm: Partially successful with updating volume
group appsvg.
0516-1281 synclvodm: Warning, lv control block of lv_sweapps
has been over written.
0516-1281 synclvodm: Warning, lv control block of lv_websrvr
has been over written.
0516-1281 synclvodm: Warning, lv control block of loglv00
has been over written.
0516-622 synclvodm: Warning, cannot write lv control block data.
0516-622 synclvodm: Warning, cannot write lv control block data.
0516-622 synclvodm: Warning, cannot write lv control block data.
appsvg
PV Status: hdiskpower0 00c4aa859f7af139 PVACTIVE
00c2e1de9e8218dc NONAME
varyonvg: Volume group appsvg is varied on.

===================

So it seems like we are missing 1 PV. But why am I not able to set
the PVID on the other disk? Is there a way to set this?
===================
mpa018:/emc>synclvodm appsvg
0516-510 synclvodm: Physical volume not found for physical volume
identifier 00c2e1de9e8218dc0000000000000000.
0516-548 synclvodm: Partially successful with updating volume
group appsvg.
0516-1281 synclvodm: Warning, lv control block of lv_sweapps
has been over written.
0516-1281 synclvodm: Warning, lv control block of lv_websrvr
has been over written.
0516-1281 synclvodm: Warning, lv control block of loglv00
has been over written.
0516-622 synclvodm: Warning, cannot write lv control block data.
0516-622 synclvodm: Warning, cannot write lv control block data.
0516-622 synclvodm: Warning, cannot write lv control block data.
===================
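
To see which PVID is missing, I compared what the VGDA expects with
what the host can see. Roughly (assuming I have the flags right):
===================
# PVIDs the VG descriptor area on the good disk expects
lqueryvg -p hdiskpower0 -Pt

# PVIDs actually visible on this host
lspv
===================
The VGDA lists 00c2e1de9e8218dc, but no disk on the host reports that
PVID.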

Goni

Goni

Aug 18, 2010, 5:50:42 PM
The storage is EMC, with PowerPath installed and managing the disks.
Around 4 servers are configured the same way. The funny thing is
that, on some of the servers, the VG won't vary on or import after a
reboot. But if I mask/add a new device to the server, everything
starts working perfectly. Is the ODM database corrupted?
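
If it helps, this is roughly how I have been checking the ODM entries
(not sure it's the right approach):
===================
# what the ODM has stored for the device
odmget -q "name = hdiskpower1" CuAt

# if the entries look wrong, an export/import cycle usually rebuilds them
varyoffvg appsvg
exportvg appsvg
importvg -y appsvg hdiskpower0
===================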

Hajo Ehlers

Aug 19, 2010, 9:10:19 AM


What are you doing?
- Do the servers access the same LUNs?
- Any kind of cluster software?
- Which PowerPath version is in use?
- Which failover mode?
- Have you verified the hdiskpower ODM settings, e.g. reserve_lock?

$ powermt display dev=all
is very helpful.
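
Roughly, to check the above:
===================
# PowerPath version
powermt version

# failover policy and path state
powermt display dev=all

# reserve_lock setting on each pseudo device
lsattr -El hdiskpower0 -a reserve_lock
===================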

cheers
Hajo

Goni

Aug 19, 2010, 5:16:38 PM
> - Do the servers access the same LUNs?
No, each LUN is masked to only 1 server.

> - Any kind of cluster software?

No. Standalone machines.

> - Which PowerPath version is in use?

PP 5.3 SP 1

> - Which failover mode?

Symmetrix Opt.

> - Have you verified the hdiskpower ODM settings, e.g. reserve_lock?

Well, I thought of that. I know reserve_lock should not be yes for
EMC devices, yet all of the devices have reserve_lock=yes.
mpa018:/>lsattr -El hdiskpower0
clr_q         yes                              Clear Queue (RS/6000)      True
location                                       Location                   True
lun_id        0x0                              LUN ID                     False
lun_reset_spt yes                              FC Forced Open LUN         True
max_coalesce  0x10000                          Maximum coalesce size      True
max_transfer  0x40000                          Maximum transfer size      True
pvid          00c2e1deb4d2ef970000000000000000 Physical volume identifier False
pvid_takeover yes                              Takeover PVIDs from hdisks True
q_err         no                               Use QERR bit               True
q_type        simple                           Queue TYPE                 False
queue_depth   16                               Queue DEPTH                True
reassign_to   120                              REASSIGN time out value    True
reserve_lock  yes                              Reserve device on open     True
rw_timeout    40                               READ/WRITE time out        True
scsi_id       0xa9b00                          SCSI ID                    False
start_timeout 180                              START unit time out        True
ww_name       0x50000974080e992c               World Wide Name            False

Could this be the issue?
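
If reserve_lock is the problem, I assume it can be turned off with
something like this (the device must be closed, i.e. the VG varied
off, for the change to take effect):
===================
chdev -l hdiskpower1 -a reserve_lock=no

# or, if the device is busy, defer the change until the next reboot
chdev -P -l hdiskpower1 -a reserve_lock=no
===================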

>
> $ powermt display dev=all

mpa018:/>powermt display dev=all
Pseudo name=hdiskpower1
Symmetrix ID=000292600065
Logical device ID=0263
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path               I/O Paths    Interf.    Mode    State  Q-IOs Errors
==============================================================================
   0 fscsi0                 hdisk3      FA  8fA    active  alive      0      0
   1 fscsi2                 hdisk5      FA 10fA    active  alive      0      0

Pseudo name=hdiskpower0
Symmetrix ID=000292600065
Logical device ID=0299
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path               I/O Paths    Interf.    Mode    State  Q-IOs Errors
==============================================================================
   0 fscsi0                 hdisk2      FA  8fA    active  alive      0      0
   1 fscsi2                 hdisk4      FA 10fA    active  alive      0      0

Hajo Ehlers

Aug 20, 2010, 4:52:36 PM
> Symmetrix Opt.
I only have experience with CLARiiON.

1) Unlikely but possible: AIX MPIO devices configured alongside
PowerPath. (I never tested what happens in that case.)
PowerPath configuration should have been done in such a way that:
a) all EMC disks have been removed
b) the EMC ODM package has been installed
c) PowerPath has been installed
d) the EMC configure script has been run

2) More likely, your second disk has a reservation left on it for
some reason.
In that case, first check on the EMC array and on all attached nodes
that no other node has Logical device ID=0299 in use or assigned to
it.

I think the EMC ODM package or the EMCpower base package has a tool
to reset the reservation.

Something like
/usr/lpp/EMC/Symmetrix/bin/emcpowerreset fscsiX hdiskpowerX
should break the reservation
Afterwards you should be able to access the disk.

Check with EMC whether the reservation should be yes or no. I would
prefer the 'no' setting, since the reservation is not needed (IMHO)
and might cause problems after a server reboot.

hth
Hajo


jthom...@yahoo.com

Sep 2, 2010, 12:09:38 PM

I don't think reserve_lock=no is necessary unless you are using
Oracle ASM or similar multi-node schemes. We have lately been having
issues with AIX 6.1 TL4 and PowerPath devices where the ECC,
Navisphere and MAR agents lock up an hdiskpower device, leaving it in
a PVMISSING state. The solution is to kill all the ECC, MAR and
Navisphere agents so the device is freed. (Not sure which
agent/process is the culprit.)
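
Roughly what we do:
===================
# find the EMC agent processes holding the device
ps -ef | egrep 'ecc|navi|mar' | grep -v grep

# kill the PIDs listed above, then retry the varyon
varyonvg appsvg
===================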
