
MPIO disks and paths


miles

Feb 21, 2007, 12:55:50 PM
Hi folks,

I'm getting terrible performance from my MPIO disks; they're on an HP
SAN. I think I know why, but I don't know how to change it. It looks
like 90%+ of my traffic is going through one of my two fibre adapters.

Here is some stuff from nmon:
fcs0 0.0 0.0 0.0 KB/s 0.0 14 FC Adapter
fcs1 104.6 9166.3 0.0 KB/s 108.5 14 FC Adapter

Here are the two disks:
│hdisk24   52%   4620   0 | RRRRRRRRRRRRRRRRRRRRRRRRRR>
│hdisk25   53%   4544   0 | RRRRRRRRRRRRRRRRRRRRRRRRRRR>

Notice the sum of transferred data matches nicely.

lscfg for fcs0 (this seems wrong to me -- I'd expect fscsi0 under here, not fscsi1)
+ fcs0     U7311.D20.107308A-P1-C01-T1   FC Adapter
* fcnet0   U7311.D20.107308A-P1-C01-T1   Fibre Channel Network Protocol Device
* fscsi1   U7311.D20.107308A-P1-C01-T1   FC SCSI I/O Controller Protocol Device
* hdisk2   U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-LC000000000000   HP MPIO Disk (Fibre)
* hdisk3   U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-LD000000000000   HP MPIO Disk (Fibre)
* hdisk14  U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-L0               HP MPIO Disk (Fibre)
* hdisk16  U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-L2000000000000   HP MPIO Disk (Fibre)
* hdisk17  U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-L3000000000000   HP MPIO Disk (Fibre)
* hdisk18  U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-L4000000000000   HP MPIO Disk (Fibre)
* hdisk19  U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-L5000000000000   HP MPIO Disk (Fibre)
* hdisk20  U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-L6000000000000   HP MPIO Disk (Fibre)
* hdisk21  U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-L7000000000000   HP MPIO Disk (Fibre)
* hdisk24  U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-LA000000000000   HP MPIO Disk (Fibre)
* hdisk25  U7311.D20.107308A-P1-C01-T1-W50060E80032A1421-LB000000000000   HP MPIO Disk (Fibre)

lscfg for fcs1 (this seems wrong to me -- I'd expect fscsi1 under here, not fscsi0)
+ fcs1     U7311.D20.107308A-P1-C06-T1   FC Adapter
* fcnet1   U7311.D20.107308A-P1-C06-T1   Fibre Channel Network Protocol Device
* fscsi0   U7311.D20.107308A-P1-C06-T1   FC SCSI I/O Controller Protocol Device
* hdisk4   U7311.D20.107308A-P1-C06-T1-W50060E80032A1431-L8000000000000   HP MPIO Disk (Fibre)
* hdisk15  U7311.D20.107308A-P1-C06-T1-W50060E80032A1431-L1000000000000   HP MPIO Disk (Fibre)
* hdisk23  U7311.D20.107308A-P1-C06-T1-W50060E80032A1431-L9000000000000   HP MPIO Disk (Fibre)

I also noticed that the max transfer size of the MPIO disks seems
small:
root@unxr1:/>lsattr -El hdisk24
PCM              PCM/friend/xparray                Path Control Module               False
PR_key_value     none                              Reserve Key                       True
algorithm        fail_over                         Algorithm                         True
clr_q            no                                Device CLEARS its Queue on error  True
hcheck_interval  60                                Health Check Interval             True
hcheck_mode      nonactive                         Health Check Mode                 True
location                                           Location Label                    True
lun_id           0xa000000000000                   Logical Unit Number ID            False
lun_reset_spt    yes                               SCSI LUN reset                    True
max_transfer     0x40000                           N/A                               True
node_name        0x50060e80032a1421                Node Name                         False
pvid             000d9f7f2e3f98540000000000000000  Physical Volume ID                False
q_err            yes                               Use QERR bit                      False
q_type           simple                            Queue TYPE                        True
queue_depth      16                                Queue DEPTH                       True
reassign_to      120                               REASSIGN time out                 True
reserve_policy   single_path                       Reserve Policy                    True
rw_timeout       60                                READ/WRITE time out               True
scsi_id          0x651d00                          SCSI ID                           False
start_timeout    60                                START UNIT time out               True
ww_name          0x50060e80032a1421                FC World Wide Name                False

But when I try to change it:
chdev -a max_transfer=100000 -l fscsi0 -P
Method error (/usr/lib/methods/chggen):
        0514-017 The following attributes are not valid for the specified device:
        max_transfer

max_transfer is not a valid option.

Here is the lspath info:
lspath -p fscsi1 -F'status name path_id parent connection' | sort -k 2 | grep hdisk24
Enabled hdisk24 0 fscsi1 50060e80032a1421,a000000000000
lspath -p fscsi0 -F'status name path_id parent connection' | sort -k 2 | grep hdisk24
Enabled hdisk24 1 fscsi0 50060e80032a1431,a000000000000


Does anyone know how to move MPIO disks from one path to another? Or
change the max_transfer size?
Thanks, Miles

Hajo Ehlers

Feb 22, 2007, 3:33:25 AM


As far as I can see, you are running in fail-over mode, so the system
works as configured. Also, the max transfer size is normally only
changed (increased) for tape devices.
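For what it's worth, max_transfer (like algorithm and reserve_policy) is an
attribute of the hdisk itself rather than of the fscsi device, so checking or
changing it would probably look more like this (0x100000 is only an example
value, not a recommendation):

lsattr -El hdisk24 -a max_transfer
chdev -l hdisk24 -a max_transfer=0x100000 -P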

hth
Hajo


Stanislaw Zakrzewski

Feb 27, 2007, 12:30:12 PM
"
reserve_policy single_path Reserve
> Policy True
"

Uhm ...

chdev -dev hdisk5 -attr reserve_policy=no_reserve

This command is from a VIO server, so the AIX one will look similar but slightly different.
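On plain AIX I would guess the equivalent is roughly:

chdev -l hdisk5 -a reserve_policy=no_reserve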

http://inetsd01.boulder.ibm.com/pseries/fr_FR/aixbman/admnconc/hotplug_mgmt.htm#mpioconcepts

Stan


miles

Mar 2, 2007, 8:44:33 AM
On Feb 27, 11:30 am, "Stanislaw Zakrzewski" <stanzak...@yahoo.com>
wrote:

> "
> reserve_policy single_path Reserve> Policy True
>
> "
>
> Uhm ...
>
> chdev -dev hdisk5 -attr reserve_policy=no_reserve
>
> This command is from VIO server so AIX one will look similar but different.
>
> http://inetsd01.boulder.ibm.com/pseries/fr_FR/aixbman/admnconc/hotplu...
>
> Stan

Can you be more specific about what changing the reserve_policy does?

Miles

Stanislaw Zakrzewski

Mar 2, 2007, 12:01:05 PM

>>
>> chdev -dev hdisk5 -attr reserve_policy=no_reserve
>>
>> This command is from VIO server so AIX one will look similar but
>> different.
>>
>> http://inetsd01.boulder.ibm.com/pseries/fr_FR/aixbman/admnconc/hotplu...
>>
>> Stan
>
> Can you be more specific about what changing the reserve_policy does?
>
> Miles

reserve_policy
"single_path" means the device can be accessed only by the initiator that
issued the reserve. This policy prevents other initiators in the same host or
on other hosts from accessing the device. It uses the SCSI-2 reserve to lock
the device to a single initiator (path), and commands routed through any other
path result in a reservation conflict. In other words, it prevents you from
connecting one disk to more than one controller at a time, so you cannot do
any load balancing.
"no_reserve" does not apply a reservation methodology for the device. The
device might be accessed by other initiators, and these initiators might be on
other host systems. This allows connecting the disk to more than one
controller at a time, so you can do load balancing.

algorithm
"fail_over" - sends all I/O down a single path; if that path fails, it sends
I/O through the other one.
"round_robin" - distributes the I/O across all enabled paths.

Set reserve_policy to no_reserve and algorithm to round_robin and that
should do it. The commands should be something like:
chdev -l hdisk5 -a reserve_policy=no_reserve
chdev -l hdisk5 -a algorithm=round_robin
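If the disk is already in an active volume group, the change will probably
fail with a device busy error; adding -P should defer it to the next reboot
(again, just a sketch):

chdev -l hdisk5 -a reserve_policy=no_reserve -a algorithm=round_robin -P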

One drive from the SAN appears as two hdisks (because each FC controller sees
the same drive separately), but you can see that it is the same drive by
checking the PVID and/or LUN number -- they will be the same on both hdisks.
You have to change those attributes on both disks.
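A quick way to confirm that two hdisks really are the same LUN (the disk
names here are hypothetical) is to compare their PVID and LUN ID:

lsattr -El hdisk5 -a pvid -a lun_id
lsattr -El hdisk6 -a pvid -a lun_id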

I have never done it on AIX itself -- I was setting up an environment under
two VIO partitions -- but since VIOS is in fact a sort of AIX, the concepts
and commands should be the same. If I am wrong, correct me.

Stan


Stanislaw Zakrzewski

Mar 2, 2007, 12:33:56 PM

>> "
>> reserve_policy single_path Reserve> Policy
>> True
>>

You can do everything through SMIT menu.
That's easy, you can do it :)
Have a good weekend.
Cheers


Cameron McRae

Mar 5, 2007, 10:07:57 AM
On Feb 21, 11:55 am, "miles" <my_spam_acco...@shaw.ca> wrote:
> Hi folks,
>
> I'm getting terrible performance from my MPIO disks; they're on an HP
> SAN. I think I know why, but I don't know how to change it. It looks
> like 90%+ of my traffic is going through one of my two fibre adapters.

From another of your posts I see you are using the XP 1024, so this
information won't apply to you directly, but it may help with your
load balancing question. I am experiencing exactly what you describe,
only on an HP EVA 8000. I have a lot of experience with AIX and IBM
storage, but recently I have begun to dabble with HP storage. I will
give you all the background I have so that this is complete for future
reference.

We have an HP EVA 8000 and in order to use AIX's native MPIO you need
to install a 'driver':

devices.fcp.disk.HP.hsv.mpio.rte  1.0.1.0  C  F  ODM definitions for HP Enterprise Virtual Array disk devices

I downloaded it by following the AIX links here:

http://h18006.www1.hp.com/products/sanworks/multipathoptions/index.html

It's not so much a driver as it is ODM definitions that allow AIX to
properly identify the HP disks:

hdisk2 Available 05-08-02 HP HSV210 Enterprise Virtual Array

Rather than "Other FC disk" as you will see if you've discovered the
LUNs prior to installing the fileset. The details and instructions
related to the HP EVA can be found here:

http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00373254/c00373254.pdf

In that document, static LUN-based load balancing is explained, which
appears to be the only load balancing that is supported. It's somewhat
unclear how you accomplish it, but it has to do with LUN priority (they
all start out with a priority of 1) and something along those lines. I'm
currently investigating implementing the load balancing described in the
documentation.

The fileset also installs a few utilities (which by default live in
/opt/hphsv/bin), including tools for listing the HBAs (lshba), displaying
the paths (hsvpaths) and LUNs (lshsv), and another one I just noticed this
morning as I was writing a rant to our storage guy:
user@host:/opt/hphsv/bin> ./lbhsv
usage: lbhsv -e <Node WWN|ALL> | -d <Node WWN|ALL>
    -e    Enable static load balancing for a single Node WWN or all WWNs
    -d    Disable static load balancing for a single Node WWN or all WWNs

I'm thinking perhaps something similar exists for the XP1024? 'lbhsv'
is not discussed in the document I linked to above. This is entirely
related to the HP EVA, but you could always send it to your HP rep and
get some information that way. Perhaps the utilities already exist. I
found this stuff by running lslpp -f against the fileset because the
install didn't tell me about any of this. It wasn't until later that I
discovered the documentation.
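For reference, listing what the fileset ships is just:

lslpp -f devices.fcp.disk.HP.hsv.mpio.rte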

You could also check out AntemetA, but I believe you have to pay for
it, and most people I've run into want to use native MPIO.

On another note, when I've wanted to test failover and such, I've made
the storage guy do it on the switch, or you can use the chpath command
that is part of AIX native MPIO to enable/disable paths.

Anyways, feel free to contact me directly if you wish to further
discuss my experiences with HP storage and AIX or want clarification
on anything. Heck, I may be contacting you, since we're considering
buying even higher-end HP storage. Oh, this is all on AIX 5.3 TL 05.

Hope this helps.

/cam


aix...@yahoo.com

Mar 5, 2007, 11:38:51 AM
> http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00373254/c00...

On my system, NMON reports stats for fscsi0 and fscsi1, then totals the two
and reports the total on fcs1, leading one to think fcs0 is inactive. What do
you report for fscsi0 and fscsi1?

--Adapter-I/O-Statistics--------------------------------------------------------
Name         %busy     read     write       xfers  Disks  Adapter-Type
fscsi0         0.0   10126.9    410.7 KB/s  238.7     46  FC SCSI I/O Controller
fscsi1         0.0   20020.2    813.3 KB/s  489.2     92  FC SCSI I/O Controller
ide0           0.0       0.0      0.0 KB/s    0.0      1  ATA/IDE Controller De
fcs1           0.0   30147.2   1224.0 KB/s  728.2     39  FC Adapter
sisscsia1      0.0       0.0      0.0 KB/s    0.0      4  PCI-X Dual Channel Ul
TOTALS   5 adapters   60294.4   2448.0 KB/s 1456.1    182  TOTAL(MB/s)=61.3

Cameron McRae

Mar 5, 2007, 12:38:06 PM
On Mar 5, 10:38 am, aixd...@yahoo.com wrote:
> On my system, NMON reports stats for fscsi0 and fscsi1, then totals the two
> and reports the total on fcs1, leading one to think fcs0 is inactive. What do
> you report for fscsi0 and fscsi1?
>
> [adapter I/O statistics snipped]

nmon reports nothing at all for fscsi0 and fscsi1.

Name         %busy   read   write       xfers  Disks  Adapter-Type
sisscsia1      0.0     0.0     0.0 KB/s    0.0      2  PCI-X Ultra320 SCSI A
fcs1           0.0     0.0     0.0 KB/s    0.0      2  FC Adapter
fcs0           0.0    43.5   182.1 KB/s   49.0      2  FC Adapter
TOTALS   3 adapters    43.5   182.1 KB/s   49.0      6  TOTAL(MB/s)=0.2

miles

Mar 6, 2007, 11:00:33 AM
On Mar 5, 9:07 am, "Cameron McRae" <cpmc...@gmail.com> wrote:
> [Cameron's reply about the HP EVA MPIO fileset and load balancing snipped]

Thank you for pointing me in the right direction; I have figured it
out.

How to set up manual load balancing on AIX with an HP XP1024:

Find the VG to fix:

lspv
hdisk2    000d9f7f2e408149    TSMSTM1vg1024    active
hdisk3    00cff9ddde67f786    TSMSTM1vg1024    active

The plan is to make fscsi0 the primary path for hdisk3 and leave hdisk2
primary on fscsi1 (priority 1 = preferred path):

            fscsi0         fscsi1
hdisk3      1 - primary    2
hdisk2      2              1 - primary

lspath

lspath -F'status name path_id parent connection' | grep -w hdisk2
Enabled hdisk2 0 fscsi0 50060e80032a1431,c000000000000
Enabled hdisk2 1 fscsi1 50060e80032a1421,c000000000000

lspath -F'status name path_id parent connection' | grep -w hdisk3
Enabled hdisk3 0 fscsi0 50060e80032a1431,d000000000000
Enabled hdisk3 1 fscsi1 50060e80032a1421,d000000000000

chpath -l hdisk2 -p fscsi0 -w "50060e80032a1431,c000000000000" -a priority=2
path Changed

chpath -l hdisk3 -p fscsi1 -w "50060e80032a1421,d000000000000" -a priority=2
path Changed

Verify

lspath -AHE -l hdisk2 -p fscsi0 -w "50060e80032a1431,c000000000000"
attribute  value               description  user_settable
scsi_id    0x661d00            SCSI ID      False
node_name  0x50060e80032a1431  Node Name    False
priority   2                   Priority     True

lspath -AHE -l hdisk2 -p fscsi1 -w "50060e80032a1421,c000000000000"
attribute  value               description  user_settable
scsi_id    0x651d00            SCSI ID      False
node_name  0x50060e80032a1421  Node Name    False
priority   1                   Priority     True

lspath -AHE -l hdisk3 -p fscsi0 -w "50060e80032a1431,d000000000000"
attribute  value               description  user_settable
scsi_id    0x661d00            SCSI ID      False
node_name  0x50060e80032a1431  Node Name    False
priority   1                   Priority     True

lspath -AHE -l hdisk3 -p fscsi1 -w "50060e80032a1421,d000000000000"
attribute  value               description  user_settable
scsi_id    0x651d00            SCSI ID      False
node_name  0x50060e80032a1421  Node Name    False
priority   2                   Priority     True


br...@earthlink.net

Mar 6, 2007, 11:35:16 AM

Cam is right about that kit from HP. You must load it -- it provides the
ODM entries that define the default and changeable attributes for the EVA
disks, and it also provides a few HP-supplied commands like lshsv, lshba,
rmhsv and hsvpaths.

By default, when AIX scans the adapters, etc. during a cfgmgr, it
enumerates the disks in ascending LUN order through the first adapter it
finds, which explains why all the disks are on the same (lowest fcsX
number) physical path after a reboot. Likely some cfgmgr details missing
here.....

If you download the 1.0.1.0 HP MPIO kit, then you can use path
priority to load balance. The AIX 5.3 System Management Concepts:
Operating System and Devices book has a little info on path priority
in the MPIO section. The highest priority is 1, so any path set to a
priority of 1 will be the preferred path. You can effectively load
balance using this feature, and you can also set which path is used next
for failover.

Another method that works sometimes is to use the chpath command to
disable the current path. This will force the hdisk to use an alternate
path, typically the next path instance in the list. Once the disk has
failed over, you can enable the previously disabled path, and you are now
using a different path and likely a different adapter, since the path
instances typically alternate between the adapters. However, *sometimes*
when you enable the previously disabled path, the hdisk will go back to
that path, so you have accomplished nothing.
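In command terms it would be something like this (using hdisk24 from earlier
in the thread as an example, and assuming its active path is on fscsi1):

chpath -l hdisk24 -p fscsi1 -s disable      (I/O fails over to the fscsi0 path)
chpath -l hdisk24 -p fscsi1 -s enable       (re-enable once the I/O has moved)
lspath -l hdisk24                           (check which paths are Enabled)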

The last method I know of is to set a preferred path for the vdisk in
Command View EVA on the SAN Appliance. If you choose a preferred
path, choose only the paths that are listed as "Path A Failover Only"
or "Path B Failover Only". This would at least allow you to have the
hdisks spread evenly across the available adapters but not across all
the available EVA ports.

Regarding performance, keep in mind that any EVA4/6/8000 XCS
(firmware) version earlier than XCS 6.000 will only use a queue depth
of 1 for the hdisks on AIX. This is due to a difference in
expectations when negotiating the link between the host and EVA through
the fabric. If you are using an EVA4/6/8000 with firmware XCS 5.x
the queue depth for all EVA hdisks will be set to 1 in AIX. As of XCS
6.000 or higher and the 1.0.1.0 HP MPIO kit, you can use any supported
queue depth.
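Checking the current value and (where supported) raising it would look
something like this -- hdisk24 and the value 16 are only examples:

lsattr -El hdisk24 -a queue_depth
chdev -l hdisk24 -a queue_depth=16 -P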

-Bret

David J Dachtera

Mar 6, 2007, 8:52:39 PM

Pardon my butting in here...

My shop will soon be moving from VMS to AIX. So, I'd like to pose a couple of
quick questions.

From what you've written above, I take it that upon a "mount" command AIX does
not attempt to switch the (disk) being mounted to the path with the least
currently "mount"-ed (volumes). (Sorry - some VMS terminology does not translate
directly to UN*X.)

Is this a correct conclusion?

By contrast, when we MOUNT our FC disks in VMS, the system will auto-switch them
to the least-used path of the available paths to each device at the time we
issue the MOUNT command.

Then, regarding manual path switching, does it take a third-party supplied
command utility such as the mentioned "chpath", or is something supplied with
AIX to achieve this?

The VMS equivalent would be SET DEVICE/SWITCH/PATH=PGcu:wwid device_name
(In VMS's command language, command keywords/parameters are separated by white
space, qualifiers are delimited by slash ("/").)

Also, do you perchance have any experience with EMC and AIX? Does EMC's
"PowerPath" provide utilities similar to what you mention from HP?

Forgive my AIX newbie-ness. I did UNIX back in 1986 for a year or three, and
more recently FreeBSD and Linux on PCs, but I've been doing VMS since 1983.

--
David J Dachtera
dba DJE Systems
http://www.djesys.com/

Unofficial OpenVMS Marketing Home Page
http://www.djesys.com/vms/market/

Unofficial Affordable OpenVMS Home Page:
http://www.djesys.com/vms/soho/

Unofficial OpenVMS-IA32 Home Page:
http://www.djesys.com/vms/ia32/

Unofficial OpenVMS Hobbyist Support Page:
http://www.djesys.com/vms/support/

br...@earthlink.net

Mar 6, 2007, 9:47:46 PM
David,

Nice to see both of my favorite operating systems talked about in the
same thread. I have been working with VMS for 20 years and AIX for 7
years. I am sure we have talked in the past at some point. To
answer your VMS-AIX questions...

When you mount a filesystem in AIX it can autoswitch to the best
path. There is a round_robin path algorithm that will distribute the
I/O to all the disk's paths. You can use the path priority I
mentioned earlier to control what paths are most used for I/O or set
all paths to the same priority and I/O will be evenly balanced across
all paths.

AIX MPIO also has a healthcheck function that will test alternate
paths and failed paths automatically at a user-selectable interval.

As to commands, chpath, etc. are native AIX commands. The HP MPIO kit
just creates some commands that provide a shortcut to viewing the data you
want when using an EVA. I do not know of a way to manually move the path
other than using chpath to fail the current path. I believe the issue
here is that once the AIX hdisk is part of a volume group and that volume
group is active, the attributes for the disk, including path information,
are locked in the ODM. If someone has a nice way to manually move a path
in AIX, I would be very interested in hearing about it.

There is a short but sweet writeup on MPIO in AIX here:

http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.doc/doc/base/aixinformation.htm

Select AIX Documentation -> System Management -> Operating System and
Device Management -> Multiple Path I/O

I hope you enjoy working with AIX as much as I have. Granted, you have
been spoiled by VMS clustering and built-in multipath for many, many
years, but AIX is a very nice UNIX in my humble opinion.

-Bret

miles

Mar 7, 2007, 8:50:15 AM
On Mar 6, 7:52 pm, David J Dachtera <djesys...@spam.comcast.net>
wrote:
> [quoted signature snipped]

The only thing I'll add is that using HP's XP1024 with AIX appears to
be different from using HP's EVA storage. We have an XP1024. Sorry, I
haven't used EMC storage; it was SSA (which I really liked) and a
FAStT before the XP1024.

The XP1024 driver/PCM does NOT support load balancing or round
robin. All you get is path failover. Hence the efforts I went through,
when I realized all my disks were on the same FC adapter, to try and
balance the load. Now I can read/write about 130-150 MB/s aggregate
across the two FC adapters. Much better performance.
