'Partitioning' raid controller for VMS on Integrity

Rich Jordan

Sep 8, 2021, 8:06:12 PM
VMS doesn't do partitions. Are there any SAS type RAID controllers for RX servers that can present 'partitioned' logical drives to VMS so it sees separate spindles?

Wondering if it is possible at all to f/ex build a 4 disk ADG array and present more than one spindle to VMS from it.

Thanks

Dave Froble

Sep 8, 2021, 8:52:19 PM
There may have been times when partitions were needed, but not today,
as I see it. What possible advantage might there be for partitions
today? It is still a single disk; lose it, and you lose all the
partitions on it.

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Arne Vajhøj

Sep 8, 2021, 9:05:37 PM
On 9/8/2021 8:49 PM, Dave Froble wrote:
> On 9/8/2021 8:06 PM, Rich Jordan wrote:
>> VMS doesn't do partitions.  Are there any SAS type RAID controllers
>> for RX servers that can present 'partitioned' logical drives to VMS so
>> it sees separate spindles?
>>
>> Wondering if it is possible at all to f/ex build a 4 disk ADG array
>> and present more than one spindle to VMS from it.
>
> There may have been times when partitions were needed.  But not today,
> as I see it.  What possible advantage might there be for partitions
> today?  It is still a single disk, and lose it, you lose all the
> partitions on it.

Maybe to work around the VMS maximum volume size limit?

Arne

dthi...@gmail.com

Sep 8, 2021, 9:24:26 PM
Yes, the P400 and P800 SAS controllers can RAID any or all of the drives together, and then you can partition the RAIDset into multiple logical drives.

I do that on my Integrity 2660 with a P800: an ADG (RAID 6) set across 7 SAS drives, cut up into multiple logical drives (DKx0, DKx1, DKx2).

The built-in RAID firmware can create the first logical drive, but you need to use MSA$UTIL to create the rest of the logical drives. Or you can just boot OpenVMS from a flash drive, then use MSA$UTIL to erase and re-RAID and/or cut the RAIDset up into multiple logical drives.

David

dthi...@gmail.com

Sep 8, 2021, 9:27:37 PM
I forgot to mention that you can do the same logical disk partitioning with SCSI on Integrity as well.

I have an rx2600 with a 6402 SCSI RAID controller, and have cut a large 7-drive RAIDset into DKx0, DKx1, and DKx2.

Bob Gezelter

Sep 9, 2021, 8:25:25 AM
Rich,
WADR, IMO the question is not "can a disk be partitioned?", but rather "can a disk be divided logically into several different file systems?"

The answer is undoubtedly "Yes", on two different levels.

As has been mentioned earlier in this thread, the last several generations of external storage arrays have supported "logical volumes" which are storage BLOBs of indeterminate composition (non-RAID, RAID-0, RAID-1, ... ).

Outside the hardware space, OpenVMS has multiple choices within a volume and across collections of volumes.

The original supra-volume solution was and is FILES-11 volume sets. Supported since the beginning, and still useful for a variety of purposes.
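For anyone who has not used them, a volume set is created at MOUNT time with /BIND. A minimal sketch, with illustrative device and label names:

$ INITIALIZE $1$DGA100: USER1
$ INITIALIZE $1$DGA101: USER2
$ MOUNT /SYSTEM /BIND=USERFILES $1$DGA100:,$1$DGA101: USER1,USER2

Files on USERFILES can then span both members; the flip side is that losing either member effectively loses the whole set.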

Host-based Volume Shadowing allows the creation of shadow-sets, including single volume shadow sets, which can be expanded dynamically for various purposes, e.g. volume changeover, volume expansion, and backup.
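A minimal sketch of the shadowing case (illustrative device names; assumes a Volume Shadowing license is loaded):

$ ! Create a single-member shadow set
$ MOUNT /SYSTEM DSA1: /SHADOW=($1$DGA100:) DATA1
$ ! Later, add a second member; a shadow copy brings it into step
$ MOUNT /SYSTEM DSA1: /SHADOW=($1$DGA200:) DATA1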

There is also the LD facility, which allows one to create virtual volumes within an overarching volume structure and can be used to effectively "partition" a disk volume: create a logical disk in a file, then make the file contiguous. I have used this "file system within a file system" to create disks with different cluster factors for different uses, e.g., one file system for sources, command files, and other user programs, with a relatively small cluster factor, side by side with a file structure whose cluster factor is more attuned to large (multi-gigabyte) database or other files.
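A minimal LD sketch of the container-file approach described above (file names, device names, and sizes are illustrative):

$ LD CREATE DKA100:[LD]SOURCES.DSK /SIZE=500000   ! 500000-block container file
$ LD CONNECT DKA100:[LD]SOURCES.DSK LDA1:
$ INITIALIZE /CLUSTER_SIZE=4 LDA1: SOURCES
$ MOUNT /SYSTEM LDA1: SOURCES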

One can also play mix-and-match with the facilities in various ways, depending upon your goal.

Outside of the Hobbyist world, it is a good idea to document what was done, and more importantly WHY it was done.

- Bob Gezelter, http://www.rlgsc.com

Phillip Helbig (undress to reply)

Sep 9, 2021, 9:01:42 AM
In article <4d714e26-1132-46f3...@googlegroups.com>, Bob
Gezelter <geze...@rlgsc.com> writes:

> Outside of the Hobbyist world, it is a good idea to document what was done,
> and more importantly WHY it was done.

Also a good idea in the hobbyist world. :-)

hb

Sep 9, 2021, 9:28:04 AM
On 9/9/21 2:25 PM, Bob Gezelter wrote:
> There is also the LD facility, which allows one to create virtual volumes within an overarching volume structure, which can be used to effectively "partition" a disk volume, e.g. create a logical disk, then make the file contiguous. I have used this "file system within a file system" to create disks ...

FWIW, $ ld help connect /lbn
...
/LBN=(START=x,END=y) or
/LBN=(START=X,COUNT=y)

Connect an LD device to a physical disk, starting at LBN x, with either
an end-lbn specified or a count. The ending lbn may not be beyond the
end of the physical device. The count plus the starting lbn may also
not be beyond the end. With this qualifier it is possible to partially
map a physical disk.


And yes, the VMS virtual disk driver, VD, could/can do this as well.
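So, per that help text, a subset of a physical disk can be presented as its own device; a sketch with illustrative LBN numbers:

$ LD CONNECT DKA0: LDA2: /LBN=(START=10000000,COUNT=2000000)
$ INITIALIZE LDA2: SCRATCH
$ MOUNT /SYSTEM LDA2: SCRATCH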

Simon Clubley

Sep 9, 2021, 1:41:10 PM
On 2021-09-09, Bob Gezelter <geze...@rlgsc.com> wrote:
>
> The original supra-volume solution was and is FILES-11 volume sets. Supported since the beginning, and still useful for a variety of purposes.
>
> Host-based Volume Shadowing allows the creation of shadow-sets, including single volume shadow sets, which can be expanded dynamically for various purposes, e.g. volume changeover, volume expansion, and backup.
>
> There is also the LD facility, which allows one to create virtual volumes within an overarching volume structure, which can be used to effectively "partition" a disk volume, e.g. create a logical disk, then make the file contiguous. I have used this "file system within a file system" to create disks with different cluster factors for different uses, e.g., one file system for sources, command files, and other user programs, with a relatively small cluster factor, side by side or within a file structure with a
> cluster factor more attuned to large (multi-gigabyte) database or other files.
>

Bob,

The VMS-hosted options you list above are all in the wrong direction
_if_ the goal is to make use of larger disk sizes than VMS supports.

What actually happens if you try to initialise a (say) 8 TB disk
with an ODS-2/ODS-5 filesystem?

Do you get a "disk is too big" error message?

Does VMS just use the first 2 TB of the volume and ignore the rest?

Does it try to create an 8 TB filesystem anyway, and then either hang or
create an invalid filesystem?

I've never tried it so I don't know.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

Stephen Hoffman

Sep 9, 2021, 3:51:39 PM
Recent array controllers can do this (logical drives distributed across
one or more arrays of physical devices), though the HP/HPE
documentation here seemed lacking.

Among these are the P410 and P812, and others.

You'll probably be using MSA$UTIL to establish this, and/or one of the
HP/HPE tools.

Getting disks in and out of the RAIDset can sometimes be a little
tricky to manage when a failure arises. Best to get extra HDDs or SSDs,
and pre-provision some into the arrays as spares.

http://h10032.www1.hp.com/ctg/Manual/c02289065.pdf

etc.


--
Pure Personal Opinion | HoffmanLabs LLC

Rich Jordan

Sep 9, 2021, 7:48:30 PM
Thanks to all who responded. The older system has 18 drives providing nine mirrored volumes, plus another 6 mirrored 'universal' drives providing backup staging space. They are hoping that we can put in an RX-2800 with eight larger SAS drives (which even RAIDed will add up to much more space), but due to the age and complexity of the software involved we really want to keep the same number of spindles/volumes visible to VMS, if possible without a second controller and more drives in a shelf.

If we have to merge more than one or two disks together, it will seriously increase the complexity of acceptance testing; everything should be cleanly controlled via logicals, but a lot of hands wrote a lot of software over 30+ years, so dropping to four large mirrorsets is not going to happen. I know we can do the work, but the time and cost...

We have an RX2660 we use for testing here; it has a P410i with the advanced license and some 1.2TB SAS drives (and four smaller ones). I can make an ADG array with the four large drives in ORCA, but it does not provide any means of 'breaking that down' into smaller volumes to pass to VMS. I tried (once, so I may have missed something) to use MSA$UTIL from the DVD boot of HP VMS V8.4, and it also doesn't seem to have options to break down that large disk (which is already visible to VMS). So if a P410 can do this (is a P410i different in that regard?), I should be able to test on the 2660 and confirm that it can be done on the potential RX-2800.

I'm installing a VMS kit on a local disk temporarily so I can see if the MSA$UTIL options are different when booting from a live disk. I've got the appropriate update kits available to install.

If SAUPDATE from the UEFI console is also capable of doing the volume setups, then I'm going to try to find that kit/download somewhere; HPE moved it all behind paywalls. I should have RX2620 firmware kits from when ours was under support, but I doubt I have that for the 2660 (which a customer let us have back when they retired it); no idea if we could use one kit for just that one utility on a different box.

Thanks again

Dave Froble

Sep 9, 2021, 9:28:21 PM
On 9/9/2021 7:48 PM, Rich Jordan wrote:

> Thanks to all who responded. The older system has 18 drives providing nine mirrored volumes, plus another 6 mirrored 'universal' drives proving backup staging space. They are hoping that we can put in an RX-2800 with eight larger SAS drives (which even RAID'ed will add up to much more space) but due to the age and complexity of the software involved we really want to keep the same number of spindles/volumes visible to VMS. If possible without a second controller and more drives in a shelf. If we have to merge more than one or two disks together it will seriously increase the complexity of acceptance testing; everything should be clearly controlled via logicals but a lot of hands wrote a lot of software over 30+ years, so dropping to four large mirrorsets is not going to happen. I know we can do the work, but the time and cost...
>
> We have an RX2660 we use for testing here; it has a P410i with the advanced license and some 1.2TB SAS drives (and four smaller ones). I can make an ADG array with the four large drives in ORCA, but it does not provide any means of 'breaking that down' into smaller volumes to pass to VMS. I tried (once, so maybe missed something) to use MSA$UTIL from the DVD boot of HP VMS V8.4, and it also doesn't seem to have options to break down that large disk (which is already visible to VMS). So if a P410 can do this (is a P410i different in that regard?) I should be able to test on the 2660 and confirm that it can be done on the potential RX-2800.
>
> I'm installing VMS kit on a local disk temporarily so I can see if the MSA$UTIL options are different when booting from a live disk. I've got the appropriate update kits available to install
>
> If SAUPDATE from the UEFI console is also capable of doing the volume setups then I'm going to try to find that kit/download somewhere; HPe moved it all behind paywalls. I should have RX2620 firmware kits from when ours was under support but I doubt I have that for the 2660 (which was a customer letting us have it back when they retired); no idea if we could use one kit for just that one utility on a different box.
>
> Thanks again
>

I don't know how your system(s) are configured, so this may not be helpful.

We use those "dratted logicals" to define disks. So we can move things
around, just with the logicals.

Say:

Disk0
Disk1
Disk2
Disk3

Are current. However, if we wanted to replace both Disk2 and Disk3 with
one larger disk, all we would do is have both logical names Disk2 and
Disk3 point to the new, larger disk.
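In DCL terms, a sketch of that changeover (logical and device names illustrative):

$ ! Before: two physical disks
$ DEFINE /SYSTEM /EXECUTIVE_MODE DISK2 $1$DGA2:
$ DEFINE /SYSTEM /EXECUTIVE_MODE DISK3 $1$DGA3:
$ ! After: both names resolve to the one new, larger disk
$ DEFINE /SYSTEM /EXECUTIVE_MODE DISK2 $1$DGA5:
$ DEFINE /SYSTEM /EXECUTIVE_MODE DISK3 $1$DGA5: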

There could still be issues. Say the old disks have identical directory
names that are used for different purposes. Or other issues unknown to me.

If you're not using logical names in a like manner, then perhaps your
solutions will be a bit more difficult.

abrsvc

Sep 10, 2021, 6:48:53 AM
The use of concealed devices can "fix" this too. I have successfully used these to put multiple "disks" on a single one.
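A sketch of that approach with concealed rooted logicals (device and directory names illustrative):

$ ! Two "disks" carved out of one volume
$ DEFINE /SYSTEM /TRANSLATION_ATTRIBUTES=(CONCEALED,TERMINAL) DISK2 DKA100:[PART2.]
$ DEFINE /SYSTEM /TRANSLATION_ATTRIBUTES=(CONCEALED,TERMINAL) DISK3 DKA100:[PART3.]
$ DIRECTORY DISK2:[000000]   ! really DKA100:[PART2...]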

Dan

Rich Jordan

Sep 10, 2021, 10:26:35 AM
The system was designed with logicals. The devs over the years were supposed to use said logicals, and the last major system change 10 or so years ago rooted out the mistakes, when the Integrity disks had different names (for which we also used concealed logicals). However, we want to avoid going through the effort needed to merge multiple current disks onto fewer larger ones. Yes, there are overlaps in the directory structures between the drives: old stuff, moved stuff, archived stuff, and the devs are not at all, or barely, available for questions.

This is not an issue of overall disk size (other than getting more space, which is easy since the largest current spindle is a 300GB mirrorset). If we 1.5x the size of every current disk (way more than enough given past growth), the seven data drives will still fit in the 2.4TB 4-drive ADG that I set up on the test box.

But we really want at least 7 spindles (8 or 9 would be better) with RAID protection, without adding an external shelf and a whole other set of disks. Partitioning at the controller level would do that.

We'll find out if MSA$UTIL from installed VMS (as opposed to a DVD boot) can do something with the P410i in that regard. I can wipe and rebuild the ADG array for any testing.

Rich

David Turner

Sep 10, 2021, 10:58:09 AM
For what it's worth, I have several customers now who have gone to the
P2000 external RAID arrays. They look kind of like the D2700 shelf, except
they have option modules for SAS, Fibre Channel, iSCSI, or all three.
I know you can partition disks on these.
Great web-based user interface too; easiest external box I have ever set up.

We stock them of course...

David


Rich Jordan

Sep 13, 2021, 7:07:32 PM
And the results are in, unless there are super secret MSA$UTIL commands that I don't have docs for. No go. The P410i apparently does not provide any means of generating individual disks that are a subset of a single RAID logical disk. Whether it's a single disk, a mirror pair, or a RAID set with multiple disks, only the single unit is presented to VMS.

We're going to look at the second controller and drive shelf to get 8 spindles; that will be enough to get by without major work. I hate losing the optical drive for that, but that can be worked around.

Interestingly... the 2660 has a SCSI card in it for use with a tape library; it is the same card that I understand was spec'd for the RX-2800s. I hooked an external disk cabinet to it. When I boot from the VSI DVD, the disks mount and I can copy to/from them (this is just staging; they cannot stay on the system). When I boot the freshly installed VMS loaded from that DVD, none of the disks in that cabinet will mount; they are visible to VMS but are declared offline, requiring operator intervention. I'll get patches and updates installed and do a little more testing.

DuncanMorris

Sep 14, 2021, 4:49:25 AM
The P410i has no problem presenting multiple units to OpenVMS via MSA$UTIL.
This example is from a colleague's recent update agenda for a client:

MSA> add unit 2/disk=(3,4,5,6,7,8)/raid=1/part=0/size=32GB ! Backup of Live system disk
MSA> add unit 3/disk=(3,4,5,6,7,8)/raid=1/part=1/size=8GB ! Copy of VSI VMS v8.4-2L3
MSA> add unit 4/disk=(3,4,5,6,7,8)/raid=1/part=2/size=20GB ! USER
MSA> add unit 5/disk=(3,4,5,6,7,8)/raid=1/part=3/size=10GB ! BATS
MSA> add unit 6/disk=(3,4,5,6,7,8)/raid=1/part=4/size=25GB ! DOCS
MSA> add unit 7/disk=(3,4,5,6,7,8)/raid=1/part=5/size=40GB ! DATA_1
MSA> add unit 8/disk=(3,4,5,6,7,8)/raid=1/part=6/size=40GB ! DATA_2
MSA> add unit 9/disk=(3,4,5,6,7,8)/raid=1/part=7/size=40GB ! INET_1
MSA> add unit 10/disk=(3,4,5,6,7,8)/raid=1/part=8/size=20GB ! INET_LOGS
MSA> add unit 11/disk=(3,4,5,6,7,8)/raid=1/part=9/size=150GB ! BACKUP

Rich Jordan

Sep 14, 2021, 2:04:09 PM
On Tuesday, September 14, 2021 at 3:49:25 AM UTC-5, DuncanMorris wrote:

> The p40i has no problem presenting multiple units with MSA$UTIL to OpenVMS.
> This example is from a recent colleague's update agenda for a client:
>
> MSA> add unit 2/disk=(3,4,5,6,7,8)/raid=1/part=0/size=32GB ! Backup of Live system disk
> MSA> add unit 3/disk=(3,4,5,6,7,8)/raid=1/part=1/size=8GB ! Copy of VSI VMS v8.4-2L3
> MSA> add unit 4/disk=(3,4,5,6,7,8)/raid=1/part=2/size=20GB ! USER
> MSA> add unit 5/disk=(3,4,5,6,7,8)/raid=1/part=3/size=10GB ! BATS
> MSA> add unit 6/disk=(3,4,5,6,7,8)/raid=1/part=4/size=25GB ! DOCS
> MSA> add unit 7/disk=(3,4,5,6,7,8)/raid=1/part=5/size=40GB ! DATA_1
> MSA> add unit 8/disk=(3,4,5,6,7,8)/raid=1/part=6/size=40GB ! DATA_2
> MSA> add unit 9/disk=(3,4,5,6,7,8)/raid=1/part=7/size=40GB ! INET_1
> MSA> add unit 10/disk=(3,4,5,6,7,8)/raid=1/part=8/size=20GB ! INET_LOGS
> MSA> add unit 11/disk=(3,4,5,6,7,8)/raid=1/part=9/size=150GB ! BACKUP

Thank you! I'll try that out on the 2660 and see what happens. I didn't dig deeply enough into the ADD command help.

Rich

chris

Sep 14, 2021, 5:47:24 PM
Different approach? ProLiant HP Smart Array controllers can
take a set of disks and present several logical drives, usually
mirror or RAID 5, but down to single-drive level, which might
be a solution.

AFAIK, there is no ability to provide the Unix-style multiple hardware
partitions across a single drive. VMS may have logical-disk
subsetting capability on a single drive; RT-11 V5 certainly did,
with a driver to handle it...

Chris

Rich Jordan

Sep 20, 2021, 5:49:20 PM
Duncan's info worked, and now that I know the syntax I was able to find some references online; before that, my google-fu failed.

Is there a way to change the device name presented/selected by VMS on an Integrity server? I could swear I read somewhere that you could change the third letter by some kind of handwavium, I think in one of the EFI files or UEFI device management. So the controller in the dedicated slot (the P400i, which comes up as PKA0) could be known to VMS as PKD0 (and its disks become DKD#)? We can't just move things around in slots to change the order, and so the naming.

Pretty sure there's no way to change individual disk names to reflect different controllers when they're all on the same one, but if we can change the seven data disks to DKD on this controller, we can work around the system, alt-system, and staging disks also now being DKD# instead of DKC and DKA respectively. Logicals are good enough for those. I'd like to avoid using new logicals for anything related to the production drives, though.

Lawrence D’Oliveiro

Oct 7, 2021, 9:26:40 PM
On Tuesday, September 21, 2021 at 9:49:20 AM UTC+12, Rich Jordan wrote:
> Is there a way to change the device name presented/selected by VMS on an Integrity server?
> I could swear I read somewhere that you could change the third letter by some kind of
> handwavium, I think in one of the EFI files or UEFI device management. So controller PKA0
> (by default the dedicated slot P400i which comes up as PKA0) could be known to VMS as
> PKD0 (and its disks become DKD#)? Can't just move things around in slots to change the
> order, and so the naming.

But ... but ... 26 drive letters ought to be enough for anybody!