
OpenVMS df - Disk Free Utility


alan...@btinternet.com

Jun 2, 2014, 6:36:10 PM


OpenVMS df - Disk Free Utility

I know there are other versions of this Linux command
out there for OpenVMS as DCL scripts, etc.

I'd like to add a version, written for VAX and Alpha,
which I have found to be quite useful (I don't have
access to an IA64 system).

The default display is "df -b" display 512-byte blocks
for example, on my small Alpha:

$ df
Device Name       Blocks      Used     Avail  Capacity  Volume Label
ALPHA$DKB100:   71833096  15939784  55893312       22%  ALPHASYS
ALPHA$DKB200:   71833096  17662162  54170934       24%  DEVDISK1
$

and the "df -h" human readable output:

$ df -h
Device Name       Size   Used   Avail  Capacity  Volume Label
ALPHA$DKB100:   34.2GB  7.6GB  26.6GB       22%  ALPHASYS
ALPHA$DKB200:   34.2GB  8.4GB  25.8GB       24%  DEVDISK1
$

The human-readable units are true multiples of 1024,
but the output will never display more than 999 of
any unit. No rounding is performed, as I have found
that rounding can produce nonsensical results.

If you want a true picture use "df" of "df -b" to
display the 512-byte blocks.
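
To make the conversion concrete, here is a quick DCL check
(illustrative only, not how the executable is implemented) that
reproduces the 34.2GB figure shown for ALPHA$DKB100: above:

$ ! 512-byte blocks taken from the "df" output above
$ blocks = 71833096
$ kb = blocks / 2                 ! kilobytes
$ mb = kb / 1024                  ! megabytes, truncated
$ gb10 = (mb * 10) / 1024         ! gigabytes x 10, one decimal place
$ write sys$output f$fao("!UL.!UL GB", gb10 / 10, gb10 - (gb10 / 10) * 10)
34.2 GB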

I have not tested this on bound-volume sets or any
large RAID arrays, so I don't know what the results
will be.

This is free software for anyone to use, commercially
or non-commercially.

The provided "df.vax_exe" and "df.axp.exe" executables
will run on any OpenVMS version.

These are available from:

https://github.com/alan-fay/openvms

The simplest way download is to "Download ZIP".

Install "df" as a foreign command. Copy the executable:

$ copy df.axp_exe sys$common:[sysexe]

and define the "df" symbol as:

$ df :== $sys$common:[sysexe]df.axp_exe
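
If you have both architectures, one way (just a sketch, using the
file names above; f$getsyi("ARCH_NAME") needs a reasonably recent
VMS) is to pick the right image from a shared LOGIN.COM by keying
off the architecture:

$ arch = f$edit(f$getsyi("ARCH_NAME"), "LOWERCASE")
$ if arch .eqs. "alpha" then df :== $sys$common:[sysexe]df.axp_exe
$ if arch .eqs. "vax"   then df :== $sys$common:[sysexe]df.vax_exe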

Alan Fay







Stephen Hoffman

Jun 2, 2014, 6:54:15 PM
On 2014-06-02 22:36:10 +0000, alan...@btinternet.com said:

> OpenVMS df - Disk Free Utility
> ...(I don't have access to an IA64 system).

FWIW, Deathrow <http://deathrow.vistech.net> has an Itanium
<jack.vistech.net> running OpenVMS V8.4, and accounts there are free
for non-commercial use.
Log in as NEWUSER with the password NEWUSER, and follow the bouncing ball...



--
Pure Personal Opinion | HoffmanLabs LLC

abrsvc

Jun 2, 2014, 9:10:44 PM
If you can not get access to an I64 system, let me know. I can either port it for you or get you access.

Dan

John E. Malmberg

Jun 3, 2014, 10:23:32 PM
On 6/2/2014 5:36 PM, alan...@btinternet.com wrote:
>
>
> OpenVMS df - Disk Free Utility
>
> I know there are other versions of this Linux command
> out there for OpenVMS as DCL scripts, etc.
>
> I'd like to add a version which I have written for VAX
> and Alpha which I have found to be quite useful (I don't
> have access to an IA64 system).

GNU coreutils 8.21 has a df utility in it.

It handles VMS mount points which were added with VMS 8.3.

http://sourceforge.net/projects/vms-ports/files/i640840/

http://sourceforge.net/projects/vms-ports/files/axp0830/

I have not had time to complete the VAX port.

Regards,
-John
Personal Opinion Only

John E. Malmberg

Jun 3, 2014, 11:09:48 PM
On 6/3/2014 9:23 PM, John E. Malmberg wrote:
> On 6/2/2014 5:36 PM, alan...@btinternet.com wrote:
>>
>>
>> OpenVMS df - Disk Free Utility
>>
>> I know there are other versions of this Linux command
>> out there for OpenVMS as DCL scripts, etc.
>>
>> I'd like to add a version which I have written for VAX
>> and Alpha which I have found to be quite useful (I don't
>> have access to an IA64 system).
>
> GNU coreutils 8.21 has a df utility in it.
>
> It handles VMS mount points which were added with VMS 8.3.
>
> http://sourceforge.net/projects/vms-ports/files/i640840/
>
> http://sourceforge.net/projects/vms-ports/files/axp0830/

The VAX port for this and Bash 4.3.x need some work in that the vms
st_ino type is 24 bits and needs a macro for doing compares/copies if
you have time to join in on what is going on with the rebuilding of the
GNV project on sourceforge along with the vms-ports project.

http://sourceforge.net/projects/gnv/files/

Regards,
-John

alan...@btinternet.com

Jun 4, 2014, 5:14:17 PM
On Tuesday, 3 June 2014 02:10:44 UTC+1, abrsvc wrote:
> If you can not get access to an I64 system, let me know. I can either port it for you or get you access.
>
>
>
> Dan

Thank you Dan for the offer to use your IA64 system.

I've taken Stephen Hoffman's advice and applied for an
account on http://deathrow.vistech.net with a telnet to
jack.vistech.net but I haven't heard back yet.

So I logged into the "demo" account and it looks like
a great OpenVMS IA64 system!

I've created a "df.i64_exe" which I've only run on the
deathrow system. Here's what it looks like for just "df":

DEMO$ df
Device Name     Blocks       Used      Avail  Capacity  Volume Label
$3$DQA0:      58633344   39269700   19363644       66%  GEIN_SYS
$3$DQB1:     234441648  111519873  122921775       47%  GEIN_DATA
$4$DKA0:      35565080   18535960   17029120       52%  JACK_SYS
$4$DKA100:    71132960   45323528   25809432       63%  JACK_USERS
$4$DKB200:   286749488  112191856  174557632       39%  JACK_ATTIC
$9$LDA1:       8380080    6192756    2187324       73%  LD_USR
$9$LDA2:       4110480    2495288    1615192       60%  LD_EXTRA
$9$LDA3:       8888924    6039407    2849517       67%  LD_FTP
$9$LDA4:       4110480     917643    3192837       22%  LD_USR_ODS5
DEMO$

and "df -h"

DEMO$ df -h
Device Name     Size     Used    Avail  Capacity  Volume Label
$3$DQA0:      27.9GB   18.7GB    9.2GB       66%  GEIN_SYS
$3$DQB1:     111.7GB   53.1GB   58.6GB       47%  GEIN_DATA
$4$DKA0:      16.9GB    8.8GB    8.1GB       52%  JACK_SYS
$4$DKA100:    33.9GB   21.6GB   12.3GB       63%  JACK_USERS
$4$DKB200:   136.7GB   53.4GB   83.2GB       39%  JACK_ATTIC
$9$LDA1:       3.9GB    2.9GB    1.0GB       73%  LD_USR
$9$LDA2:       1.9GB    1.1GB  788.6MB       60%  LD_EXTRA
$9$LDA3:       4.2GB    2.8GB    1.3GB       67%  LD_FTP
$9$LDA4:       1.9GB  448.0MB    1.5GB       22%  LD_USR_ODS5
DEMO$

Both show the LD (logical disk) container file devices.
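
For anyone unfamiliar with them, those $9$LDA devices are virtual
disks backed by container files via the LD utility. Roughly (from
memory, assuming the LD command is already defined on the system;
the size, unit and label here are just taken from the display above
for illustration -- check LD HELP for exact syntax) they are set up
like this:

$ ld create lddisk.dsk /size=4110480      ! size in 512-byte blocks
$ ld connect lddisk.dsk lda5:             ! hypothetical unit number
$ initialize lda5: ld_extra
$ mount/system lda5: ld_extra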

I've added "df.i64_exe" to

https://github.com/alan-fay/openvms

"Download ZIP" to download the files.

Thank you Stephen Hoffman and "jack" for the kind use of your
OpenVMS IA64 system.

Alan Fay



pcov...@gmail.com

Jun 10, 2014, 2:42:31 PM
I downloaded it and installed it on my test IA64, and it's interesting what it does with members of a shadow set... see below...

$ df -h
Device Name     Size     Used   Avail  Capacity  Volume Label
$1$DGA3000:   25.0GB   21.9GB   3.0GB       87%  VMS_TEST1
$1$DGA3110:   25.0GB   25.0GB      0B      100%  COMMON_TEST
$1$DGA3200:   25.0GB   25.0GB      0B      100%  CACHE_TEST
$1$DGA3301:  350.0GB  350.0GB      0B      100%  DB1_TEST
$1$DGA3302:  400.0GB  400.0GB      0B      100%  DB2_TEST
$1$DGA3303:  550.0GB  550.0GB      0B      100%  DB3_TEST
$1$DGA3304:  660.0GB  660.0GB      0B      100%  DB4_TEST
$1$DGA8110:   25.0GB   25.0GB      0B      100%  COMMON_TEST
DSA1110:      25.0GB   21.0GB   3.9GB       84%  COMMON_TEST
DSA1200:      25.0GB    4.9GB  20.0GB       19%  CACHE_TEST
DSA1301:     350.0GB  329.4GB  20.5GB       94%  DB1_TEST
DSA1302:     400.0GB  357.2GB  42.7GB       89%  DB2_TEST
DSA1303:     550.0GB  521.2GB  28.7GB       94%  DB3_TEST
DSA1304:     660.0GB  632.8GB  27.1GB       95%  DB4_TEST

abrsvc

Jun 10, 2014, 2:47:55 PM
While not the best display with regard to the shadow sets, the display is correct. There are no blocks available at the physical device level. From that perspective, the volumes are 100% utilized...
Dan

VAXman-

Jun 10, 2014, 4:21:55 PM
In article <7a26e453-eb0a-4b71...@googlegroups.com>, "pcov...@gmail.com" <pcov...@gmail.com> writes:
>I downloaded it and installed on my test IA64 and interesting what it does with members of a shadow set... see below...
>
>$ df -h
>Device Name Size Used Avail Capacity Volume Label
>$1$DGA3000: 25.0GB 21.9GB 3.0GB 87% VMS_TEST1
>$1$DGA3110: 25.0GB 25.0GB 0B 100% COMMON_TEST
>$1$DGA3200: 25.0GB 25.0GB 0B 100% CACHE_TEST
>$1$DGA3301: 350.0GB 350.0GB 0B 100% DB1_TEST
>$1$DGA3302: 400.0GB 400.0GB 0B 100% DB2_TEST
>$1$DGA3303: 550.0GB 550.0GB 0B 100% DB3_TEST
>$1$DGA3304: 660.0GB 660.0GB 0B 100% DB4_TEST
>$1$DGA8110: 25.0GB 25.0GB 0B 100% COMMON_TEST
>DSA1110: 25.0GB 21.0GB 3.9GB 84% COMMON_TEST
>DSA1200: 25.0GB 4.9GB 20.0GB 19% CACHE_TEST
>DSA1301: 350.0GB 329.4GB 20.5GB 94% DB1_TEST
>DSA1302: 400.0GB 357.2GB 42.7GB 89% DB2_TEST
>DSA1303: 550.0GB 521.2GB 28.7GB 94% DB3_TEST
>DSA1304: 660.0GB 632.8GB 27.1GB 95% DB4_TEST

See $GETDVI => DVI$_SHDW_MEMBER

Don't report on shadow members; it only clutters the field.
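
In DCL terms the test is a one-liner; a rough sketch, using one of
the member devices quoted above:

$ dev = "$1$DGA3110:"
$ if f$getdvi(dev, "SHDW_MEMBER")
$ then
$    write sys$output dev, " is a member of ", f$getdvi(dev, "SHDW_MASTER_NAME")
$ endif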

FWIW, I use a DCL procedure that I heavily modified (original author posted
the procedure in the Linked-In VMS forum) which shows, via a bar-graph, the
disk usage on my systems.

http://tmesis.net/DCL/DISK_BLOCKS.COM

It has also been my "goto" procedure for development and testing of my DCL
debugger because of the bar-graph.
--
VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)ORG

I speak to machines with the voice of humanity.

MG

Jun 11, 2014, 6:26:40 AM
On 10-jun-2014 22:21, VAXman- @SendSpamHere.ORG wrote:
> FWIW, I use a DCL procedure that I heavily modified (original author
> posted the procedure in the Linked-In VMS forum) which shows, via a
> bar-graph, the disk usage on my systems.

It's a pretty nice procedure, but I didn't know that it originally
came from LinkedIn...

- MG

VAXman-

Jun 11, 2014, 12:35:37 PM
It might have been posted elsewhere but the original author made mention of
it on a Linked-In VMS forum which is when and where I got ahold of it.

I ripped out much of the format computation and used F$fao. I don't
understand why so many avoid F$fao when it can -- and far more easily --
format output in DCL procedures.
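
For example, something along these lines (device name and field widths
are just an illustration) replaces a pile of manual padding with one
lexical call:

$ dev = "SYS$SYSDEVICE"
$ free = f$getdvi(dev, "FREEBLOCKS")
$ used = f$getdvi(dev, "MAXBLOCK") - free
$ write sys$output f$fao("!15AS !12UL !12UL", dev, used, free)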

FYI, try "$ SET VERIFY" with that and see how much fun it would be to debug
it. ;)

alan...@btinternet.com

Jun 12, 2014, 4:52:53 PM
On Tuesday, 10 June 2014 19:42:31 UTC+1, pcov...@gmail.com wrote:
> I downloaded it and installed on my test IA64 and interesting what it does with members of a shadow set... see below...
>
>
>
> $ df -h
>
> Device Name Size Used Avail Capacity Volume Label
> $1$DGA3000: 25.0GB 21.9GB 3.0GB 87% VMS_TEST1
> $1$DGA3110: 25.0GB 25.0GB 0B 100% COMMON_TEST
> $1$DGA3200: 25.0GB 25.0GB 0B 100% CACHE_TEST
> $1$DGA3301: 350.0GB 350.0GB 0B 100% DB1_TEST
> $1$DGA3302: 400.0GB 400.0GB 0B 100% DB2_TEST
> $1$DGA3303: 550.0GB 550.0GB 0B 100% DB3_TEST
> $1$DGA3304: 660.0GB 660.0GB 0B 100% DB4_TEST
> $1$DGA8110: 25.0GB 25.0GB 0B 100% COMMON_TEST
> DSA1110: 25.0GB 21.0GB 3.9GB 84% COMMON_TEST
> DSA1200: 25.0GB 4.9GB 20.0GB 19% CACHE_TEST
> DSA1301: 350.0GB 329.4GB 20.5GB 94% DB1_TEST
> DSA1302: 400.0GB 357.2GB 42.7GB 89% DB2_TEST
> DSA1303: 550.0GB 521.2GB 28.7GB 94% DB3_TEST
> DSA1304: 660.0GB 632.8GB 27.1GB 95% DB4_TEST
>
>

Thank you very much for your feedback.

As Dan has already said, "not the best display of shadow sets".
The easiest way to "fix" this would be to totally ignore the
physical devices of a shadow set virtual unit. But I don't think
that would be right - the physical devices of a shadow set are
useful to know (their names and how many are associated with a
virtual unit?).

The above "df -h" display from "pcov" is much more than I could
hope to test with. With my limited testing ability I have made
a change to the "df" display. The physical device names of a
shadow set will now display it's virtual unit member name.

These changes are now available from:

https://github.com/alan-fay/openvms

"Download ZIP" to download the files.

If "pcov" or anyone else with a shadow set system could provide
some feedback "good or bad" I would be grateful.

Alan Fay


VAXman-

Jun 12, 2014, 5:49:37 PM
$ SHOW SHADOW DSAn:


>The above "df -h" display from "pcov" is much more than I could
>hope to test with. With my limited testing ability I have made
>a change to the "df" display. The physical device names of a
>shadow set will now display it's virtual unit member name.
>
>These changes are now available from:
>
>https://github.com/alan-fay/openvms
>
>"Download ZIP" to download the files.
>
>If "pcov" or anyone else with a shadow set system could provide
>some feedback "good or bad" I would be grateful.

Device Name     Blocks       Used     Avail  Capacity  Volume Label
$1$DKA0:       8380080    2465440   5914640       29%  OpenVMS Dump
$1$DKA100:   142264000  142264000         0      100%  (member of DSA0:)
$1$DKA200:   142264000  142264000         0      100%  (member of DSA0:)
$1$DKA300:    71132000   71132000         0      100%  (member of DSA1:)
$1$DKA400:    71132000   71132000         0      100%  (member of DSA1:)
$1$DKA500:    35565080   35565080         0      100%  (member of DSA2:)
$1$DKA600:    35565080   35565080         0      100%  (member of DSA2:)
$1$DKA900:    71132960   18805840  52327120       26%  OpenVMS Srcs
DSA0:        142264000   72126448  70137552       50%  OpenVMSAXP84
DSA1:         71132000   52526978  18605022       73%  Shadow_Set_1
DSA2:         35565080   30596312   4968768       86%  Shadow_Set_2



I'd prefer to see the shadow set members grouped with their virtual unit,
such as this:

Device Name     Blocks       Used     Avail  Capacity  Volume Label
$1$DKA0:       8380080    2465440   5914640       29%  OpenVMS Dump
$1$DKA900:    71132960   18805840  52327120       26%  OpenVMS Srcs
DSA0:        142264000   72126448  70137552       50%  OpenVMSAXP84
$1$DKA100:   142264000  142264000         0      100%  (member of DSA0:)
$1$DKA200:   142264000  142264000         0      100%  (member of DSA0:)
DSA1:         71132000   52526978  18605022       73%  Shadow_Set_1
$1$DKA300:    71132000   71132000         0      100%  (member of DSA1:)
$1$DKA400:    71132000   71132000         0      100%  (member of DSA1:)
DSA2:         35565080   30596312   4968768       86%  Shadow_Set_2
$1$DKA500:    35565080   35565080         0      100%  (member of DSA2:)
$1$DKA600:    35565080   35565080         0      100%  (member of DSA2:)

John E. Malmberg

Jun 12, 2014, 10:49:59 PM
For the actual DF utility:

The first column is the source of the volume. On VMS, I do not know a
way to look at the NFS source path; all I can tell is that it is an
NFS container. The 6th field is either the mounted device, or the
path where the filesystem is mounted.

If the tool is not going to fully emulate the output of the Unix
utility, it may be a good idea to give it a different name to prevent
confusion.

LION> df -h
Filesystem Size Used Avail Use% Mounted on
_LION$DKA0: 8.5G 6.4G 2.1G 76% /LION$DKA0
_LION$DKA100: 8.5G 1.4G 7.1G 17% /LION$DKA100
_LION$DNA0: 932G 13G 920G 2% /LION$DNA0
_DNFS1: 919G 116G 803G 13% /DNFS1
_LION$LDA1: 651M 533M 118M 82% /LION$LDA1

For an LDA device, an enhancement would be to have the container file
under the Filesystem column.

In this case LDA1 actually resides on _DNFS1:. An interesting way to
get ODS-2/5 volumes hosted on an NFS volume.

VMS 8.3 introduced mount points where you can mount other volumes.
The VMS specific source for looking them up is in the GNV Coreutils
mercurial repository.

When this is done, the actual df utility is useful in showing you in one
place where all the mount points are.

(Output edited to fit 72 columns)
$ df :== $gnv$gnu:[bin]gnv$df.exe
$ df -h
Filesystem Size Used Avail Use% Mounted on
DISK$ODS5_1:[VMS$COMMON.gnv]usr.DIR 1.2G 429M 789M 36% /usr
DISK$ODS5_1:[VMS$COMMON.gnv]man.dir 1.2G 429M 789M 36% /man
DISK$ODS5_1:[VMS$COMMON.gnv]lib.DIR 1.2G 429M 789M 36% /lib
DISK$ODS5_1:[VMS$COMMON.gnv]include.dir 1.2G 429M 789M 36% /include
DISK$ODS5_1:[VMS$COMMON.gnv]etc.dir 1.2G 429M 789M 36% /etc
DISK$ODS5_1:[VMS$COMMON.gnv]bin.DIR 1.2G 429M 789M 36% /bin
_EISNER$DRA6: 20G 3.4G 17G 17% /EISNER$DRA6
_EISNER$DKA0: 115G 74G 41G 65% /EISNER$DKA0
_EISNER$MDA0: 20M 771K 19M 4% /EISNER$MDA0
_EISNER$LDA1: 1.2G 429M 789M 36% /EISNER$LDA1
_EISNER$LDA2: 8.5G 4.4G 4.1G 52% /EISNER$LDA2

$ df --version
df (GNU coreutils) 8.21
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Written by Torbjorn Granlund, David MacKenzie, and Paul Eggert.

Regards,
-John

hb

Jun 13, 2014, 10:14:32 AM
On 06/13/2014 04:49 AM, John E. Malmberg wrote:
> For the actual DF utility:
>
> For an LDA device, an enhancement would be to have the container file
> under the Filesystem column.

which looks like an extension to me (if I compare LDA devices with loop
devices on Linux):
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/loop1 4015112 4015112 0 100% /mnt-system

> VMS 8.3 introduced mount points where you can mount other volumes.
> The VMS specific source for looking them up is in the GNV Coreutils
> mercurial repository.

Maybe mount points weren't widely used or documented before 8.3, but
they were already in the COE release.

> When this is done, the actual df utility is useful in showing you in one
> place where all the mount points are.
>
> (Output edited to fit 72 columns)
> $ df :== $gnv$gnu:[bin]gnv$df.exe
> $ df -h
> Filesystem Size Used Avail Use% Mounted on
> DISK$ODS5_1:[VMS$COMMON.gnv]usr.DIR 1.2G 429M 789M 36% /usr
> DISK$ODS5_1:[VMS$COMMON.gnv]man.dir 1.2G 429M 789M 36% /man
> DISK$ODS5_1:[VMS$COMMON.gnv]lib.DIR 1.2G 429M 789M 36% /lib
> DISK$ODS5_1:[VMS$COMMON.gnv]include.dir 1.2G 429M 789M 36% /include
> DISK$ODS5_1:[VMS$COMMON.gnv]etc.dir 1.2G 429M 789M 36% /etc
> DISK$ODS5_1:[VMS$COMMON.gnv]bin.DIR 1.2G 429M 789M 36% /bin
> _EISNER$DRA6: 20G 3.4G 17G 17% /EISNER$DRA6
> _EISNER$DKA0: 115G 74G 41G 65% /EISNER$DKA0
> _EISNER$MDA0: 20M 771K 19M 4% /EISNER$MDA0
> _EISNER$LDA1: 1.2G 429M 789M 36% /EISNER$LDA1
> _EISNER$LDA2: 8.5G 4.4G 4.1G 52% /EISNER$LDA2

The GNV mount points shown are on LDA1, as one can guess from the Size,
Used, and Avail numbers. It would be nice to see an exact match in the
"Filesystem" names. And as with shadow sets, it would be better to show
that they share the disk space (as df is a utility to report file system
disk space usage).

On the other hand I would have expected to see an entry for the root as
well, which usually is a rooted-device logical name (sharing disk space
with another "Filesystem"). It's defined like "_EISNER$LDA1:[14,2,0.]".
Also, I would have expected such root directories as "Filesystem" for
the mount points, something like "_EISNER$LDA1:[VMS$COMMON.gnv.usr.]"
for "/usr".

Stephen Hoffman

Jun 13, 2014, 5:24:05 PM
On 2014-06-04 21:14:17 +0000, alan...@btinternet.com said:

> I've taken Stephen Hoffman's advice and applied for an account on
> http://deathrow.vistech.net with a telnet to jack.vistech.net but I
> haven't heard back yet.

Responses to the username registration requests are mailed immediately
-- before you log out of the registration process -- meaning that the
registration information was probably either spam-filtered somewhere,
or the target email address was misspelled. Check your spam folder, if
you have one. Alternatively, try a different email address, or one with
different spam settings.

alan...@btinternet.com

Jun 13, 2014, 6:15:04 PM
On Friday, 13 June 2014 22:24:05 UTC+1, Stephen Hoffman wrote:
> On 2014-06-04 21:14:17 +0000, alan...@btinternet.com said:
>
>
>
> I've taken Stephen Hoffman's advice and applied for an account on
> http://deathrow.vistech.net with a telnet to jack.vistech.net but I
> haven't heard back yet.
>
>
>
> Responses for the username registration requests are mailed immediately
> -- before you log out of the registration process -- meaning that the
> registration information was probably either spam-filtered somewhere,
> or the target email address was mispelled. Check your spam folder, if
> you have one. Alternatively, try a different email address, or with
> different spam settings.
>
> --
>
> Pure Personal Opinion | HoffmanLabs LLC

Stephen,

First of all I'd like to say thank you for the Deathrow
system; without it I would never have had access to an OpenVMS
IA64 system.

Not sure if this should be public... I logged into the system
with NEWUSER pass NEWUSER and requested a user name of ALAN
with my email address of alan...@btinternet.com (I live in
Hampshire, UK) and I never heard back. So I just tried again
but it says:

Email address unavailable, invalid or is already in use; please use another

I can't change my email address (it's the only one I've got).

Not sure what to do next?

Alan Fay

Stephen Hoffman

Jun 13, 2014, 6:34:57 PM
On 2014-06-13 22:15:04 +0000, alan...@btinternet.com said:

> Email address unavailable, invalid or is already in use; please use another

That's because the registration system already has your existing registration.

> Not sure what to do next?

Check your spam folder, or whatever spam processing your provider uses.
Deathrow's SMTP DNS setup is known to run afoul of spam filters.
Unfortunately.

One of the cluster operators has overridden the process and manually
sent the registration information to the specified email address.

Alternatively, get a gmail or other non-ISP account. (IIRC, BT was
using Yahoo for their hosting. Donno if that's changed.) If not and
otherwise, don't forget your login password, as the password reset
messages will probably disappear down the same SMTP black hole.)

To preempt the intended-to-be-helpful follow-up comments I've received
after previous comments similar to the above: I'm quite familiar
with setting up DNS and SMTP servers, I know what the issue is with
the current setup that's triggering spam detection, and I simply do not
have the administrative access necessary to correct the DNS issue.
Yes, I know who has that access. Yes, I've let them know.

John E. Malmberg

Jun 14, 2014, 2:06:15 PM
On 6/13/2014 9:14 AM, hb wrote:
> On 06/13/2014 04:49 AM, John E. Malmberg wrote:
>> For the actual DF utility:
>>
>> For an LDA device, an enhancement would be to have the container file
>> under the Filesystem column.
>
> which looks like an extension to me (if I compare LDA devices with loop
> devices on Linux):
> # df
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/loop1 4015112 4015112 0 100% /mnt-system
>
>> VMS 8.3 introduced mount points where you can mount other volumes.
>> The VMS specific source for looking them up is in the GNV Coreutils
>> mercurial repository.
>
> Maybe mount points weren't widely used or documented before 8.3, but
> they were already in the COE release.

I was not sure if they were implemented the same. I only know what was
used for VMS 8.3.

>> When this is done, the actual df utility is useful in showing you in one
>> place where all the mount points are.
>>
>> (Output edited to fit 72 columns)
>> $ df :== $gnv$gnu:[bin]gnv$df.exe
>> $ df -h
>> Filesystem Size Used Avail Use% Mounted on
>> DISK$ODS5_1:[VMS$COMMON.gnv]usr.DIR 1.2G 429M 789M 36% /usr
>> DISK$ODS5_1:[VMS$COMMON.gnv]man.dir 1.2G 429M 789M 36% /man
>> DISK$ODS5_1:[VMS$COMMON.gnv]lib.DIR 1.2G 429M 789M 36% /lib
>> DISK$ODS5_1:[VMS$COMMON.gnv]include.dir 1.2G 429M 789M 36% /include
>> DISK$ODS5_1:[VMS$COMMON.gnv]etc.dir 1.2G 429M 789M 36% /etc
>> DISK$ODS5_1:[VMS$COMMON.gnv]bin.DIR 1.2G 429M 789M 36% /bin
>> _EISNER$DRA6: 20G 3.4G 17G 17% /EISNER$DRA6
>> _EISNER$DKA0: 115G 74G 41G 65% /EISNER$DKA0
>> _EISNER$MDA0: 20M 771K 19M 4% /EISNER$MDA0
>> _EISNER$LDA1: 1.2G 429M 789M 36% /EISNER$LDA1
>> _EISNER$LDA2: 8.5G 4.4G 4.1G 52% /EISNER$LDA2
>
> The shown GNV mount points are on LDA1, as one can guess from the Size
> Used and Avail numbers. It would be nice to see an exact match in the
> "Filesystem" names. And as with shadow sets, it would be better to show
> that they share the disk space (as df is a utility to report file system
> disk space usage).

As the filesystem field seems to be free-format, I just put what was in
the mount point.

I have not yet tried abusing the VMS mount point API to see if I can
mount a search list. If that is allowed, then the calculation could get
more complex.

I just passed through the information that VMS stores for accessing the
mount point in the logical name table.

It does sound like a useful enhancement.

> On the other hand I would have expected to see an entry for the root as
> well, which usually is a rooted-device logical name (sharing disk space
> with another "Filesystem"). It's defined like "_EISNER$LDA1:[14,2,0.]".

I forgot to include "/". Implementing it could be a bit tricky as it
means I have to lookup all the mount points and resolve them instead of
just iterating through them. I do not remember if the internals of the
API I emulated to determine how complex it should be.

> Also, I would have expected such root directories as "Filesystem" for
> the mount points, something like "_EISNER$LDA1:[VMS$COMMON.gnv.usr.]"
> for "/usr".

Again, I just reported what I got for the mount point as is.

Regards,
-John

Richard Maher

Jun 14, 2014, 8:08:11 PM
On 6/14/2014 6:15 AM, alan...@btinternet.com wrote:

> First of all I'd like to say thank you for the Deathrow
> system, without it I would never had access to an OpenVMS
> IA64 system.
>

If I'm not mistaken, you may wish to spare a thought for Hein.

> Alan Fay
>

Paul Sture

Jun 15, 2014, 8:28:13 AM
I understand that Hein generously donated that IA64 system.

--
You can't look at a glass as half full or half empty if it's overflowing.

MG

Jun 15, 2014, 12:12:19 PM
On 15-jun-2014 14:28, Paul Sture wrote:
> On 2014-06-15, Richard Maher <maher_rj...@hotmail.com> wrote:
>> On 6/14/2014 6:15 AM, alan...@btinternet.com wrote:
>>
>>> First of all I'd like to say thank you for the Deathrow
>>> system, without it I would never had access to an OpenVMS
>>> IA64 system.
>>>
>>
>> If I'm not mistaken you may wish to spare a thought for Hein
>
> I understand that Hein generously donated that IA64 system.

That's right and it took a long time before it was fully set up,
ready to be added into the cluster and to be accessed and used
by the 'public'.

Probably the best way for people to show their gratitude, is to
not merely let that rx2600 sit in a rack and heat up the place
idly and to actually /use/ it and make its energy consumption
and heat dissipation worthwhile.

- MG

Stephen Hoffman

Jun 15, 2014, 1:41:01 PM
On 2014-06-15 12:28:13 +0000, Paul Sture said:

> I understand that Hein generously donated that IA64 system.

Correct.

HoffmanLabs LLC donated the AlphaServer. Brian Schenkenberger donated
a memory upgrade for the AlphaServer, though that has not yet been
installed, AFAIK.

As part of the cluster rebuild and upgrade to OpenVMS V8.4, the
Deathrow cluster is now running on two different "new" servers.
GEIN:: (the "new" AlphaServer DS10L) was added to JACK:: (the "new"
Integrity rx2600) within the cluster. That's with new OpenVMS installs
on both servers, and it's now JACK:: where all the user data resides,
and from which the user data is served to GEIN::.

The "old" GEIN:: AlphaServer DS10L that was the primary (OpenVMS Alpha
V7.3-1) cluster host for a number of years was donated by Island
Computing. That "old" AlphaServer was overheating and failing, and was
the cause of much of the Deathrow cluster instability.

VAXman-

Jun 15, 2014, 2:15:38 PM
In article <lnklrd$7t7$1...@dont-email.me>, Stephen Hoffman <seao...@hoffmanlabs.invalid> writes:
>On 2014-06-15 12:28:13 +0000, Paul Sture said:
>
>> I understand that Hein generously donated that IA64 system.
>
>Correct.
>
>HoffmanLabs LLC donated the AlphaServer. Brian Schenkenberger donated
>a memory upgrade for the AlphaServer, though that has not yet been
>installed, AFAIK.

I'm now wondering if that memory will ever find its way into GEIN::.

Richard Maher

Jun 17, 2014, 4:36:14 AM
On 6/16/2014 12:12 AM, MG wrote:
> Probably the best way for people to show their gratitude, is to
> not merely let that rx2600 sit in a rack and heat up the place
> idly and to actually /use/ it and make its energy consumption
> and heat dissipation worthwhile.

Sorry I can't help you. I was censored/banned/expelled from death row
years ago by some of the same thought police that destroyed Digital.

Besides, thanks again in large part to Hein, I have my own heater and
noise maker.
>
> - MG
>

MG

Jun 17, 2014, 11:49:45 AM
On 17-jun-2014 10:36, Richard Maher wrote:
> On 6/16/2014 12:12 AM, MG wrote:
>> Probably the best way for people to show their gratitude, is to
>> not merely let that rx2600 sit in a rack and heat up the place
>> idly and to actually /use/ it and make its energy consumption
>> and heat dissipation worthwhile.
>
> Sorry I can't help you. I was censored/banned/expelled from death row
> years ago by some of the same thought police that destroyed Digital.

...Oh?

- MG

MG

Jun 21, 2014, 5:56:44 AM
On 15-jun-2014 19:41, Stephen Hoffman wrote:
> HoffmanLabs LLC donated the AlphaServer.

Interesting, I had no idea that HoffmanLabs possessed AlphaServers.

- MG

George Cornelius

Jul 1, 2014, 3:11:19 AM
Richard Maher wrote:
> On 6/16/2014 12:12 AM, MG wrote:
>
>> Probably the best way for people to show their gratitude, is to
>> not merely let that rx2600 sit in a rack and heat up the place
>> idly and to actually /use/ it and make its energy consumption
>> and heat dissipation worthwhile.
>
>
> Sorry I can't help you. I was censored/banned/expelled from death row
> years ago by some of the same thought police that destroyed Digital.

You sure your account did not just time out? Happens after a
few months of disuse, I believe, and seems to be easy to reactivate
afterwards. Possibly sans files.

George