
VAX Harddisk Benchmark / VAX 4000/105A SCSI performance


hans.h...@gmail.com

Aug 30, 2018, 2:24:52 AM
Hi,

does anyone happen to have a hard disk or file system benchmark for VAX/VMS that they'd be willing to share? I need to get rid of the RF36 drive that is in my VAX 4000/105A as it is too loud and I would like to experiment with a few SCSI options. Having a way to measure relative performance would be useful.

I've been unable to find information as to the performance of the SCSI subsystem in the 4000/105A, in particular in comparison to DSSI. I tend to suspect that even if the SCSI transfer rate is slower than that of the DSSI bus, a solid state drive on the SCSI bus would still be overall faster than the RF36. Then again, I could be wrong, so I'm looking for a way to measure (or experience reports from knowledgeable people).

I can write my own benchmark, but maybe there is some benchmarking tool that can generate a specific I/O mix of random and sequential accesses and report the results already?
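
(For the sequential half I could probably get away with something as
crude as the sketch below, where DKA100: stands for the disk under test
and BIGFILE.DAT for any large file at hand; it's the random-access part
that really needs a proper tool.)

$ ! time a sequential write by hand: copy a large file to the disk
$ ! under test and note the clock before and after
$ show time
$ copy bigfile.dat dka100:[scratch]*.*
$ show time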

Thanks,
Hans

Stephen Hoffman

Aug 30, 2018, 11:21:45 AM
On 2018-08-30 06:24:50 +0000, hans.h...@gmail.com said:

> ...does anyone happen to have a hard disk or file system benchmark
> for VAX/VMS that they'd be willing to share? ...

There's little performance difference between the glacial storage of
DSSI and the glacial storage of SCSI-1, by the standards of modern
storage.

The DSSI SHAC controller was theoretically capable of ~4 MBps, and
SCSI-1 of ~1.5 MBps. HDDs were good for ~120 IOPS and not a whole lot
of bandwidth, which is what throttled most everything back then, and
why ginormous arrays of HDDs were common. In this case, the VAX and
the memory and the rest of this old box will also throttle performance.
Pragmatically, I wouldn't expect much of a difference, but then I'm
well used to SSD speeds and feeds and HDDs all seem slow.
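
Back-of-the-envelope, assuming 8 KB average transfers (my number; pick
your own):

  ~120 IOPS x 8 KB = ~0.96 MBps

That's under the ~1.5 MBps SCSI-1 ceiling and well under the ~4 MBps
DSSI ceiling; the drive and the host, not the bus, set the limit.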

If you have any requirements for performance, transfer your environment
and boot a VAX emulation and use an SSD. I'm routinely getting tens of
thousands of IOPS with SSD, and a mid-grade array provides most of a
million IOPS at 0.7 ms latency, a very small fraction of the
access time of an HDD. And emulation on any recent Windows, Linux,
macOS or BSD system will be vastly faster than an actual hardware VAX.

Here are some DSSI disk specs:
http://manx-docs.org/collections/mds-199909/cd1/vax/514aatib.pdf — feel
free to scrounge for some equivalent-era SCSI storage specs for however
you're connecting the SCSI storage to this old VAX; via the internal
SCSI bus or via one of the various external HSD adapters.

What follows are related discussions of OpenVMS I/O performance from
David Mathog from a number of years ago, and a benchmark tool that
David had written. Lots of old discussions around here, available via
the Google Groups archives. I've not tried the SAF download link in a
few years, though. The tool was still available back then, and
I have a copy of the download if it's gone missing.

https://groups.google.com/d/msg/comp.os.vms/4FZHjDQ1R4A/DO5xV-z-XGEJ
ftp://saf.bio.caltech.edu/pub/software/benchmarks/mybenchmark.zip


--
Pure Personal Opinion | HoffmanLabs LLC

Dave Froble

Aug 30, 2018, 1:38:16 PM
It'd be interesting to see any results you might get.

From past memory, DSSI was a bit better than the old SCSI interfaces.
(What isn't?) The old 50 pin SCSI is rather slow. Any modern disk or
SSD will be far faster, but, you're still going to be moving data at the
speed of the old SCSI interface.

But does it really matter? You'll get the performance of the old SCSI
interface with the old drives, or with newer HW. That's the best you
can do.

flunk...@gmail.com

Sep 1, 2018, 1:12:16 AM
A few notes.
1. Unless you have some special attached hardware, or are doing this for fun, think about SIMH or a commercial VAX emulator. I/O is an order of magnitude faster.

2. If you are using an old version of VMS (e.g., 5.5), be aware it does not recognize non-DEC SCSI drives. Later VMS versions are more permissive.

3. A nice combination of SCSI and DSSI is to stick an HSD10 in a BA356 (or similar) cabinet and use modern cheap RZ1XX-series SCSI drives. If the HSD10 has later firmware like C294 (HSD10s are flashable) it will use almost any SCSI drive; you can partition them into several small virtual drives (useful for old VMS with the 8GB limit) and also do local mirroring and other neat stuff. IMHO, the nicest setup is identical BA356 towers with an HSD10 in each on a separate DSSI connection (your 105A has 2 DSSI ports) with volume shadowing on the host (there's a mount sketch after these notes). You'll have speed and redundancy.

4. I've played with a couple of SSD/NVRAM-on-SCSI options.
A. I've opened an RZ1CB and replaced the drive with an ACARD ARS-2160H with a 64GB SATA SSD in it. The HSD10 had no problems with it. I partitioned it and exported eight 8GB drives to VMS 5.5-2H4 with no problems and lots of speed. But the ACARD SCSI adapters are now in short supply and prices have skyrocketed.
B. Check out SCSI2SD. I've tried the V6 cards with the HSD10 with good luck. They should work on the internal SCSI bus. They let you configure the device name, so it should work on V5.5 if you set it up right. Also use a good SD card designed for endurance.
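
(To show the shadowing half of that setup, a minimal mount sketch; the
DSA unit number, member device names, and volume label are placeholders,
and host-based volume shadowing must be licensed and enabled:)

$ ! form a two-member shadow set from one disk on each DSSI bus
$ mount/system dsa1: /shadow=($1$dia1:, $1$dia2:) datadisk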

I have some benchmark results from using some simple programs and scripts. If I can find them, I'll post them.

Johnny Billquist

Sep 1, 2018, 6:28:31 AM
There will be a noticeable improvement. The majority of time of any disk
operation is seek time. Newer disks commonly have much better seek times
than those really old disks.
The interface limits things only when you actually shuffle the data,
which is a very small portion of the time for many I/O operations. You
are lucky if the heads are already on the right track and the disk is
at the right place in the rotation when you issue your I/O request.
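
As a rough worked example (typical figures assumed, not measured): an
old drive at ~15 ms average seek plus ~8 ms rotational latency spends
~23 ms positioning, while moving an 8 KB block even at SCSI-1's
~1.5 MB/s takes only ~5 ms. Cut the positioning time and the interface
hardly matters.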

Johnny

--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: b...@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol

flunk...@gmail.com

Sep 2, 2018, 12:10:11 AM
Disk thrash benchmark testing.

Times in seconds

                       Stage1  Stage2  Stage3  Total  Write test KB/s

100A HSD10 to SSD         152      66     102    320
100A Shad HSD10&SSD       153      65     105    323             1812
100A HSD10 to SD          162      65     106    333             1530
100A HSD10 to slowSD      206      66     146    417             1158

100A HSD10 to RZ29        294      93     190    577              529
100A HSD10 to RZraid      406     103     297    808
100A HSD10 to RZ1FC       256      88     160    507              768

100A RF73                 696      88     562   1346
100A RZ29                 230      89     195    514

DS10L U160 SCSI           217      29     176    422
SIM2 virt/SSD              22      23      13     58
DS25 U320 SCSI            158       6     129    293


HSD10 has 32MB read cache.

SSD on HSD10 uses an ARS-2160H SCSI adapter with a standard SATA2 32GB SSD partitioned into four 8GB drives.
SD on HSD10 uses a SCSI2SD V6 adapter with a partitioned 128GB Class 10 SD card.
Slow SD is the SCSI2SD V6 using a cheap 8GB SD card.
Shad HSD10&SSD is two HSD10s, each with an SSD drive, connected to separate DSSI busses, using host-based volume shadowing.
RZraid is the HSD10 mirroring two RZ29s (other processes may have been active on the drive).
RZ1FC is a 36GB 7200rpm drive configured as five 7GB partitions.

100A is a VAX 4000/100A running VMS 5.5-2H4
DS10L is a 600MHz DS10L running VMS 7.3-2
DS25 is a 1GHz DS25 running VMS 7.3-2
SIM2 is VMS 5.5-2H4 on SIMH 3900 on an i5-6600 (16GB, RAID1 256GB SSDs) running Slackware 4.2

Stage 1 creates 5000 files in a single directory (think seek test; VMS 5.5 does this poorly)
Stage 2 does a full directory listing to a file (think throughput test)
Stage 3 deletes all the files (think seek test)

$ ! thrash a disk -- run in an empty directory on the disk under test
$ !
$ fname := "Long_filename_for.testing"
$ i = 1
$ show time ! start time
$ top1:
$ open/write foo 'fname' ! each open creates a new version of the file
$ close foo
$ i = i + 1
$ if i .lt. 5000 then goto top1
$ !
$ show time ! end of stage 1 (file creation)
$ dir/full/out='fname' ! full listing written to yet another version
$ show time ! end of stage 2 (directory listing)
$ delete 'fname';* ! delete every version in one go
$ show time ! end of stage 3 (deletion)

The write test is a script that repeatedly calls a program that creates and writes 10MB of data. This is done until the disk is full.
Using the time taken and the number of files written, the speed in kilobytes per second is calculated. I can provide code if really needed; a rough sketch follows.
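
(The shape of it, roughly; WRITE10MB stands in for the actual program,
which just creates and writes one 10MB file:)

$ ! keep going until the disk fills up, counting the files written
$ set noon
$ n = 0
$ show time ! start time
$ loop:
$ run write10mb ! hypothetical program: creates one 10MB file
$ if $severity .ne. 1 then goto done ! stop on error (disk full)
$ n = n + 1
$ goto loop
$ done:
$ show time ! end time
$ write sys$output "10MB files written: ''n'"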

Dave Froble

Sep 2, 2018, 12:42:12 PM
Thanks for that. Your SimH results have me again thinking that I need
to perhaps do some tinkering in that direction.

Since it seems you have some experience with SimH, let me ask: do you
attempt to compartmentalize, i.e., keep VMS resources separate, or do
you just mix it all in with the host?

John E. Malmberg

Sep 2, 2018, 12:59:18 PM
I set up an LXD container on the host. It makes the SimH instance
visible to management applications that support libvirt.

https://sourceforge.net/p/vms-ports/wiki/SimH-VAX%20in%20a%20Container/

What would be nice is full libvirt integration for SimH.

Short term, a startup script that translates the libvirt-generated
XML into the SimH ini file would do.
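
(For reference, the sort of thing such a script would have to emit; a
minimal SimH VAX ini, with the memory size, disk image path, and MAC
address all placeholders:)

; minimal MicroVAX 3900 configuration
set cpu 64m
set rq0 ra92
attach rq0 /srv/simh/vaxvms.dsk
set xq mac=08-00-2B-AA-BB-CC
attach xq eth0
boot cpu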

-John

Chris Scheers

Sep 2, 2018, 9:35:33 PM
flunk...@gmail.com wrote:
> 2. If you are using an old version of VMS (e.g., 5.5), be aware it does not recognize non-DEC SCSI drives. Later VMS versions are more permissive.

Technically, VMS 5.5 does not have a problem with non-DEC SCSI drives.

It does have a problem with INQUIRY strings that are too long (> 15
characters?).

It also has a problem if the failure management bits on the drive are
not set the way 5.5 wants.

If the drive supports it, you can set the failure management bits, and
5.5 will use the drive out of the box. Alternatively, there is a
patched driver floating around that ignores these bits.

I don't think VMS 6.2 or later has either of these issues.



--
-----------------------------------------------------------------------
Chris Scheers, Applied Synergy, Inc.

Voice: 817-237-3360 Internet: ch...@applied-synergy.com
Fax: 817-237-3074

flunk...@gmail.com

unread,
Sep 4, 2018, 12:48:24 AM9/4/18
to
> Thanks for that. Your SimH results have me again thinking that I need
> to perhaps do some tinkering in that direction.
>
> Since it seems you have some experience with SimH, let me ask, do you
> attempt to compartmentalize, ie; keep VMS resources separate, or, do you
> just mix it all in with the host?

The host's sole purpose is to run SIMH. I also put an Ethernet card in the system and give SIMH its own Ethernet port. Otherwise, nothing special besides using Slackware with a bunch of stuff turned off. I'm a big fan of keeping things simple, even at the cost of being user-friendly.

A note on the disk speed of SIMH: the emulated VMS system has maybe 10GB of disk total. With the Linux system having 16GB of RAM and no other processes, every VMS disk operation is a cache hit from RAM. That's the real secret to SIMH's disk speed.

Rich Jordan

Sep 10, 2018, 2:34:19 PM
Just in case you're interested: Nemonix Engineering made (makes?) accessory cards for many VAXen, including yours, that provide both 100Mbit Ethernet and Ultra Wide SCSI interfaces. I've never run one, but I bought one on eBay long ago (too cheap to pass up at the time) and the person I eventually gave it to was quite happy with the performance; I think he had a 4000...

They are not cheap from the vendor, and the era of bargain old computer stuff on eBay has faded (everything is precious and collectible now), but a little shopping around might be worth it if you want to keep your system running instead of moving to an emulator.