
VMS to VMS data copy options/performance when losing a DECnet link


Rich Jordan

Apr 7, 2022, 7:24:03 PM
Customer has decided to turn off "legacy" DECnet support on their network. They currently use DECnet for copies between two nonclustered Integrity servers that are on different IP subnets/VLANs, with Cisco doing whatever magic it does to make DECnet appear local. This is Phase IV, not DECnet over IP.

Is there any documentation on relative performance for bulk data transfer on VMS, with preliminary data archiving (i.e., ZIP or BACKUP to a local archive file, so not transferring lots of individual files), using FTP/SFTP versus NFS as the transfer mechanism? I know details matter and I don't have them yet, but is one likely to be different enough that it is worth pursuing a test? Which won't be trivial given equipment availability...

Thanks for any info

Robert A. Brooks

Apr 7, 2022, 8:10:49 PM
On 4/7/2022 7:24 PM, Rich Jordan wrote:
> Customer has decided to turn off "legacy" DECnet support on their network.
> They currently use DECnet for copies between two nonclustered integrity
> servers that are on different IP subnets/VLANs and have Cisco doing whatever
> magic it does to make DECnet appear local. THis is Phase IV, not DECnet over
> IP.

Install DECnet-Plus (IA64) or DECnet/OSI (Alpha), and enable DECnet over TCP/IP.

That's likely the easiest solution, since it'll require no application change.

Other than the initial configuration, it'll be "set it and forget it".

If you have non-transparent task-to-task applications, you'll need to understand
some of the differences between Phase IV and Phase V, mostly in terminology.
Phase IV "objects" are Phase V "session control applications". NCL vs. NCP
has a bit of a learning curve.

If your use of DECnet is limited to $ SET HOST and $ COPY, then the differences
between Phase IV and Phase V will barely be noticeable.
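
For example, assuming DECnet over TCP/IP is configured on both ends, the
day-to-day commands stay exactly the same (node and file names below are
made up for illustration):

   $ SET HOST REMNOD
   $ COPY LOCAL.DAT REMNOD::DKA100:[TRANSFER]
   $ DIRECTORY REMNOD::DKA100:[TRANSFER]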

--
-- Rob

Steven Schweda

Apr 7, 2022, 9:06:30 PM
> [...] THis is Phase IV, not DECnet over IP.

> Install DECnet-Plus (IA64) or DECnet/OSI (Alpha), and enable DECnet
> over TCP/IP.

Yeah, what he said.

I've never worried about performance, but when I ran a SIMH VAX
emulator on a Mac, I was pleased with the DECnet-like behavior of
DECnet(-Plus) between my main IA64 system (ITS, 10.0.0.140, 1.140) and
the pseudo-VAX (WISP, 10.0.4.32, 1.32). Normal file access, like, say:
ITS $ directory /size wisp::
just worked.

In that case, the Mac was the router between the two IP subnets
(main Ethernet = 10.0.0.0/24 and Mac-emulator = 10.0.4.0/24).

Dave Froble

Apr 7, 2022, 9:16:21 PM
On 4/7/2022 7:24 PM, Rich Jordan wrote:
Tell customer to re-think ...

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Bob Gezelter

Apr 8, 2022, 7:24:37 AM
Rich,

As Robert noted, DECnet Phase V over IP is a viable choice. In situations where the client did not want to make that change, I have used sftp quite effectively, often in concert with ZIP/UNZIP.

As an example, in one quick test, I was able to transfer more than a terabyte of data in a reasonable time (I do not have the precise numbers in easy reach at the moment). The context was non-tape backup. The process was:
- Do BACKUP/IMAGE to a scratch device
- Use ZIP "-V" to compress the BACKUP save set. Your mileage will vary; I was able to get approximately 90% size reduction.
- Use sftp to copy the resulting ZIP files to the destination machine
- UNZIP the transferred files
- Use BACKUP to restore the saveset on the destination machine

The above was run in multiple BATCH jobs on both sides.
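
For what it's worth, the pipeline above looks roughly like this in DCL
(device, directory, and node names are invented for illustration; the exact
sftp invocation depends on the SSH kit installed):

   $ BACKUP/IMAGE/LOG DKA100: SCRATCH:[XFER]DKA100.BCK/SAVE_SET
   $ ZIP "-V" SCRATCH:[XFER]DKA100.ZIP SCRATCH:[XFER]DKA100.BCK
   $ SFTP "user@destination"     ! then PUT DKA100.ZIP at the sftp> prompt
   $ ! ... and on the destination machine ...
   $ UNZIP DKA100.ZIP
   $ BACKUP/IMAGE DKA100.BCK/SAVE_SET DKB200:  ! target typically mounted /FOREIGN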

- Bob Gezelter, http://www.rlgsc.com

Rich Jordan

Apr 8, 2022, 11:30:49 AM
Thank you all for the responses. I'm not excited about the thought of installing Phase V, mainly because it has been 25+ years since I even looked at it. I can see if it is an option; at least that I can test here at our office.

Losing the current DECnet transit is apparently not negotiable; we lost that argument. That choice is with the customer and their network vendor/support; we and the VMS boxes are a side issue that just needs to deal with the change.

As noted, we already do some data moves by doing BACKUP to local disk savesets (using the /DATA_FORMAT=COMPRESS option) and then FTP'ing those sets to a PC. This is fully tested, so we know we can retrieve those savesets, fix the attributes, and restore them. We do it this way because even a zero-compression encapsulating ZIP takes too much time for the larger savesets (ZIP would preserve the backup saveset file attributes).

The current VMS-to-VMS transfers are done with DECnet copies of individual files, or with BACKUP of folders/trees or bulk files to a remote saveset via DECnet that then gets restored on the remote system.

Steven Schweda

Apr 8, 2022, 11:56:00 AM
> [...] backup to local disk savesets (using the /DATA_FORMAT=COMPRESS
> option) [...]

> [...] fix the attributes, and restore them because doing even a zero
> compression encapsulating ZIP takes too much time for the larger
> savesets (ZIP would preserve the backup saveset file attributes).

If all you wanted Zip+UnZip to do was preserve the attributes of a
BACKUP save set, then I wouldn't bother. There are enough DCL scripts
floating around which can restore them in near-zero time.

It might be worth a quick experiment to see whether BACKUP without
compression followed by Zip with compression makes any sense.

> The current VMS to VMS is done with DECnet copies of individual files,
> [...]

The principal advantage of new DECnet (using IP) is that it works
like old DECnet.

Johnny Billquist

Apr 8, 2022, 12:48:31 PM
On 2022-04-08 02:10, Robert A. Brooks wrote:
> On 4/7/2022 7:24 PM, Rich Jordan wrote:
>> Customer has decided to turn off "legacy" DECnet support on their
>> network.
>> They currently use DECnet for copies between two nonclustered integrity
>> servers that are on different IP subnets/VLANs and have Cisco doing
>> whatever
>> magic it does to make DECnet appear local. THis is Phase IV, not
>> DECnet over
>> IP.
>
> Install DECnet-Plus (IA64) or DECnet/OSI (Alpha), and enable DECnet over
> TCP/IP.
>
> That's likely the easiest solution, since it'll require no application
> change.

Meh. I wouldn't even go there. I'd just install Multinet and do DECnet
over IP, and stay on Phase IV. Phase V is just going to make a mess
for no gain.

Johnny

Simon Clubley

Apr 8, 2022, 1:50:44 PM
On 2022-04-08, Rich Jordan <jor...@ccs4vms.com> wrote:
>
> Thank you all for responses. I'm not excited about the thought of installing Phase V mainly because it has been 25+ years since I even looked at it. I can see if it is an option. At least that I can test here at our office.
>

Two potential problems with that:

1) They may not like the idea of you running an older less secure
protocol over what is really just a TCP/IP tunnel.

2) Does DECnet Phase V itself offer any form of encryption ?
If not, they may not like the idea of unencrypted traffic running
on their network.

> Losing current DECnet transit is not negotiable apparently; we lost that argument but that choice is with the customer and their network vendor/support; we and the VMS boxes are a side issue that just needs to deal with the change.
>

If they are reducing the number of protocols running on their network
for security reasons, and especially getting rid of the older less
secure ones for that same reason, that's a very reasonable thing for
them to decide to do.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Walking destinations on a map are further away than they appear.

Simon Clubley

Apr 8, 2022, 1:51:50 PM
On 2022-04-07, Dave Froble <da...@tsoft-inc.com> wrote:
> On 4/7/2022 7:24 PM, Rich Jordan wrote:
>> Customer has decided to turn off "legacy" DECnet support on their network. They currently use DECnet for copies between two nonclustered integrity servers that are on different IP subnets/VLANs and have Cisco doing whatever magic it does to make DECnet appear local. THis is Phase IV, not DECnet over IP.
>>
>> Is there any documentation on relative performance for bulk data transfer with preliminary data archiving (ie ZIP or backup to a local archive file so not transferring lots of individual files) on VMS using FTP/SFTP versus using NFS as the transfer mechanism? I know details matter and I don't have them yet but is one likely to be enough different that it is worth pursuing a test? Which won't be trivial given equipment availability...
>>
>> Thanks for any info
>>
>
> Tell customer to re-think ...
>

The customer may be doing this precisely because they do want to
make their network more secure.

Dave Froble

Apr 8, 2022, 2:41:49 PM
On 4/8/2022 1:51 PM, Simon Clubley wrote:
> On 2022-04-07, Dave Froble <da...@tsoft-inc.com> wrote:
>> On 4/7/2022 7:24 PM, Rich Jordan wrote:
>>> Customer has decided to turn off "legacy" DECnet support on their network. They currently use DECnet for copies between two nonclustered integrity servers that are on different IP subnets/VLANs and have Cisco doing whatever magic it does to make DECnet appear local. THis is Phase IV, not DECnet over IP.
>>>
>>> Is there any documentation on relative performance for bulk data transfer with preliminary data archiving (ie ZIP or backup to a local archive file so not transferring lots of individual files) on VMS using FTP/SFTP versus using NFS as the transfer mechanism? I know details matter and I don't have them yet but is one likely to be enough different that it is worth pursuing a test? Which won't be trivial given equipment availability...
>>>
>>> Thanks for any info
>>>
>>
>> Tell customer to re-think ...
>>
>
> The customer may be doing this precisely because they do want to
> make their network more secure.
>
> Simon.
>

Security or nice helpful tools, choose one ...

Bob Gezelter

Apr 8, 2022, 7:17:59 PM
Rich,

If you are transferring trees or subtrees, the /IMAGE is completely unnecessary.

The reason I used ZIP was to preserve the OpenVMS file attributes when transferring. OpenVMS ZIP/UNZIP can preserve file attributes without problem.

How large are the data volumes being transferred?

Steven Schweda

Apr 8, 2022, 11:33:45 PM
> [...] OpenVMS ZIP/UNZIP can preserve file attributes without problem.

They can do it _with_ problems, too. There have been bugs, and there
are unsupported file-system features which BACKUP handles better.

help set file /enter

Also, idiosyncratic names like "a^.b." might get altered.
Everything's complicated.

Bob Gezelter

Apr 12, 2022, 9:49:29 AM
Steve,

Quite. Which is why my preferred solution is to do a BACKUP to a saveset and then ZIP/UNZIP the saveset.

Rich Jordan

Apr 14, 2022, 10:59:27 AM
Did a couple of quick tests. For 25-50GB backup savesets on the RX2800s, run to local disks with BACKUP parameters (and RMS settings) optimized, the composite time for a compressed backup plus ZIP with no compression was about 15% shorter than a backup without compression plus ZIP "-V6" (going from 6 to 9 got very little extra compression in our test but added a fair amount of ZIP time). For the system disk backup, done while booted from an alternate boot disk (the system disk is our smallest), the times were almost even. So for now we're sticking with BACKUP doing the compression.

Rich Jordan

Apr 14, 2022, 11:32:36 AM
Sorry for delays, got pulled into someone else's mega-project for a while.

To be clear, the image backups are made and transferred to a PC server so they go offsite. Data transfer from one VMS machine to the other is done using non-image backups, then transferring the saveset; or, if there is only a small batch of files, a BACKUP with the target being the remote system (using the proper syntax), or even just COPY.

It's not impossible that we might very occasionally transfer an image backup to the remote system (or make the image backup with the saveset on the remote system), but that would be a serious recovery operation, not daily data updates.

We only used ZIP on backup savesets to preserve saveset attributes when they were transferred to PC servers, not for any VMS to VMS usage.

I know ZIP can be used to create a local archive of files or folders/trees with all file attributes saved (then we transfer the archive); I guess it's worth trying, but there is the _occasional_ need to update one or two major files using /IGNORE=INTERLOCK to get them to the backup system. In those cases the file is held open by processing until the customer's staff fixes the problem that caused the hold, but no changes are occurring, so the data is consistent. This can happen when processing runs into a problem; the customer's staff is alerted and probably working the issue, but we still need to get that file moved. ZIP can't do that.
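
For that case we stick with something along these lines (placeholder file
and staging names), then transfer and restore the resulting saveset on the
other side:

   $ BACKUP/IGNORE=INTERLOCK/LOG DKA5:[PROD]BIGFILE.DAT STAGE:[XFER]BIGFILE.BCK/SAVE_SET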

And no, telling their devs to change that isn't going to go anywhere. Gotta work with what we got.

Still wishing they'd jumped for a cluster license and shadowing.

Chris Townley

Apr 14, 2022, 11:44:02 AM
Have you tried zip -3 (or -2)? You get a little less compression, but a
much lower CPU load/elapsed time.

--
Chris

Rich Jordan

Apr 14, 2022, 11:47:38 AM
Did; I don't recall the numbers. Levels 5 and 6 were the break-even for time, including the transfer after BACKUP and ZIP were completed. But even -0 took quite a while, even with the optimizations in place.

Rich Jordan

Apr 14, 2022, 2:31:51 PM
Well, I tried various combinations of the TCP protocol settings (like NODELAY_ACK), the FTP logicals for initial and append allocations, the FTP window size, etc. FTP is doing 1.4 to 1.64 Mbps (that last figure due to /NODELAY_ACK) and won't go any higher. SFTP is supposed to be slower, but we'll try it anyway. For now I'm going to locate iPerf ports for VMS and run some tests with it to see what actual throughput we are getting between the sites.

Jeffrey H. Coffield

Apr 14, 2022, 2:52:54 PM
We had to move large save sets over FTP when a system was replaced with
another system at a different location, and we used the following settings
to speed up the transfers:

$ TCPIP
sysconfig -r socket sb_max=2000000
sysconfig -r socket somaxconn=10240
sysconfig -r socket sominconn=10240
sysconfig -r inet tcp_sendspace=300000 tcp_recvspace=300000
sysconfig -q socket
sysconfig -q inet tcp_sendspace tcp_recvspace

I know I didn't figure this out but I don't remember where I found these
settings.

Jeff
www.digitalsynergyinc.com

Steven Schweda

Apr 14, 2022, 3:56:52 PM
> We only used ZIP on backup savesets to preserve saveset attributes
> when they were transferred to PC servers, not for any VMS to VMS usage.

Just to repeat, using "zip -0" simply to preserve the attributes of a
BACKUP save set seems to me like excessive I/O to preserve a record
format and a record length, both of which could be restored in
approximately no time using SET FILE /ATTRIBUTE (or BACKUP /REPAIR).
DCL scripts exist to do it automatically. See, for example:

http://antinode.info/dec/sw/fixrec.html
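
The typical one-liner, assuming the save set was written with BACKUP's
default 32256-byte block size (substitute the actual /BLOCK_SIZE value if
a different one was used):

   $ SET FILE /ATTRIBUTE=(RFM:FIX, LRL:32256, MRS:32256, RAT:NONE) SAVESET.BCK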

Rich Jordan

Apr 14, 2022, 4:17:12 PM
Jeff, thanks will take a shot with those.

I'm currently testing remote VMS ftp to/from the PC server that the main system sends its nightly backup savesets to, and getting about 4.8 Mbps, so about 3x the VMS to VMS rate. The main system gets about 15Mbps to that PC but they are in the same datacenter. Still slow for gigabit but it fits their timing windows.

We'll see if the above parameters make a difference between the remote VMS server and the PC first; I can't make changes to the production box until the weekend, especially if a reboot is needed (will have to check).

Thanks again

Jeffrey H. Coffield

Apr 14, 2022, 5:12:16 PM
No reboot necessary. I set up those commands to be executed on system
startup.
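
For reference: sb_max is the maximum socket buffer size, and tcp_sendspace /
tcp_recvspace are the default TCP send/receive buffer sizes, which together
mostly govern the usable TCP window; somaxconn/sominconn only affect the
listen backlog. Since sysconfig -r changes are runtime-only, one way to
apply them at boot is something like this in the site-specific TCP/IP
startup procedure:

   $ ! e.g. in SYS$STARTUP:TCPIP$SYSTARTUP.COM (or your own startup file)
   $ TCPIP sysconfig -r socket sb_max=2000000
   $ TCPIP sysconfig -r inet tcp_sendspace=300000 tcp_recvspace=300000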

Jeff

Mark Daniel

Apr 14, 2022, 8:12:07 PM
On 15/4/22 4:22 am, Jeffrey H. Coffield wrote:
>
>
> On 04/14/2022 11:31 AM, Rich Jordan wrote:
>> On Thursday, April 7, 2022 at 6:24:03 PM UTC-5, Rich Jordan wrote:
8< snip 8<
> We had to move large save sets over FTP when a system was replaced with
> another system at a different location and we used the following setting
> to speed up the transfers:
>
> $       TCPIP
> sysconfig -r socket sb_max=2000000
> sysconfig -r socket somaxconn=10240
> sysconfig -r socket sominconn=10240
> sysconfig -r inet tcp_sendspace=300000 tcp_recvspace=300000
> sysconfig -q socket
> sysconfig -q inet tcp_sendspace tcp_recvspace
>
> I know I didn't figure this out but I don't remember where I found these
> settings.

A quick consult with Dr Google shows lamentably few hits for

"openvms sysconfig -r socket sb_max"

and the rest (though some which may be of interest).

I also notice an online manual

"HP TCP/IP Services for OpenVMS Tuning and Troubleshooting"

available from various sites, e.g. (quoted to prevent wrapping)

> https://www.digiater.nl/openvms/doc/alpha-v8.3/83final/documentation/pdf/aa_rn1vb_te.pdf

which seems to be missing from the VSI collection

https://docs.vmssoftware.com/

Are the directives and recommendations still applicable to VSI TCP/IP
Services 5 and 6?

> Jeff
> www.digitalsynergyinc.com

--
Anyone, who using social-media, forms an opinion regarding anything
other than the relative cuteness of this or that puppy-dog, needs
seriously to examine their critical thinking.

Rich Jordan

Apr 20, 2022, 5:18:49 PM
I actually did go through the TCP/IP troubleshooting manual and tried a couple of the suggestions. Benefits were minimal and could have just been the random impact of actual network load at the time of testing. Could not do jumbo packets (but also found no reference to indicate that FTP would benefit from them) because the intermediate network doesn't support them.

The main VMS server and PC backup server are on the same LAN; the second VMS server is at the remote site. The test saveset was one of the small ones; I'll need to get times on the three much larger ones.

Main VMS to local PC backup server: transfers a 6.7M-block backup file in 3 minutes 41 seconds.
Main VMS to remote VMS: transfers the same file in 32 minutes 21 seconds (push or pull).
Remote VMS server: pulls the same backup file from the PC backup server in 10 minutes 30 seconds.
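
(Rough arithmetic, assuming 512-byte blocks: 6.7M blocks is about 3.4 GB, so
those times work out to roughly 15 MB/s to the local PC, about 1.8 MB/s VMS
to VMS, and about 5.4 MB/s pulling from the PC at the remote end.)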

So it is faster to relay backups (and presumably other data of any significant size) through the PC backup server than doing it directly VMS to VMS.

The sysconfig changes above did not make a measurable difference when set on either VMS system or both; maybe a 2% difference in time, which was likely more due to network usage.

Setting TCP protocol DELAY_ACK to disabled made a difference of a few percentage points overall, but still nothing major.

For now I guess we'll have to live with it. I'll try setting up sftp/scp to test but everything I've read says those will be slower.


Simon Clubley

Apr 21, 2022, 1:43:44 PM
On 2022-04-20, Rich Jordan <jor...@ccs4vms.com> wrote:
>
> Main VMS server and PC backup server on the same LAN, second VMS server
> at the remote site. The test saveset was one of the small ones; I'll need
> to get times on the three much larger ones.
>
> Main VMS to local PC backup server transfers a 6.7M block backup file in
> 3 minutes 41 seconds
> Main VMS to remote VMS transfers same file in 32 minutes 21 seconds (push
> or pull)
> Remote VMS server pulls the same backup file from the PC backup server in
> 10 minutes 30 seconds.
>

That middle VMS to VMS time is especially pathetic.

Once the production version of VMS is available, someone should do
some testing comparing the time it takes to move data across the
network on the same hardware using both Linux and VMS.

If the results are anything like the above, that might shame
VSI Engineering into allocating engineering time to track down
the performance problems and fix them.

> So it is faster to relay backups (and presumably other data of any
> significant size) through the PC backup server than doing it directly VMS
> to VMS.
>

[snip]

> For now I guess we'll have to live with it. I'll try setting up sftp/scp
> to test but everything I've read says those will be slower.
>

I don't suppose there are any auto-negotiation LAN interface setting
problems (or similar) are there ?

Are any errors being logged at physical network interface level ?

Do you have access to SMB on the VMS systems ?

If so, I wonder if SMB would show better performance.

If it does, then for goodness sake test the hell out of it before
switching to using that method!!!
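
If you want to rule out a duplex mismatch or interface errors from the VMS
side, something like this should show them (EWA0 is just an example device
name; check HELP LANCP for the exact qualifiers on your version):

   $ MCR LANCP
   LANCP> SHOW DEVICE EWA0/CHARACTERISTICS
   LANCP> SHOW DEVICE EWA0/COUNTERS

Look for a speed/duplex that doesn't match the switch port, and for
non-zero error counters.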

Simon Clubley

Apr 22, 2022, 1:56:14 PM
On 2022-04-21, Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
>
> Do you have access to SMB on the VMS systems ?
>
> If so, I wonder if SMB would show better performance.
>

I also wonder if you might get better performance using NFS if that
is an option for you.

> If it does, then for goodness sake test the hell out of it before
> switching to using that method!!!
>

This comment applies to NFS just as much as it does to SMB! :-)

chris

Apr 22, 2022, 7:21:26 PM
On 04/22/22 18:56, Simon Clubley wrote:
> On 2022-04-21, Simon Clubley<clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:
>>
>> Do you have access to SMB on the VMS systems ?
>>
>> If so, I wonder if SMB would show better performance.
>>
>
> I also wonder if you might get better performance using NFS if that
> is an option for you.
>
>> If it does, then for goodness sake test the hell out of it before
>> switching to using that method!!!
>>
>
> This comment applies to NFS just as much as it does to SMB! :-)
>
> Simon.
>

NFS on modern hardware and OS is very quick, Mb per second
without really trying and on a 1Gb network. Use it all the
time here...

Chris

Mark Berryman

Apr 22, 2022, 8:51:46 PM
I think there may be something wrong with your network.
For me, VMS to VMS is around 309 Mbits/sec. Both FTP and DECnet are
essentially the same.
VMS to Mac, using FTP, is around 763 Mbits/sec. However, I have jumbo
frames turned on (FTP can take advantage of jumbo frames, DECnet not so
much) and my Mac uses SSD instead of physical disks.

If you are really only getting 1.4 to 1.6 Mbps then either there is
something wrong with your network or something is seriously slowing I/O
on your VMS systems. If you have the space, how fast does the backup
file copy disk to disk on the same VMS system? I used a 4GB file as a
test and it took about a minute.

Mark Berryman

Mark Daniel

Apr 26, 2022, 1:53:32 PM
On 15/4/22 9:42 am, Mark Daniel wrote:
> On 15/4/22 4:22 am, Jeffrey H. Coffield wrote:
8< snip 8<
> I also notice an online manual
>
>   "HP TCP/IP Services for OpenVMS Tuning and Troubleshooting"
>
> available from various sites, e.g. (quoted to prevent wrapping)
>
>> https://www.digiater.nl/openvms/doc/alpha-v8.3/83final/documentation/pdf/aa_rn1vb_te.pdf
>>
>
> which seems to be missing from the VSI collection
>
> https://docs.vmssoftware.com/
>
> Are the directives and recommendations still applicable to VSI TCP/IP
> Services 5 and 6?
>
>> Jeff
>> www.digitalsynergyinc.com

VSI have responded, adding (quoted to prevent wrapping):

> https://docs.vmssoftware.com/vsi-tcp-ip-services-for-openvms-tuning-and-troubleshooting/

Rich Jordan

Apr 29, 2022, 2:54:34 PM
Mark
Unfortunately the two servers are not colocated; one is remote, connected by some configuration of their 'metropolitan area network', but we have no control over or access to that.

We have tried tweaking the sysconfig settings on both boxes, and the few possibly relevant FTP logicals, and have run through the HP troubleshooting guide (will look at the VSI one). Eventually production will be upgraded to VSI, but we're still waiting on dev support to be available to test, because we expect issues with the SSL and SSH version changes. To be clear, the backup server would also be running HPE VMS if it were brought up; the alternate boot disk that it lives on to do the transfers and restore the savesets each night is running VSI.

The local disk-to-disk backups on the main server (which is still HPE VMS V8.4), running to compressed savesets, have times as follows. The settings used were the result of a lot of testing and tweaking of RMS and BACKUP parameters on the previous RX3600 server, and two backup streams run simultaneously, again after testing showed that gave us the best overall throughput. The destination disk is a unit on a mirrorset on the RAID controller; the source disks are on a four-drive ADG array.

Backups are image backups with data disks fully mounted but all activity quiesced so no open files.
Some time samples:

System disk DKA7: Saveset size 6,598,848 blocks compressed, elapsed time 16 minutes 42 seconds. Source data is 52M blocks including the /NOBACKUP system files
User disk DKA0: Saveset size 23,283,008 blocks compressed, elapsed time 48 minutes 26 seconds. Source data is 82M blocks, no /NOBACKUP files
Primary data disk DKA5: Saveset size 59,619,040 blocks compressed, elapsed time 3 hours 4 minutes. Source data is 273M blocks, no /NOBACKUP files

I tested copying the DKA7 saveset disk to disk (this time from the mirrored backup disk to a plain old disk used for transfer staging) and back: 29 seconds and 27 seconds respectively.
Copying the DKA5 saveset as above took 3 minutes 57 seconds and 3 minutes 54 seconds.
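
(Again assuming 512-byte blocks, that's roughly 3.4 GB in under 30 seconds
and roughly 30 GB in about four minutes, i.e. on the order of 115-130 MB/s
for local disk-to-disk copies.)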

Doesn't sound like we have disk-level issues on the copies. The backups do seem long, but the user and production disks have a couple hundred thousand small files along with quite a few very large ones. They're also completing in about 60% of the time they took on the retired RX3600 server with SA controller and universal SCSI disks. But due to time constraints we didn't get to do the full retuning and testing on backups to see if we could get them running faster on the new box, so it's still the same backup commands and RMS extend settings.

We'll see if the tweaks from the VSI troubleshooting guide, if any are applicable, affect things.

BTW, when the backup server was still in our office, our FTP transfers between an HP V8.4 AlphaServer DS10 with GbE and an RX2660 running V8.3-1H1 with GbE, on the same ProCurve switch, were also terrible, though I don't recall the exact numbers.

Thanks for responding, sorry for the delay.

Rich

Tad Winters

May 22, 2022, 11:00:05 AM
to info...@rbnsn.com

I'm so behind in reading that I haven't followed closely, but I think
you were transferring save sets; I don't recall seeing all the
qualifiers you used. Years ago, I recall using /GROUP_SIZE=0,
overriding the default size of 10. This made the save sets smaller.
The group size was more useful in the days of unreliable tape.

- Tad
