Backup /statistics - like what's available with search/statistics

IanD

Aug 13, 2017, 10:47:26 AM
8.4 VSI L1 OpenVMS

Just wondering if Backup comes with any statistics reporting? I'm meaning something along the lines of search/stat

i.e. backup/statistics=(performance)

I didn't see anything as a qualifier when I briefly looked at the Backup command

I know if you use compression you get a percentage compression number spat out, but that in itself is fairly useless since I didn't see where it records the bytes actually backed up

I would like to keep track of things like:
Bytes backed up, files and directories backed up, throughput data (maybe average throughput, 95th percentile stats), elapsed time and CPU time, and similar info to what search/stat gives

Since backup is processing this data as it backs up, why not keep a running total of such data and dump it at the end and/or into defined symbols like search/stats does

I'm sure people are thinking...why not just use the data at the bottom of a completed batch job?
Well, not every backup is performed on a one-to-one batch job ratio. Besides that, if search can have statistics, why not backup? Backup is trolling through the data anyhow

For my particular use, we call the backup command in a loop, cycling through various disks and at times various directories. Knowing some of the statistics per backup operation would certainly be helpful and could be fed into a running backup statistics DB, making it easier to construct backup estimates for future projections etc
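
In the meantime, about the best I can do is wrap that loop in some DCL and collect whatever the lexical functions will hand me. Something roughly like this (illustrative only - the device and save-set names are made up, it assumes the tape is already mounted, and it only captures elapsed and CPU time, not bytes or file counts):

$! Crude per-operation stats collection using DCL lexicals
$ disks = "DKA100:/DKA200:/DKA300:"            ! made-up device names
$ if f$search("backup_stats.csv") .eqs. "" then create backup_stats.csv
$ open/append stats backup_stats.csv
$ i = 0
$ loop:
$   disk = f$element(i, "/", disks)
$   if disk .eqs. "/" then goto done
$   savset = disk - ":"                        ! e.g. DKA100
$   start_time = f$time()
$   start_cpu  = f$getjpi("", "CPUTIM")        ! CPU time so far, in 10 ms ticks
$   backup/image/log 'disk' mka600:'savset'.bck/save_set
$   cpu_sec = (f$getjpi("", "CPUTIM") - start_cpu) / 100
$   elapsed = f$delta_time(start_time, f$time())
$   write stats disk, ",", start_time, ",", elapsed, ",", f$string(cpu_sec)
$   i = i + 1
$   goto loop
$ done:
$ close stats

That at least gives elapsed and CPU time per operation to feed the trending DB, but bytes written, file counts and throughput still have to come from somewhere else, which is really the point of the request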

While we are enhancing backup :-) how about a backup/statistics=(tape_report) too? Backup can then report back on the percentage full a tape is, bytes consumed and an estimated amount of free bytes left on the tape. Obviously the tape report option is a fair amount of work, but I would have thought the statistics one should be fairly easy since backup is processing the data anyhow

Also, out of interest, is backup multi-threaded? Most zip-style compression code is multi-threaded these days, so I'm wondering whether the backup compression takes advantage of multiple cores or not

Craig A. Berry

Aug 13, 2017, 3:38:41 PM
On 8/13/17 9:47 AM, IanD wrote:
> 8.4 VSI L1 OpenVMS
>

> Just wondering if Backup comes with any statistics reporting? I

Seems like backup/progress_report might do what you want.


Stephen Hoffman

Aug 14, 2017, 1:55:54 PM
On 2017-08-13 14:47:24 +0000, IanD said:

> 8.4 VSI L1 OpenVMS

V8.4-2L1.

> Just wondering if Backup comes with any statistics reporting? I'm
> meaning something along the lines of search/stat

Usual brute-force approach is to use SDA commands and SHOW PROCESS and
MONITOR and maybe SHOW MEMORY /CACHE and $getrmi and other services and
wall-clock times to gather that data, though BACKUP with the
recommended quota settings will be within a few percent of the I/O
throughput for the server; that is, of the slowest path used by BACKUP.

There's a section on performance in the old FAQ that might be
interesting: "What can I do to improve BACKUP performance?", and
keeping the proportions among the recommended process quotas is a big
part of that. Using /IO_LOAD is the current mechanism used to throttle
the load inherently generated by BACKUP, FWIW. Compression usually
helps with performance, too, though whether it's better to use host
compression or compression out at the tape drive may take some testing
with your particular data and system loads and the hardware involved.

> Bytes backed up, files and directories backed up, throughput data
> (maybe average throughput, 95th percentile stats), elapsed time and CPU
> time, and similar info to what search/stat gives
> ...
> I'm sure people are thinking...why not just use the data at the bottom
> of a completed batch job? Well, not every backup is performed on a
> one-to-one batch job ratio. Besides that, if search can have
> statistics, why not backup? Backup is trolling through the data anyhow

At some point, having a statistics option added into the standard
command qualifier package would be an interesting enhancement to
OpenVMS, because otherwise the various utilities will each develop
unique designs for reporting this data. Getting it to optionally
generate the data in a machine readable format will interest some
folks, too. This does mean modifying each of the tools, where some
other platforms will use an approach analogous to using PIPE and
separate performance-collection tools to gather that data.

> For my particular use, we call the backup command in a loop, cycling
> through various disks and at times various directories, knowing some of
> the statistics per backup operations would certainly be helpful and can
> be used to feed into a running backup statistical DB, making it easier
> to construct backup estimates for future projections etc

If your storage devices support the retrieval of the current tape
position (I've met some drives that don't), toss an IO$_DIAGNOSE request
at the device between BACKUP operations and ask it.

For finer control than the blunt-instrument of a DCL procedure, there's
also the callable BACKUP API.

> While we are enhancing backup :-)

While redecorating the existing BACKUP design certainly is possible,
I'm skeptical that any fundamental improvements are feasible within the
current BACKUP design. Some talented folks have looked at and have
worked diligently on the current design, too. The folks that were
last measuring this had the BACKUP I/O performance at 90% of the speed
of the slowest part of the path, too. So there's very likely not much
room for improvement in the current BACKUP design, with users left to
select compression, incremental or otherwise better-targeted input
selection, potentially defragmenting, or (of course) faster hardware.
Or some fairly fundamental design changes for archiving data on
OpenVMS, of course.

Rummaging the file system directory by directory and file by file and
in an application-uncoordinated fashion only gets you so far with
performance. You eventually get throttled by the I/O path.

Whether there might be hooks to make this process more efficient in the
upcoming VAFS (storage compression, or maybe some future support for
ZFS-style snapshots, or application-coordinated BACKUP support, etc),
or maybe some new backup design or new tool under consideration?

Yes, it's possible to pull members from OpenVMS HBVS RAID sets after
quiescing their apps, and there are sites that use exactly that, too.
But I digress.

> Backup can then report back on the percentage full a tape is, bytes
> consumed and an estimated amount of free bytes left on the tape.

The remaining capacity of the tape requires querying the tape position
as the tapes themselves implement compression. SSD and HDD storage
usage is getting murky in recent years too, with de-duplication and
provisioning.

> Also, out of interest, is backup multi-threaded? I'm thinking when
> performing the compression, most zip code is multi-threaded, I'm
> wondering if the backup compression takes advantage of multiple cores
> or not when performing compression

I'd tend to doubt that BACKUP is using threads as I've never seen
BACKUP light up more than one core, though it's been a while since I've
looked at the innards of BACKUP. BACKUP is most definitely using
overlapping I/O very heavily. Data compression and encryption are not
usually the performance bottleneck with other archival tools on other
platforms due to the relative efficiencies of those processes; the
processors are generally faster than the OpenVMS-vintage I/O paths.
Compressing (unencrypted) data is more time-efficient than the I/O
overhead of writing out that same data. Data that you don't write
beats data that you do. Some boxes are compressing the contents of
main memory, too. But I digress. Whether adding threads would help
BACKUP performance? It might help with data compression and
encryption, but there's the coordination overhead inherent in
threading, and adding more threads doesn't and won't ever make the one
hopefully-already-streaming tape drive any faster. If you're
throttled on I/O, then adding threaded compression or encryption
eventually stalls.

Spending time figuring out how to back up less data is usually the
biggest win. Or going from SSD to HDD or from HDD to bigger-slower
HDD for instance, and from that intermediate tier out to tape or other
longer-term storage. (Tapes can be fast as compared with HDDs, but
quiescing apps and splitting off RAID volumes or using a ZFS-style
snapshot approach is massively faster.)


--
Pure Personal Opinion | HoffmanLabs LLC

IanD

Aug 16, 2017, 10:14:52 PM
On Monday, August 14, 2017 at 5:38:41 AM UTC+10, Craig A. Berry wrote:
> On 8/13/17 9:47 AM, IanD wrote:
> > 8.4 VSI L1 OpenVMS
> >
>
> > Just wondering if Backup comes with any statistics reporting? I
>
> Seems like backup/progress_report might do what you want.

Not quite. It doesn't give a definitive result at the end. It's like a lot of what VMS seems to offer when it comes to performance data: snapshot data

I was more interested in something that gave a statistical picture at the end of the operation, so that data could then be fed into a collection for trending

On Tuesday, August 15, 2017 at 3:55:54 AM UTC+10, Stephen Hoffman wrote:
> On 2017-08-13 14:47:24 +0000, IanD said:
>

<snip - not because I didn't find it more than interesting...>

>
> At some point, having a statistics option added into the standard
> command qualifier package would be an interesting enhancement to
> OpenVMS, because otherwise the various utilities will each develop
> unique designs for reporting this data. Getting it to optionally
> generate the data in a machine readable format will interest some
> folks, too. This does mean modifying each of the tools, where some
> other platforms will use an approach analogous to using PIPE and
> separate performance-collection tools to gather that data.
>

Pretty much what I was driving at, as a bigger picture

Data is important; without it, you're flying blind, taking guesses and often missing the mark.

Being able to get data out of a process in a meaningful and workable format isn't just a nice-to-have IMO. It's vital, especially if one wants to be part of the greater work of production workload management, versus running individual processes

Yes, I have raised the difficulty of getting stats out of VMS before, in various ways. Lots of general data is available via things like MONITOR, but it's usually snapshot data and often rolled up so that the fine-grained details within are not available

I'm sure this type of request would involve modifying every tool, but at least if we worked towards some type of statistics/performance module, we would eventually get there.

Even something like DECtrace or a derivative could perhaps be used? Was that sold to Oracle too? *sigh*

<snip>
Alas, while all valid, they are all outside the time I have to dedicate to anything beyond a simple scripted solution.
If getting meaningful data out of your OS always involves workarounds and dropping down to low-level code solutions, then perhaps it's time for a revamp of the OS! Oh wait, that's what we are doing :-)

I was, in part, expressing what is in my opinion the greater need to get more data out of VMS

> > Backup can then report back on the percentage full a tape is, bytes
> > consumed and an estimated amount of free bytes left on the tape.
>
> The remaining capacity of the tape requires querying the tape position
> as the tapes themselves implement compression. SSD and HDD storage
> usage is getting murky in recent years too, with de-duplication and
> provisioning.
>

This extra want of mine was pie in the sky thinking :-)
I have unverified assumptions that RDB backups are multi-threaded, but rdb has the luxury of being able to work on multiple storage areas at the same time, very different to RMS I guess, that has to troll sequentially over a volume?

While multi-threading the data stream for backup might be too much of a development sinkhole, I would think spinning off the compression shouldn't be too hard? (says me who has not written multi-threaded stuff for a very long time).
I think the compression uses zlib doesn't it?
I wonder how much of a performance kick multi-threading zlib compression would give to the overall backup run time? Might not be worth the development effort to save probably around 40 mins on a volume? But what about bigger volumes that are coming???

>
> --
> Pure Personal Opinion | HoffmanLabs LLC

Thanks once again Hoff

Stephen Hoffman

Aug 17, 2017, 12:28:50 PM
I'd prefer to avoid modifying everything. Modifying every tool to add
this instrumentation is a classic OpenVMS approach, but the approach
doesn't scale well. We're almost inevitably facing some tool that
wasn't modified, and adding the hooks into the apps we're supporting.
Yes, adding the qualifier to the utility qualifier set and adding
it into specific apps will cover specific cases, and it avoids
expending even more effort dealing with the tools that do get modified.
But it's still not the best way to systematically gather and collect
performance data.

Yes, if you have a large or complex app that you're maintaining, you
should most definitely instrument it. What I'm referencing here is
adding generic monitoring mechanisms to the operating system itself,
mechanisms that reduce the need to more heavily instrument every app
and every tool in the system environment.

The available OpenVMS performance tools and APIs are ad-hoc. The
POLYCENTER software was divested, and some of the capabilities those
POLYCENTER tools provided were subsequently reimplemented in OpenVMS or
add-on tools. (AFAIK, PCSI was the sole exception to that divestment,
too.) OpenVMS development tools are simply not competitive; neither
the debugger and SDA extensions, nor DECset PCA and LSEDIT, nor the SDA
extensions, nor the third-party tools. The OpenVMS options for
implementing instrumentation involves reading and writing kernel mode
directly or via SDA, or what can be gotten from $getjpi and $getrmi
calls, the accounting and auditing data, trolling MONITOR data, etc.
There's not a lot of support for acquiring data from various OpenVMS
system components, either; from XQP, XFC, et al. Ad-hoc means, writ
large.
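
By way of example of just how ad-hoc this gets, about the best that's
available from bare DCL, short of writing code against those services,
is differencing the per-process counters around the operation of
interest. An untested sketch, with made-up device and directory names:

$! Ad-hoc per-process instrumentation via DCL lexicals
$ dirio0 = f$getjpi("", "DIRIO")      ! direct I/O count so far
$ bufio0 = f$getjpi("", "BUFIO")      ! buffered I/O count so far
$ flts0  = f$getjpi("", "PAGEFLTS")   ! page faults so far
$ backup/image dka100: dka200:[savesets]dka100.bck/save_set
$ write sys$output "Direct I/O:   ", f$string(f$getjpi("", "DIRIO") - dirio0)
$ write sys$output "Buffered I/O: ", f$string(f$getjpi("", "BUFIO") - bufio0)
$ write sys$output "Page faults:  ", f$string(f$getjpi("", "PAGEFLTS") - flts0)

That's per-process data only, of course; the system-wide and
per-device views mean $getrmi, MONITOR or SDA, with all the caveats
above.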

As for what's available on other platforms, OpenVMS instrumentation is
a whole lot less than what's available on Linux and BSD.

http://www.brendangregg.com/linuxperf.html
https://forums.freebsd.org/threads/48802/

Or just start rummaging here: http://www.brendangregg.com

Yes, some of these Linux and BSD tools are cryptic, arcane, fussy and
can otherwise be problematic. Developers need to learn from and avoid
the problems and the limitations of existing tools, too.

Using Xcode and Instruments on macOS is so far past what OpenVMS
offers, there's just no comparison with the OpenVMS tools. And that's
before launching dtrace for a look around.

>> Spending time figuring out how to back up less data is usually the
>> biggest win. Or going from SSD to HDD or from HDD to bigger-slower
>> HDD for instance, and from that intermediate tier out to tape or other
>> longer-term storage. (Tapes can be fast as compared with HDDs, but
>> quiescing apps and splitting off RAID volumes or using a ZFS-style
>> snapshot approach is massively faster.)
>>
>
> I have unverified assumptions that RDB backups are multi-threaded, but
> rdb has the luxury of being able to work on multiple storage areas at
> the same time, very different to RMS I guess, that has to troll
> sequentially over a volume?

IIRC/AFAIK, the Rdb backup and restoration tools were a fork of the
OpenVMS BACKUP source code, but have most certainly diverged. The
RMU/BACKUP processes I've looked at — which are not exactly current
versions — were also single-threaded when last examined.

While it's an entirely different implementation from how Rdb can
distribute the constituent files of a database across multiple
sector-addressable storage devices, OpenVMS can be configured to
distribute I/O over
multiple volumes via RAID, and that was and remains a very common
approach with HDD storage. That's because rotating rust HDDs are
really slow. That's how we got massive racks of HDDs and RAID-1 and
RAID-10 and RAID-6 et al. With SSD storage, the folks I'm working
with have tended to abandon that approach and now largely use RAID for
its perceived benefits to reliability; using RAID for performance is
less common. (I'm here ignoring distributed clusters, where HBVS and
local SSD storage certainly can have a performance benefit and
particularly for keeping the locality of the read activity. But from
your previous comments, you're seemingly not operating in that
environment.)

> While multi-threading the data stream for backup might be too much of a
> development sinkhole, I would think spinning off the compression
> shouldn't be too hard? (says me who has not written multi-threaded
> stuff for a very long time).

When I'm deciding whether to retrofit threading in an existing app, I
prefer to run some performance tests to try to see where the app is
spending its time, and what is throttling performance; at what effect
any particular optimization effort can reasonably have. While there
can be reasons to rework source code for reasons of stability and
maintainability, optimizing any not-actually-the-bottleneck-code to be
faster is usually less-than-helpful from a performance perspective.
q.v. Amdahl's Law. I've occasionally found that DECset PCA can point
me at a completely different and surprising routine in the code that's
the performance bottleneck, too.
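
As a back-of-the-envelope Amdahl's Law example here: if data
compression were, say, 20% of the elapsed time of a BACKUP pass, and
threading somehow made the compression four times faster, the overall
pass only speeds up by a factor of

    1 / ((1 - 0.20) + (0.20 / 4)) = 1 / 0.85, or roughly 1.18

call it 18% end to end, and that's before the thread coordination
overhead and before the I/O path pushes back. The 20% is purely
illustrative; measure before trusting any such number.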

I simply don't know that making compression or encryption multithreaded
will make BACKUP any faster. It might. Or it might all be a lot of
effort that still runs up against the speed limits on the input and
output I/O paths. But best case, adding multithreading just gets us
that much closer to that same performance wall.

Always compress before you encrypt. Always. But I digress.

> I think the compression uses zlib doesn't it?
> I wonder how much of a performance kick multi-threading zlib
> compression would give to the overall backup run time? Might not be
> worth the development effort to save probably around 40 mins on a
> volume? But what about bigger volumes that are coming???

Threading adds complexity to the code, and — without some performance
modeling — I'm not convinced that changes to speed encryption or
compression would be a benefit in this particular case. BACKUP tends
to be I/O limited, and gets close to 90% of the hardware bandwidth,
which means that either the data has to be reduced (compression, being
much smarter about what gets backed up, defragmenting, etc), or the I/O
path hardware gets an upgrade. Tapes are usually faster than HDDs.
SSDs make the I/O path a whole lot less of an issue for main storage,
and tapes are glacial in comparison to SSD. If you optimize the
performance of some code that's not the bottleneck, your optimized code
is inevitably going to stall waiting for the bottleneck to clear.

Then there's that the existing design of how folks back up their data
on OpenVMS is often firmly rooted in antiquity, which doesn't help
things. For those that haven't already, take a look at how ZFS is
designed; at how it "violates" the layering that can be common in many
app and system designs.

Then there's that a number of OpenVMS installations are still using HDD
storage. That works for many of those same sites, certainly. But
swapping in SSD storage is often a whole lot cheaper than tuning a slow
app.

Migrating to designs that can stripe data across multiple target
devices works, and so does splitting off volumes in HBVS. None of
which addresses that we are all often backing up a whole lot of the
same data with each pass, which is just wasteful. File system de-dup
targets this issue from the storage controllers, and database backup
schemes, various of which archive a snapshot of the database and then
continuously archive the change logs, are another way to reduce the
volume of data being backed up.

Splitting HBVS volumes has the added advantage of a short window in
which to quiesce the applications, and a consistent view of the entire
storage. That's rather than the approach BACKUP uses: capturing
whatever is on the storage device under a particular name when the
BACKUP directory traversal reaches it, which involves either skewed
data or a much longer window with the apps quiesced.

For existing users, this stuff is less of an issue. We're here, we're
familiar with and using what's available. And various of us OpenVMS
folks can be unfamiliar with what is currently available on other
platforms; many of us are specialized. For new users and for wholly
new apps seeking to select a deployment platform to target, or where
there's another platform that's been selected as the common target,
these sorts of omissions can be seen to increase the cost of deploying
on or even maintaining OpenVMS.

Jan-Erik Soderholm

Aug 17, 2017, 1:56:31 PM
On 2017-08-17 at 18:28, Stephen Hoffman wrote:

> On 2017-08-17 02:14:49 +0000, IanD said:
>
>>
>> I have unverified assumptions that RDB backups are multi-threaded, but
>> rdb has the luxury of being able to work on multiple storage areas at the
>> same time, very different to RMS I guess, that has to troll sequentially
>> over a volume?
>
> IIRC/AFAIK, the Rdb backup and restoration tools were a fork of the OpenVMS
> BACKUP source code, but have most certainly diverged. The RMU/BACKUP
> processes I've looked at — which are not exactly current versions — were
> also single-threaded when last examined.

The default Rdb RMU/BACKUP is single-process, multi-threaded. It can
read multiple storage areas (database files) at the same time while
writing to multiple tape stations in parallel. Still single CPU.

According to the docs, there is also a "parallel backup" option for Rdb.
In that case the workload is spread over multiple processes that can
also be running on different nodes in a cluster.

> While an entirely different implementation to how Rdb can distribute the
> constituent files of a database across multiple sector-addressable storage
> devices,...

Even with the files on the same storage device (like a SAN), the option
to write to multiple tape stations in parallel can still be valuable.

Now, backups in general are probably moving away from tape stations...

Hans Vlems

Aug 17, 2017, 4:58:32 PM
Not entirely. Enter a customer with more data than the backup tape drives could handle. So the backup sets landed on a large disk pool. After a while the disk pool ran out of space. Disk space is cheap. The most recent data on tape was more than three months old.
Then cryptolocker entered the headlines, hitting everything stored on disk.
Suddenly tape storage got immensely popular.
Hans