Compaq not as bad as Andrew says (wish?)


Rudolf Wingert

May 25, 2000

Hello,

yesterday I read that Compaq ranks second in the HPTC market
($5.8 billion). HP is ranked first with 23%, followed by Compaq, SGI,
and IBM; Sun follows in fifth place (12%). I think $5.8 billion is not
the whole market, but the most profitable part of it.

Regards Rudolf Wingert

Terry C. Shannon

May 25, 2000

"Rudolf Wingert" <w...@fom.fgan.de> wrote in message
news:2000052505...@fom.fgan.de...

According to IDC numbers, Compaq is Number Two in HPTC by less than one point
(the faltering SGI being Number One).

Compaq should gain Number One ranking in 2H00.

Keith Brown

May 25, 2000

Forgive my ignorance, what is HPTC?
--
Keith Brown
kbro...@usfamily.net

Terry C. Shannon

May 26, 2000

"Keith Brown" <kbro...@usfamily.net> wrote in message
news:392DD22E...@usfamily.net...

Easy... High Performance Technical Computing!

cheers,

terry s

David Mathog

May 26, 2000

In article <Fv58C...@world.std.com>, "Terry C. Shannon" <sha...@world.std.com> writes:
>> >
>> > Compaq should gain Number One ranking in 2H00.
>>
>> Forgive my ignorance, what is HPTC?
>
>Easy... High Performance Technical Computing!
>

By definition though, none of this is OpenVMS. It refers to the huge
Tru64 and Linux/Alpha farms that places like Celera run. Go visit
the HPTC pages and nary a word about OpenVMS will you find.

http://www.digital.com/hpc/

Compaq is absolutely not interested in selling OpenVMS for this market. If
they were, they would keep the compiler features on par with Tru64 (the C
compiler for Tru64 has profile-based optimization, and all the libraries are
available compiled to take advantage of the latest processors). They would
also deal with the lack of automatic file caching, which no amount of RMS
fiddling will make up for; such caching leads to dramatic increases in
throughput in most instances. (Data integrity is not much of an issue in this
market - most of the computing is data in, crunch, data out, and if the power
fails in the middle you just start over again.)

The irony is that OpenVMS was the HPTC workhorse of the 80's, and it was
probably that market which enabled it to grow into the "Enterprise" class
OS that Compaq says it is today.

Regards,

David Mathog
mat...@seqaxp.bio.caltech.edu
Manager, sequence analysis facility, biology division, Caltech
**************************************************************************
* RIP VMS *
**************************************************************************

David A Froble

May 26, 2000

David Mathog wrote:
>
> By definition though, none of this is OpenVMS. It refers to the huge
> Tru64 and Linux/Alpha farms that places like Celera run. Go visit
> the HPTC pages and nary a word about OpenVMS will you find.
>
> http://www.digital.com/hpc/
>
> Compaq is absolutely not interested in selling OpenVMS for this market. If
> they were they would keep the compiler features on par with Tru64 (the C
> compiler for Tru64 has profile based optimization and all the libraries are
> available compiled to take advantage of the latest processors). They would
> also deal with the lack of automatic file caching, which no amount of RMS
> fiddling will make up for and leads to dramatic increases in throughput in
> most instances. (Data integrity is not much of an issue in this market -
> most of the computing is data in, crunch, data out, and if the power fails
> in the middle you just start over again.)
>
> The irony is that OpenVMS was the HPTC workhorse of the 80's, and it was
> probably that market which enabled it to grow into the "Enterprise" class
> OS that Compaq says it is today.

The problem is not with VMS, but with C programs written to use Unix
capabilities. Were the same application written to use VMS's capabilities,
it should match the performance of T64, and many times exceed it. I'd rather
use global sections than file caching in some cases. Too many capabilities to
start a list here.

As for the evolution of VMS, the earliest systems in 1978 were more suited to
scientific computing. It was the input of the business users that helped VMS
grow into an enterprise class OS. Things like BACKUP, extensive print/batch
queue capabilities, the data integrity you don't seem to care about.

Dave

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. Fax: 724-529-0596
170 Grimplin Road E-Mail: da...@tsoft-inc.com
Vanderbilt, PA 15486

Bill Todd

May 28, 2000

David A Froble <da...@tsoft-inc.com> wrote in message
news:392EB8D4...@tsoft-inc.com...

...

> The problem is not with VMS, but with C programs written to use Unix
> capabilities. Should the same application be written to use VMS's
> capabilities, it should match the performance of T64, and many times
> exceed T64.

Despite appearances, I'm enough of a VMS bigot myself to be inclined to
agree that an optimized application on VMS should usually be able to match
or exceed the performance of an optimized application on <pick your Unix>.
However, the skill set required to optimize an application on VMS is in
radically shorter supply than the skill set required to optimize an
application on most Unixes, and if you're willing to settle for a
good-but-not-literally-optimal approach (which in the real world is almost
always the case) the gap in availability of skill sets widens even more.
And it is reasonably arguable that the complexity of optimization, or even
near-optimization, on VMS is greater in an absolute sense, independent of
familiarity to the masses.

So the real problem is that VMS doesn't provide an environment in which the
skill sets of the people who write and use these applications can be used
effectively: if it did, then it might well remain an overall-effective
solution for them.

> I'd rather use global sections than file caching in some cases.

Care to list them? Even for simple cross-application read-caching, global
sections are a bit of a pain compared to a central system cache: they must be
set up, torn down, and sized explicitly on an individual basis, and they don't
balance activity across independent application sets to achieve the best
overall system throughput. For write-back caching, add the need to figure out when
and how often to flush them (Unix typically flushes automatically by default
at 30-second intervals, and some variants tweak this mechanism to attain
improved on-disk file contiguity and eliminate disk writes entirely for
files deleted before they're flushed). If you're willing to use 'locate
mode' access to operate on the buffer contents directly instead of through
RMS's normal record interface (not sure you could do this for write access
in a global buffer, though) you could assert that this saves a copy
operation, and in any event you save a system call per 'record' - but
crossing the system interface is a lot less expensive than it used to be.
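The contrast here - explicit shared mappings versus a transparent system cache - can be sketched on a modern system with mmap, which is roughly the Unix analogue of a VMS global section. This is a hedged illustration in present-day Python, not code from either system under discussion:

```python
import mmap
import os
import tempfile

# Create a small data file to stand in for a shared record file.
fd, path = tempfile.mkstemp()
os.write(fd, b"record one\nrecord two\n")
os.close(fd)

# Explicit shared mapping: the application sets it up, sizes it, and
# tears it down itself - much like a VMS global section.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)   # map the whole file, shared by default
    first = bytes(mm[:10])          # operate on the buffer in place
    mm.close()

# Transparent caching: an ordinary buffered read, where the OS page
# cache decides what stays resident and when dirty pages are flushed.
with open(path, "rb") as f:
    data = f.read()

os.remove(path)
```

The mapped path avoids a copy through a read buffer but puts setup, sizing, and teardown on the application; the buffered path costs a copy and a system call per read, but the cache management is free.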

> Too many capabilities to start a list here.
>
> As for the evolution of VMS, the earliest systems in 1978 were more suited
> to scientific computing. It was the input of the business users that helped
> VMS grow into an enterprise class OS. Things like BACKUP, extensive
> print/batch queue capabilities, the data integrity you don't seem to care
> about.

I can't remember a time, even in 1978, when anyone could have said that VMS
wasn't as concerned about data integrity as it presumably remains today.
And while backup utilities had a checkered history dating back to the 11, it
wasn't for lack of trying to make them as solid as possible (some
implementors were just less experienced than later ones).

It's true that VMS became *more* suited to business computing in areas other
than data integrity as time went on. But I don't see that it ever became
*less* suited to scientific computing - save in the areas of platform
price-competitiveness, a subsequent lapse into unfamiliarity, and then, due
to the resulting lack of market interest, a less-than-aggressive attitude
toward matching new features developed elsewhere.

- bill

JF Mezei

May 28, 2000

Bill Todd wrote:
> However, the skill set required to optimize an application on VMS is in
> radically shorter supply than the skill set required to optimize an
> application on most Unixes, and if you're willing to settle for a
> good-but-not-literally-optimal approach

I disagree significantly. There is a HUGE basin of available VMS expertise.
The problem is that due to lack of VMS work (because customers have left or
stopped improving their VMS systems over the years), most have found work
elsewhere and don't advertise their VMS capabilities nor seek work in VMS
because of its "Palmer is killing VMS" image. VMS is still seen as a "legacy"
expertise while "NT" is seen as "hire anyone who has written "NT" in their CV".

Once you have a VMS system that is tuned and operating nicely with no software
upgrades, there isn't much work needed to keep it running, and few very
experienced folks would be interested in such work anyways.

But set up a challenging VMS shop with serious application
deployment/development, and you might get many of those ex-VMS experts out
of the woodwork.

John E. Malmberg

May 28, 2000

Bill Todd wrote:
> However, the skill set required to optimize an application on VMS is in
> radically shorter supply than the skill set required to optimize an
> application on most Unixes, and if you're willing to settle for a
> good-but-not-literally-optimal approach

Actually the skill set required to optimize an application on any platform
is in short supply.

Most places find it faster or cheaper to buy faster servers or add more
servers instead of actually paying for quality improvements to their
systems. And proper optimization can take time during which you need the
applications running.

The exception to this is when you hit a wall where you cannot purchase
faster hardware. Then you must make it work. That is where the real
expertise is needed.


J.F. Mezei wrote:

> I disagree significantly. There is a HUGE basin of available VMS expertise.
> The problem is that due to lack of VMS work (because customers have left or
> stopped improving their VMS systems over the years), most have found work
> elsewhere and don't advertise their VMS capabilities nor seek work in VMS
> because of its "Palmer is killing VMS" image. VMS is still seen as a
> "legacy" expertise while "NT" is seen as "hire anyone who has written "NT"
> in their CV".

Companies with good pay and work environments tend to retain top people.
They can take the time to hire promising beginners and train them. Thus
they do not need to hire "experts", they keep them.

At the sites I have been at, for VMS work we never recruited specifically for
VMS people; we hired programmers and, with little work, oriented them on VMS.

The experienced people that we hired or contracted were always from "word of
mouth", not professional recruiting.

And there are plenty of advertisements that I have seen for NON-VMS work
where they state that experience on VMS is one of the preferred credentials.
Very easy to find on any job board.

-John
wb8...@qsl.network

Bill Todd

May 28, 2000

Definitely a valid point, unlike JF's, which in asserting that getting VMS
expertise into a *single* shop might not be too difficult avoided addressing
the point he was supposedly disagreeing with, which was that the *absolute*
supply of VMS expertise is considerably smaller than that of Unix expertise,
no matter how you slice it.

Not to mention the related point that you need *more* expertise to get good
performance out of VMS than out of Unix, which gives most applications good
performance right out of the box.

And of course that last dovetails neatly with your own observation that
people seldom bother with much performance optimization: for such typical
non-performance-optimized applications, Unix therefore makes more efficient
use of the hardware than VMS does, hence is correctly perceived as more
cost-effective.

- bill

John E. Malmberg <wb8...@qsl.net> wrote in message
news:066e01bfc8e6$4c0ac230$020a...@xile.realm...

David A Froble

May 28, 2000

Not going to let that one slide by.

Bill Todd wrote:
>
> Definitely a valid point, unlike JF's, which in asserting that getting VMS
> expertise into a *single* shop might not be too difficult avoided addressing
> the point he was supposedly disagreeing with, which was that the *absolute*
> supply of VMS expertise is considerably smaller than that of Unix expertise,
> no matter how you slice it.
>
> Not to mention the related point that you need *more* expertise to get good
> performance out of VMS than out of Unix, which gives most applications good
> performance right out of the box.

On what do you base this claim? My perspective is that VMS has a better
development environment, thus allowing better applications with less
expertise. Good VMS applications give good performance right out of the box.
Since the development environment is friendly, good VMS applications are
rather easy to produce.

> And of course that last dovetails neatly with your own observation that
> people seldom bother with much performance optimization: for such typical
> non-performance-optimized applications, Unix therefore makes more efficient
> use of the hardware than VMS does, hence is correctly perceived as more
> cost-effective.

Again, a rationalization with no supporting facts. On what do you base the
claim that Unix makes more efficient use of the hardware? Your posts are
starting to sound like wishful opinion, or outright trolls. For someone who
catches others making claims without substantiating them, you seem to be
following right in their footsteps.

You're probably going to now find a post of mine that did this. Fine. I'm
probably guilty. However, I will issue this challenge: an application that
does not favor either environment, if there is such a thing, will be more
easily and quickly developed on VMS than on Unix. I'm willing to bet a buck
on it, and do the VMS side.

Bill Todd

May 28, 2000

Apologies - it's been 13+ years since I used RMS, and considerably longer
since I've thought about some of its more obscure options.

Turns out locate mode is not applicable to global buffers at all, read or
write - in fact, it only works for read access even when using local
buffers. So if you want shared buffers, you can't avoid a copy operation,
whether you're using VMS or Unix.

RMS does make use of global buffers fairly easy, though, even if it isn't
transparent and the space can't be used as flexibly as a central system
cache can.

And returning to an earlier point, the documentation for
read-ahead/write-behind states that it applies only to non-shared Sequential
files (my recollection, which I couldn't verify in a quick search, is that
you still may be able to provide multiple buffers for indexed files and
obtain some LRU caching value - e.g., for upper index levels - but not
read-ahead operation).

- bill

Bill Todd <bill...@foo.mv.com> wrote in message
news:8grolb$jg1$1...@pyrite.mv.net...


> David A Froble <da...@tsoft-inc.com> wrote in message
> news:392EB8D4...@tsoft-inc.com...
>
> ...
>
> > The problem is not with VMS, but with C programs written to use Unix
> > capabilities. Should the same application be written to use VMS's
> capabilities,
> > it should match the performance of T64, and many times exceed T64.
>
> Despite appearances, I'm enough of a VMS bigot myself to be inclined to
> agree that an optimized application on VMS should usually be able to match
> or exceed the performance of an optimized application on <pick your Unix>.

> However, the skill set required to optimize an application on VMS is in
> radically shorter supply than the skill set required to optimize an
> application on most Unixes, and if you're willing to settle for a

Bill Todd

May 28, 2000

David A Froble <da...@tsoft-inc.com> wrote in message
news:3931972C...@tsoft-inc.com...

> Not going to let that one slide by.
>
> Bill Todd wrote:
> >
> > Definitely a valid point, unlike JF's, which in asserting that getting
> > VMS expertise into a *single* shop might not be too difficult avoided
> > addressing the point he was supposedly disagreeing with, which was that
> > the *absolute* supply of VMS expertise is considerably smaller than that
> > of Unix expertise, no matter how you slice it.
> >
> > Not to mention the related point that you need *more* expertise to get
> > good performance out of VMS than out of Unix, which gives most
> > applications good performance right out of the box.
>
> On what do you base this claim?

I'm afraid that in your zeal to defend the honor of VMS, you seem to have
lost sight of the context in which this part of the discussion evolved (even
though you participated in it yourself): the very specific area of Unix's
automated file caching vs. VMS's lack of same.

That this gives typical applications better performance on Unix is a
no-brainer. It has other consequences that some people here may believe are
pernicious, but lack of performance is not one of them.

> My perspective is that VMS has a better development environment, thus
> allowing better applications with less expertise.

There are areas in which I would not dispute this assertion, but file-system
performance is not one of them (at least in comparison with Unixes that
safely defer meta-data updates as well as user data updates, whether by use
of logs or 'soft update' mechanisms).

> Good VMS applications give good performance right out of the box.

'Good' is a subjective term, so let's just say that default, unoptimized
Unix file system performance is noticeably better than default, unoptimized
VMS file system performance for typical applications, mostly due to Unix's
default system caching. (My vague recollection is that VMS can be configured
to read-cache at the disk level, possibly only in non-clustered environments,
though that may no longer be a restriction. That helps some, although the
path to such a cache is considerably longer than that to a file-level cache,
and in any case it doesn't cover write-back caching.)

And at least some Unix file systems (SGI's and Veritas' - don't happen to
know details about others in this area) allow the cache to be bypassed
('direct I/O') when desired - e.g., to avoid copying overheads on large
transfers. Veritas, in fact, does this automatically above a specifiable
transfer size (256 KB by default), and SGI's XFS may as well, avoiding any
special application coding to cover this case.
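The cache-bypass path described here can be sketched on a present-day system. O_DIRECT below is the Linux flag, named purely to illustrate the mechanism, not the Veritas or XFS implementation; the alignment requirement and the fallback for file systems that reject it are part of the sketch:

```python
import mmap
import os
import tempfile

BLOCK = 4096  # typical logical block size; direct I/O transfers must be aligned

# Stage a block-sized test file with ordinary buffered writes.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * BLOCK)
os.close(fd)

try:
    # O_DIRECT (a Linux-specific flag) bypasses the page cache entirely;
    # the buffer must be block-aligned, which an anonymous mmap guarantees
    # since it is page-aligned.
    f = os.open(path, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)
    n = os.readv(f, [buf])
    os.close(f)
    data = bytes(buf[:n])
except (AttributeError, OSError):
    # Not Linux, or a file system that rejects O_DIRECT (e.g. tmpfs):
    # fall back to an ordinary cached read.
    with open(path, "rb") as f2:
        data = f2.read()

os.remove(path)
```

The point of the automatic thresholds Bill mentions is precisely to spare applications this ceremony: above the cutoff the file system takes the direct path on its own, with no special coding.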

> Since the development environment is friendly, good VMS applications are
> rather easy to produce.
>
> > And of course that last dovetails neatly with your own observation that
> > people seldom bother with much performance optimization: for such typical
> > non-performance-optimized applications, Unix therefore makes more
> > efficient use of the hardware than VMS does, hence is correctly perceived
> > as more cost-effective.
>
> Again, a rationalization with no supporting facts. On what do you base the
> claim that Unix makes more efficient use of the hardware?

I obviously should have repeated, about every other sentence, the fact that
these observations were made in the context of the file-caching comment that
originated in David Mathog's post. In that context, Unix indubitably makes
more efficient default use of the hardware in any environment where file I/O
is significant to performance.

> Your posts are starting to sound like wishful opinion, or outright trolls.
> For someone who catches others making claims without substantiating them,
> you seem to be following right in their footsteps.

No, I'm just letting David Mathog's observations about relative performance
stand. One specific example he gave was compilation performance, but I
suspect he could supply others (and not being a Unix user I can't, though I
know enough about what's happening under the covers not to have any
hesitation about drawing conclusions about performance from it, given even
moderate external confirmation).

If you want to learn something, pay attention. If you'd rather just call
names, go ahead.

- bill

>
> You're probably going to now find a post of mine that did this. Fine. I'm
> probably guilty. However, I will issue this challenge. An application that
> does not favor either environment, if there is such, will be more easily
> and quickly developed on VMS than on Unix. I'm willing to bet a buck on it,
> and do the VMS side.
>

John E. Malmberg

May 28, 2000

Bill Todd <bill...@foo.mv.com> wrote
in message news:8gs3ga$jbg$1...@pyrite.mv.net...

> Definitely a valid point, unlike JF's, which in asserting that getting VMS
> expertise into a *single* shop might not be too difficult avoided addressing
> the point he was supposedly disagreeing with, which was that the *absolute*
> supply of VMS expertise is considerably smaller than that of Unix expertise,
> no matter how you slice it.

Ok, let's address it. Computer systems, being physical machines, are still
ultimately bound by the laws of physics. What kind of expert would be
better?

One who knows those laws but not VMS, and has the discipline to RTFM and
actually understand it?

Or one who knows VMS from experience, but does not actually understand the
physics?

> Not to mention the related point that you need *more* expertise to get good
> performance out of VMS than out of Unix, which gives most applications good
> performance right out of the box.

Yes UNIX can give most (not all) applications good performance. But when
you understand why, it gives you a better perspective on when to recommend
each platform.

However *more* expertise is a term I would not use. Most of the techniques
I used to use with VMS tuning came from formulas in an IBM VM/SP tuning
guide. That was when it was possible to get data to/from your paging disks
faster than the CPU could use it. Now CPUs are outrunning the disks by too
much.

Most of the tuning parameters of OpenVMS are well documented. Someone
familiar with their application and computer systems can RTFM and find this
out.

I went to several UNIX experts to find out why all systems of a particular
brand suddenly died one day. They did not know. I found out why by using an
Ethernet monitor.

It turned out to be the equivalent of exhausted non-paged pool: a default
configuration was causing the systems to download the routing tables from
the corporate router. I guess for "performance" they were stored in the
non-paged pool.

Yes, I talked to the vendor's support line. For my efforts I got sent
copies of two articles that had nothing to do with the problem statement.

I contend that true tuning expertise in other platforms is as rare as it is
in VMS. A higher market share for UNIX and M$SOFT attracts more pretenders.

> And of course that last dovetails neatly with your own observation that
> people seldom bother with much performance optimization: for such typical
> non-performance-optimized applications, Unix therefore makes more efficient
> use of the hardware than VMS does, hence is correctly perceived as more
> cost-effective.

I do not know if that perception is always correct. :-)

-John
wb8...@qsl.network


Keith Brown

May 28, 2000

Bill Todd wrote:
>
> Apologies - it's been 13+ years since I use RMS, and considerably longer
> since I've thought about some of its more obscure options.
>
> Turns out locate mode is not applicable to global buffers at all, read or
> write - in fact, it only works for read access even when using local
> buffers. So if you want shared buffers, you can't avoid a copy operation,
> whether you're using VMS or Unix.
>
> RMS does make use of global buffers fairly easy, though, even if it isn't
> transparent and the space can't be used as flexibly as a central system
> cache can.
>
> And returning to an earlier point, the documentation for
> read-ahead/write-behind states that it applies only to non-shared Sequential
> files (my recollection, which I couldn't verify in a quick search, is that
> you still may be able to provide multiple buffers for indexed files and
> obtain some LRU caching value - e.g., for upper index levels - but not
> read-ahead operation).
>
> - bill

Bill,

If you use HSxx controllers on your systems, whether it be Unix or VMS (we
use them on both at my shop), the file caching issue is moot because the
controller does it.

--
Keith Brown
kbro...@usfamily.net

Keith Brown

May 28, 2000

Bill Todd wrote:
>
> Definitely a valid point, unlike JF's, which in asserting that getting VMS
> expertise into a *single* shop might not be too difficult avoided addressing
> the point he was supposedly disagreeing with, which was that the *absolute*
> supply of VMS expertise is considerably smaller than that of Unix expertise,
> no matter how you slice it.
>
> Not to mention the related point that you need *more* expertise to get good
> performance out of VMS than out of Unix, which gives most applications good
> performance right out of the box.
>
> And of course that last dovetails neatly with your own observation that
> people seldom bother with much performance optimization: for such typical
> non-performance-optimized applications, Unix therefore makes more efficient
> use of the hardware than VMS does, hence is correctly perceived as more
> cost-effective.
>
> - bill
>
> John E. Malmberg <wb8...@qsl.net> wrote in message
> news:066e01bfc8e6$4c0ac230$020a...@xile.realm...
> > Bill Todd wrote:
> > > However, the skill set required to optimize an application on VMS is in
> > > radically shorter supply than the skill set required to optimize an
> > > application on most Unixes, and if you're willing to settle for a
> > > good-but-not-literally-optimal approach
> >

At my shop we can't seem to find good Unix people; we have looked for years.
I can't say the same for VMS. I'm sure some will respond that we must not
offer enough $, but we do offer what the VMS guys get.

--
Keith Brown
kbro...@usfamily.net

Bill Todd

May 28, 2000

Keith Brown <kbro...@usfamily.net> wrote in message
news:3931BE9C...@usfamily.net...

...

> Bill,
>
> If you use HSxx controllers on your systems, whether it be Unix or VMS (we
> use them on both at my shop), the file caching issue is moot because the
> controller does it.

Hardware to the rescue! But as always that's a relatively expensive way to
obtain performance when software can do the job.

Not that it always can: when you want guaranteed persistence coupled with
fast writes at high volume, stable write-back caching is the only real
solution (as long as there are lulls during which you can dump the data to
disk). But you need to double up on the controllers, each with stable write
cache, and have them communicate to avoid losing dirty data due to a cache
failure (assuming you're using some kind of redundancy at the actual disk
level to guard against single points of failure). That gets expensive
compared to software alternatives that require no special hardware, save for
a sliver of NVRAM somewhere to make RAID recovery fast after an
interruption - which you actually only need if you can't tolerate the
latency of writing a log record for each write request that can't be
batch-logged with other contemporaneous write requests.

Until hardware RAID (especially including copious amounts of stable cache in
a no-single-point-of-failure dual configuration) prices come *'way* down,
systems that ensure file system integrity while optionally trading off
up-to-the-second persistence for overall better performance will remain
popular for applications that demand no stronger guarantees. This may not
make a great deal of sense, given how completely the cost of operating and
managing systems dwarfs the cost of purchasing them, but up-front system
price shows little indication of becoming irrelevant in typical environments
in spite of this.
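The software trade-off described above comes down to when an application forces its writes to stable storage. A minimal Unix-style sketch, in modern Python purely for illustration:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # Default write-back: write() returns as soon as the data is in the
    # system cache; the OS flushes it to disk later, on its own schedule.
    os.write(fd, b"order 1001: committed\n")

    # Forcing persistence: fsync blocks until the data is on stable
    # storage - the software analogue of a controller's stable write
    # cache, paid for in write latency instead of hardware.
    os.fsync(fd)
finally:
    os.close(fd)

with open(path, "rb") as f:
    recovered = f.read()
os.remove(path)
```

An application that needs up-to-the-second persistence calls fsync on every logical commit; one that can tolerate losing the last few seconds of work simply skips it and lets the write-back cache run at full speed.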

- bill

>
> --
> Keith Brown
> kbro...@usfamily.net

Bill Todd

May 29, 2000

Keith Brown <kbro...@usfamily.net> wrote in message
news:3931BF74...@usfamily.net...

...

> At my shop we can't seem to find good Unix people; we have looked for
> years. I can't say the same for VMS. I'm sure some will respond that we
> must not offer enough $, but we do offer what the VMS guys get.

Interesting observation. Without pretending that it constitutes more than
anecdotal evidence, one might guess at a couple of reasons.

1) Someone professing to be competent in VMS is more likely to be close to
the truth than someone professing to be competent in Unix (hell, virtually
*everyone* who's ever used any form of Unix more than truly superficially
likely thinks they can profess to competence, whereas a casual VMS user may
have a bit more respect for what 'competence' really means).

2) Supply and demand dictate that the market for competence is hotter for
Unix people than for VMS people (by now, people who *want* to work with VMS
may be getting happy to find *any* reasonable position - note the comment
elsewhere that turnover in one company's VMS staff is virtually nil).

Bill Todd

May 29, 2000

John E. Malmberg <wb8...@qsl.net> wrote in message
news:sj3b2g...@corp.supernews.com...

...

> Yes UNIX can give most (not all) applications good performance. But when
> you understand why, it gives you a better perspective on when to recommend
> each platform.

Of course. But for *most* applications, as you note, Unix provides an
environment that does not demand such understanding (or any particularly
significant expertise) to get relatively good performance. (See also David
Mathog's recent thread on 'RMS tuning versus file caching' for a specific
example in addition to the compilation speed issue he raised earlier.)

...

> Most of the tuning parameters of OpenVMS are well documented. Someone
> familiar with their application and computer systems can RTFM and find
> this out.

But in most cases on Unix they don't have to.

The point (which I overlooked in my initial comments about the relative
availability of 'expertise', but which you brought up yourself) is that
developers with *minimal* expertise can create typical (disk-bound - this
discussion started as a file system issue) applications that perform better
on Unix than equivalent applications created by similarly-inexpert VMS
developers perform running on VMS, due to the difference in default file
system approaches taken by the two systems.

So the competition is this: people who were exposed to Unix during their
schooling can create Unix applications that perform well without having to
RTFM, whereas to create an application on VMS a developer *first* must get
at least minimally acquainted with the system (since s/he likely did not
encounter it in school) and *then* must RTFM - after first searching out the
right one(s) in the 5-foot shelf - if s/he wants the application to perform
as well as it would on Unix.

Which system are developers likely to gravitate toward?

>
> I went to several UNIX experts to find out why all systems of a particular
> brand suddenly died one day. They did not know. I found out why by using
> an Ethernet monitor.
>
> It turned out to be the equivalent of exhausted non-paged pool: a default
> configuration was causing the systems to download the routing tables from
> the corporate router. I guess for "performance" they were stored in the
> non-paged pool.
>
> Yes, I talked to the vendor's support line. For my efforts I got sent
> copies of two articles that had nothing to do with the problem statement.

The above is an interesting commentary, but seems completely irrelevant to
application development (which if you'll look back through the earlier posts
in this thread is the context of the discussion, especially as related to
obtaining good performance from the file systems).

It is related to the question of 'expertise' (though in *system*-level
issues), but I think you have convinced me that 'expertise' is not
particularly important to the application development discussion, except as
something that most developers lack.

>
> I contend that true tuning expertise in other platforms is as rare as it
> is in VMS. A higher market share for UNIX and M$SOFT attracts more
> pretenders.

Possibly. *Application* tuning expertise is arguably less necessary to
obtain a given level of performance in Unix environments than in VMS
environments, so the rarer its existence is, the worse VMS looks (since it
needs it more than Unix).

*System* tuning is a rather different animal, and my guess would be that
real experts are extremely rare for virtually any system (though
self-professed Unix and NT experts may not be). If (as per your example)
Unix systems are *frequently* prone to handle unexpected situations less
than gracefully, then this represents a real VMS strength (since lacking
such expertise VMS continues to run where Unix does not), just not one that
is relevant to this particular thread.

- bill

>
> > And of course that last dovetails neatly with your own observation that
> > people seldom bother with much performance optimization: for such
> > typical non-performance-optimized applications, Unix therefore makes
> > more efficient use of the hardware than VMS does, hence is correctly
> > perceived as more cost-effective.
>

Rob Young

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
In article <3931BF74...@usfamily.net>, Keith Brown <kbro...@usfamily.net> writes:

>
> At my shop we can't seem to find good Unix people, we have
> looked for years. I can't say the same for VMS. I'm sure will
> respond that we must not offer enough $ but we do offer what the
> VMS guys get.
>

VMS folks are hard to find, but I also know Unix admins are
very difficult to find. I know two contractors who are
in at a decent-sized shop (multiple IBM RS/6000 S80s), and management
would prefer full-time sysadmins but can't find them. A lot of that
going around. Want to flush admins out of the weeds, regardless
of platform? Contract for them.

Could they get full-time sysadmins? Sure. Unfortunately, the
salary that requires puts their sysadmins' pay higher than
management's/directors'. That can't happen, or you get unhappy
directors/management, so the only way out of the catch-22 is to
hire contractors and hide the cost in POs.

Rob


Larry Kilgallen

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
In article <8gspck$pr3$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:

> Until hardware RAID (especially including copious amounts of stable cache in
> a no-single-point-of-failure dual configuration) prices come *'way* down,
> systems that ensure file system integrity while optionally trading off
> up-to-the-second persistence for overall better performance will remain
> popular for applications that demand no stronger guarantees. This may not
> make a great deal of sense, given how completely the cost of operating and
> managing systems dwarfs the cost of purchasing them, but up-front system
> price shows little indication of becoming irrelevant in typical environments
> in spite of this.

Organizational voodoo dictates in many cases that capital and personnel
expenditures come from separate buckets. Thus it makes sense on the
micro scale (following company procedure so the individual manager
can advance) but not on the macro scale (good of the company overall).

Larry Kilgallen

unread,
May 29, 2000, 3:00:00 AM5/29/00
to

So we have the free market economy in action, scarcity/demand drives
prices, etc. Some directors/management are psychologically unwilling
to accept this, since they thought _they_ were supposed to be the ones
in demand. But the company they represent pays the prevailing price
anyway, or does without.

Life is tough when techies prevail.

Goku

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
Hey guys

I do in fact work as a VMS op for Compaq.
They aren't getting rid of this, just not extending it with new
companies.

Thanks




steven...@quintiles.com

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Bill,
Your comments are (like a lot of what we all say here in comp.os.vms/info-vax)
very subjective.

Whilst it may be tricky to get the last gasp of performance out of a VMS system,
it is likely the same on _all_ platforms. This is why we are going through the
present cycle of "we need more horsepower to do this job so we'll buy another/a
bigger box".

This is also yet another reason why bloatware from a certain company is accepted
by the industry. It needs more CPU and more disk so the PHM's view is that
they'll just buy more disks and faster systems.

Nobody in my experience tries to get the last bit of performance out of any
application which they have bought in. A very limited number of operating
system writers or application writers probably go the extra bit to get the
maximum performance. One also has to balance out:
- the cost in terms of manpower to tune the application that final little bit;
- the cost of that faster system or that extra system;
- the cost in the future of losing something because you've taken a shortcut to
get performance and it's compromised either the system or the application data.

Most managers in most companies are forced to go for the bigger system.

Steve.

Bill Todd wrote:
>>> Not to mention the related point that you need *more* expertise to get good
>>> performance out of VMS than out of Unix, which gives most applications good
>>> performance right out of the box.

And of course that last dovetails neatly with your own observation that

David Mathog

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
In article <8gs3ga$jbg$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:
>
>And of course that last dovetails neatly with your own observation that
>people seldom bother with much performance optimization: for such typical
>non-performance-optimized applications, Unix therefore makes more efficient
>use of the hardware than VMS does, hence is correctly perceived as more
>cost-effective.

Exactly. And it all boils down to (essentially) a SINGLE difference
between the OSes. On a typical lightly loaded, memory-rich workstation,
Linux (and probably most other Unices, but I can't say for sure)
automatically utilizes the unused portions of memory to cache file
operations. This results in huge increases in write performance and a
substantial increase in read performance in a typical workstation job mix,
which consists of a lot of small files being written, read, and then often
deleted. Only when file sizes reach up into the hundreds of megabytes or
the system becomes heavily loaded do the returns for this automatic system
break down. At that point RMS tuning on OpenVMS can outperform the Unix
workstations. However, in the normal mix of things for lightly loaded
machines, the "out of the box" Unix configurations do "disk I/O" about an
order of magnitude faster than VMS does. That's in quotes because
oftentimes the data never hits the disk. You can tune VMS processes and
programs to give better performance, but on lightly loaded systems at
best you can get really close on the read speeds, and you can never come
anywhere close on the write speeds (defining "write" as "program wrote data
to 'disk' successfully", without worrying about whether the data ever hit
the physical disk). And you had to work at it. Whereas on Linux it just
happened automatically. Consequently, something as simple as

% tar xf whatever.tar
% cd whatever
% make

runs blindingly fast on Unix, and crawls on VMS. And here I'm talking
about two nearly identical DS10s, one running RedHat 6.2 and the other
OpenVMS 7.2-1. (It doesn't help that the Elsa graphics card and DECterms
don't scroll very quickly - sometimes you're just waiting for the text to
scroll through.)

Typically in these "lightly loaded" environments, the safety of having the
data hit the disk is of little importance. So VMS really is slow and Linux
(Unix) is fast for typical programmer/technical workstation workloads.
And Linux gets this boost without anybody having to twiddle RMS parameters,
which is a good thing, because, well, WHY NOT make good use of the unused
memory in a system? Lord knows you pay enough for it on a DS10!
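For what it's worth, the small-files effect described above is easy to reproduce on any stock Unix box. The sketch below is not from the thread; it's a minimal illustration (plain shell, and the 200-file count and payload are arbitrary choices) that times a burst of small-file creations twice: once letting the page cache absorb the writes, and once forcing each file to stable storage with sync(1), which approximates what a cache-less write path costs.

```shell
#!/bin/sh
# Sketch of the "many small files" workload: buffered vs. forced-to-disk.
dir=$(mktemp -d) || exit 1

echo "buffered (page cache absorbs the writes):"
time sh -c 'i=0; while [ $i -lt 200 ]; do
              echo "small payload" > "$0/f$i"; i=$((i + 1));
            done' "$dir"

echo "forced to disk (sync after every file):"
time sh -c 'i=0; while [ $i -lt 200 ]; do
              echo "small payload" > "$0/g$i"; sync; i=$((i + 1));
            done' "$dir"

# Sanity check: both runs together should have created 400 files.
count=$(ls "$dir" | wc -l | tr -d ' ')
echo "$count files created"
rm -rf "$dir"
```

On a lightly loaded machine the first loop typically finishes far sooner than the second, which is the gap the post is describing.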

Rob Young

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

So things aren't moving fast enough for you?

Caching gets much improved in the next go round of VMS (unless
I've mixed up roadmaps or am misremembering). How did this happen?
Senior engineer at a DECUS explained that: "remember, VIOC was just
a stop-gap measure... it wasn't supposed to be around this long"
or something similar. Release notes showed new sysgen parameters
for write-behind caching and write delay. Something to look forward
to. So how did this happen? Fork in the road called Spiralog, from
what I understand.

So maybe in a year or less, we put the limited caching behind us
for those that are at 7.3 and higher and maybe move on to complaining
that VMS is so primitive because a lot still has to be done
at a command line. How primitive. MS-DOS is dead... I want things
to look and feel just like Windows.

NOT! Cut my pinkies off first.

So what is the next complaint we can moan about so I can get
practiced up?

Rob


Keith Brown

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
> Regards,
>
> David Mathog
> mat...@seqaxp.bio.caltech.edu
> Manager, sequence analysis facility, biology division, Caltech
> **************************************************************************
> * RIP VMS *
> **************************************************************************

I was going to respond sooner but Linux crashed and I was busy
doing an fsck.
Seriously, this is the trade-off. You can argue that Unix uses
the better default config, and it may be appropriate for your
environment, but again it may not be for others. At my site we
tend to run bigger (than workstations) OpenVMS machines that
have HSxxx controllers that do the writeback caching (and we use
them on the 3 DU boxes too, BTW), and write performance does
become a non-issue with the HSxxx controllers. I know, I know,
Bill Todd already railed at me for suggesting a hardware solution,
but we felt that the HSxxx's were a better solution for serving
external disks than a SWXCR or some such thing. As you
pointed out earlier, David, you can get a significant I/O boost
by tweaking RMS, and even you pointed out that it was a one-line
tweak. I think even NT people could manage that. I don't mean to
be blunt, but my point is that nothing is perfect all the time.


--
Keith Brown
kbro...@usfamily.net

David Mathog

unread,
May 31, 2000, 3:00:00 AM5/31/00
to
In article <aTT5NM...@eisner.decus.org>, you...@eisner.decus.org (Rob Young) writes:
>In article <8h0n8i$d...@gap.cco.caltech.edu>, mat...@seqaxp.bio.caltech.edu (David Mathog) writes:
>> In article <8gs3ga$jbg$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:
>>>
>>>And of course that last dovetails neatly with your own observation that
>>>people seldom bother with much performance optimization: for such typical
>>>non-performance-optimized applications, Unix therefore makes more efficient
>>>use of the hardware than VMS does, hence is correctly perceived as more
>>>cost-effective.
>>
>> Exactly. And it all boils down to (essentially) a SINGLE difference
>> between the OS's. On a typical lightly loaded, memory rich, workstation,
>> Linux (and probably most other Unices, but I can't say for sure)
>> automatically utilize the unused portions of memory to cache file
>> operations.

<SNIP>

>
> So things aren't moving fast enough for you?
>
> Caching gets much improved in the next go round of VMS (unless
> I've mixed up roadmaps or am misremembering). How did this happen?
> Senior engineer at a DECUS explained that: "remember, VIOC was just
> a stop-gap measure... it wasn't supposed to be around this long"
> or something similar. Release notes showed new sysgen parameters
> for write-behind caching and write delay. Something to look forward
> to. So how did this happen? Fork in the road called Spiralog, from
> what I understand.
>

I'm not counting any of those chickens before they hatch. Remember the
buildup for Spiralog, and how many of us ended up using that? Meanwhile,
back at the farm, for the sorts of applications which I (and everybody else
in my field) run, Linux and Tru64 "out of the box" outperform VMS "out of
the box" on identical hardware by a wide margin on systems which are
"lightly loaded". This is true for virtually every application which "runs
faster on Unix than on OpenVMS". Moreover, it's likely that much of the
weakness we see in FTP server and SMB server performance on OpenVMS is due
to this single difference (with the rest being due to TCP/IP stack
incompatibilities with the client systems).

The point I'm trying to make is that virtually all of the difference in
performance one sees between Unix and OpenVMS is due to the presence of
file caching on the former and its absence on the latter. This is a
problem which is easily identified and SHOULD BE RECTIFIED. (Really it
should have been addressed many years ago but that's another harangue.)
Right now, as others have said, the only Compaq-supplied product which
could improve the situation is an HSZ or some other dedicated storage
controller - but who can afford those for a DS10?

Nor am I saying that OpenVMS file caching need be exactly like that on Unix
- it just needs to give most of the benefits, and do so without the need
for case by case RMS twiddling. Clearly it must preserve the capability
for doing all of the things RMS does now, which really are appropriate and
useful on heavily loaded systems where there is no extra RAM around for
file caching.

> So maybe in a year or less, we put the limited caching behind us
> for those that are at 7.3 and higher and maybe move on to complaining
> that VMS is so primitive because a lot still has to be done
> at a command line. How primitive. MS-DOS is dead... I want things
> to look and feel just like Windows.
>

Oh come on. It's perfectly fair to point out that on small systems under
typical loads for such systems the file caching mechanisms used by Unix
(and WNT) do result in real increases in system performance, and it's
equally fair to point out that many pieces of software ported from Unix
implicitly assume this behavior and so run less efficiently on OpenVMS than
they do on Unix. This thread has nothing to do with GUIs vs. command line,
it's about the real 2X to 3X performance boost that you get with file
caching.
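The 2X-3X figure is the kind of thing anyone can measure for themselves on the Unix side. This is not from the thread; it's a one-file sketch (GNU dd is assumed for the `conv=fsync` flag, and the 64 MB size is arbitrary) showing the gap between a write the page cache absorbs and a write forced to disk:

```shell
#!/bin/sh
# Write the same 64 MB twice: once letting the file cache buffer it,
# once forcing it out to disk before dd exits (GNU dd's conv=fsync).
f=$(mktemp) || exit 1

echo "buffered write:"
time dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null

echo "write forced to disk:"
time dd if=/dev/zero of="$f" bs=1M count=64 conv=fsync 2>/dev/null

# Sanity check: the file should hold exactly 64 MB either way.
size=$(wc -c < "$f" | tr -d ' ')
echo "wrote $size bytes"
rm -f "$f"
```

The first dd can return as soon as the data is in memory; the second only after it reaches the platter, which is where the order-of-magnitude numbers quoted upthread come from.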

Dan Sugalski

unread,
May 31, 2000, 3:00:00 AM5/31/00
to
At 03:33 PM 5/31/00 +0000, David Mathog wrote:
>This thread has nothing to do with GUIs vs. command line,
>it's about the real 2X to 3X performance boost that you get with file
>caching.

Just out of curiosity, do you see any sorts of speedup from doing this:

$ SET RMS/BUFFER=255/BLOCK=127

before running the programs that perform less than wonderfully?

Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski even samurai
d...@sidhe.org have teddy bears and even
teddy bears get drunk

Alan Greig

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to
In article <8h0n8i$d...@gap.cco.caltech.edu>,

mat...@seqaxp.bio.caltech.edu wrote:
> WHY NOT make good use of the unused
> memory in a system? Lord knows you pay enough for it on a DS10!

VMS 7.2-1?

$ HELP SET FILE/CACHING

Of course you need the dead end Spiralog for this to work
but with it you can set writebehind caching on a file or
directory basis and supposedly the best bits of Spiralog
will turn up again sometime.

--
Alan Greig



Dave Weatherall

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to
On Wed, 31 May 2000 15:33:26, mat...@seqaxp.bio.caltech.edu (David
Mathog) wrote:

..


> Oh come on. It's perfectly fair to point out that on small systems under
> typical loads for such systems the file caching mechanisms used by Unix
> (and WNT) do result in real increases in system performance, and it's
> equally fair to point out that many pieces of software ported from Unix
> implicitly assume this behavior and so run less efficiently on OpenVMS than

> they do on Unix. This thread has nothing to do with GUIs vs. command line,


> it's about the real 2X to 3X performance boost that you get with file
> caching.

David, you keep quoting this 2x/3x READ performance boost, and I
seem to remember you posted some figures to illustrate the point.
However, I also seem to remember a post by Arne (I think) showing
read performance across VMS, Unix and NT to be approximately equal.

Now what I think you're pointing out is that for an application which
creates lots of small files and then reads them back again, the
read/write-back caching mechanisms of NT/Unix provide an advantage.
That I can understand and appreciate. However, is that the typical VMS
scenario? In our environment, my users use the
cross-assembler/link/build suite to build an OFP (downloadable to the
target computer). This process consists of 2 passes across up to 230
source files, writing the object files (and list files for each when
required), insertion of the files into the object libraries, and then 2
passes across the objects to create an image/map file.

All the tools use tuned RMS to access the source/object/image files,
and 10 years ago the process took 50/55 minutes on an 8820 (VAX);
now it's 6 on a 2100 (AXP). However, even on a 4000/108 (VAX) it's
still only about 8/9 mins (IIRC). Certainly as fast as our 8-year-old
AXP 4000 (600 I think).

The biggest performance boosts I gained by program changes were:

1. Using hashing to do the symbol table management instead of the
   binary search/insert that I inherited.
2. Using the VMS library routines to enable my linker to need to only
   open one or two files when linking.
3. Using virtual memory to build my loadable image instead of a file.
4. Specifying my initial file size to the value I know it's going to be
   when I create the downloadable file.

The second speeded up linking quite a bit, mainly because it avoided
the file-open overhead of RMS. I've always understood this to be a
by-product of VMS security. I've no complaint.

No. 3 - well, it just made more sense :-)

No. 4 - again, simple common sense 'cos it avoids the penalties of
$EXTEND.

Now the point I'm leading to is that the largest amount of READ i/o I
do is the source files and I'm not convinced that UNIX per se would be
particularly quicker than VMS here and still provide me with the same
level of C2 security. Similarly, as the same source files can be in
shared use by a concurrent build on another node maybe I'm better off
with the caching being done by the controller anyway.

Horses for courses.

RMS has its disadvantages compared to the UNIX/DOS cooked/raw options. It is
ultimately slower measured in pure record access, but RMS means I don't
have to manage the records anymore. It gives me indexed, sequential,
direct access, variable/fixed-length records, etc. That gives me
performance gains in my applications and makes them easier to
maintain.

Cheers - Dave.


Bill Todd

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to

Dave Weatherall <djw...@attglobal.net> wrote in message
news:DTiotGxQ0bj6-pn2-hpj8rWJ7JpRI@localhost...

...

> Now the point I'm leading to is that the largest amount of READ i/o I
> do is the source files and I'm not convinced that UNIX per se would be
> particularly quicker than VMS here

Might not be. But if the source files are large, Unix's ability to
pre-fetch automatically could help.

> and still provide me with the same
> level of C2 security.

C2 security is C2 security, it doesn't come in levels. Some Unixes provide
it (don't happen to know which). Seems unlikely caching has much to do with
it. Larry might know.

> Similarly, as the same source files can be in
> shared use by a concurrent build on another node maybe I'm better off
> with the caching being done by the controller anyway.

But you wouldn't be if VMS had ever done distributed caching right (last I
knew, they were finally heading in the direction of node-to-node
data-sharing as part of the caching enhancements Rob mentioned elsewhere).

>
> Horses for courses.
>
> RMS has its disadvantages over the UNIX/DOS cooked/raw options. It is
> utimately slower measured in pure record access

Needn't be. Counted records certainly have some performance advantages over
delimited records, and there's no intrinsic reason why RMS processing
comparable to Unix/DOS non-record processing should take many more
instructions.

Most of the perceived slowness of RMS is likely related to the caching
issues discussed elsewhere, though a certain amount of pure code bloat has
likely accumulated as well over the past 22 years. The other major problem
with RMS is that it takes so many hundreds of pages to describe how to use
all its options, but that's a programmer-performance rather than a
processing-speed issue.

> but RMS means I don't
> have to manage the records anymore. It gives me Indexed, sequential,
> direct access, variable/fixed length records etc.

There are similar relatively standard packages available on Unix, but they
aren't integrated with the OS (nor need they be, since for the most part
they're process-level code - just like RMS on the 11 - though providing them
with the system would certainly help ensure inter-operability across
applications).

- bill

Larry Kilgallen

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to
In article <8h6505$k76$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:
>
> Dave Weatherall <djw...@attglobal.net> wrote in message
> news:DTiotGxQ0bj6-pn2-hpj8rWJ7JpRI@localhost...
>
> ...
>
>> Now the point I'm leading to is that the largest amount of READ i/o I
>> do is the source files and I'm not convinced that UNIX per se would be
>> particularly quicker than VMS here
>
> Might not be. But if the source files are large, Unix's ability to
> pre-fetch automatically could help.
>
>> and still provide me with the same
>> level of C2 security.
>
> C2 security is C2 security, it doesn't come in levels. Some Unixes provide
> it (don't happen to know which). Seems unlikely caching has much to do with
> it. Larry might know.

When an operating system is evaluated at the C2 level (or any other
level) the evaluation may include some particular required parameter
settings (the VMS system parameter SECURITY_POLICY is an example).

If any Unix operating system got a C2 evaluation that required caching
to be turned off, it would say so in the evaluation report. Beyond
that, NSA considers all C2-evaluated systems to be at the same level.
There is no more-C2-than-thou concept.

From my perspective, the main hazards of caching have to do with the
opportunity to scramble your data on disk by crashing at the wrong
moment. That seems more a denial-of-service than a C2 issue.

Rob Young

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to

It's on this PPT Roadmap on slide 18:

"Extended File Cache V 1.0

- Read ahead caching on sequential files
- Greater than 100 files in cache
- Larger cache size"

With a 2001 timeframe. That was the Feb Roadmap. If you look
at the one found there today, they also have added:

" Extended File Cache V 2.0

- Write sharing in a cluster
- Write behind caching
- User Controls
- SMP Performance boost
- Galactic Common Memory usage"

All part of the base OS. Spiralog was a bolt-on. Yes, it went
away and a few may have been saddened. But your point about
the caching isn't a good one... it isn't "just a buildup", and
even a jaded reading of the roadmaps would indicate it must
be something they are working on, with XFC V1.0 slated for
VMS version 7.3.

>
> The point I'm trying to make is that virtually all of the difference in
> performance one sees between Unix and OpenVMS is due to the presence of
> file caching on the former and its absence on the latter. This is a
> problem which is easily identified and SHOULD BE RECTIFIED.

Well, no kidding! Let's back up a tiny bit:

>>
>> So things aren't moving fast enough for you?
>>

Maybe not for others either. But we could wish in one hand
and spit in the other hand and we would have the same results
concerning this. No change.

> (Really it
> should have been addressed many years ago but that's another harangue.)
> Right now, as others have said, the only Compaq supplied product which
> could improve the situation is an HSZ or some other dedicated storage
> controller - but who can afford those for a DS10?
>

So? Run Linux on it then. Better caching isn't going to
show up faster just because we wish it would.

> Nor am I saying that OpenVMS file caching need be exactly like that on Unix
> - it just needs to give most of the benefits, and do so without the need
> for case by case RMS twiddling. Clearly it must preserve the capability
> for doing all of the things RMS does now, which really are appropriate and
> useful on heavily loaded systems where there is no extra RAM around for
> file caching.
>

Exactly like? How about better when it gets here. Think
about the last line for a bit there:

- Galactic Common Memory usage

Tell me what that means to you.

>> So maybe in a year or less, we put the limited caching behind us
>> for those that are at 7.3 and higher and maybe move on to complaining
>> that VMS is so primitive because a lot still has to be done
>> at a command line. How primitive. MS-DOS is dead... I want things
>> to look and feel just like Windows.
>>
>

> Oh come on. It's perfectly fair to point out that on small systems under
> typical loads for such systems the file caching mechanisms used by Unix
> (and WNT) do result in real increases in system performance, and it's
> equally fair to point out that many pieces of software ported from Unix
> implicitly assume this behavior and so run less efficiently on OpenVMS than
> they do on Unix. This thread has nothing to do with GUIs vs. command line,
> it's about the real 2X to 3X performance boost that you get with file
> caching.
>

No kidding. My point was plainly stated in the very first line:

>>
>> So things aren't moving fast enough for you?
>>

It isn't as if they are sitting around up there over a cup
of coffee in an a.m. meeting *last week* and a hand went up in
the back of the room:

"Hey, I gotta an idea. Howsa 'bout we fix up our caching.
Seems we should be able to do better than a hundred files
at a time and maybe do like some of the Unix boxes I have
heard about that have better caching."

It's a work in progress. Get it?????

Is there anything else we can moan about? I really need to
get practiced up...

Rob


Rob Young

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to
In article <xxAEDA...@eisner.decus.org>, you...@eisner.decus.org (Rob Young) writes:

>
> It's on this PPT Roadmap on slide 18:
>

Which roadmap? This roadmap:

http://WWW.OPENVMS.DIGITAL.COM/openvms/roadmap/openvms_roadmaps.htm


Rob Young

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to
In article <8h6h2o$c4h$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:
>
> Rob Young <you...@eisner.decus.org> wrote in message
> news:xxAEDA...@eisner.decus.org...

>> In article <8h3bc6$2...@gap.cco.caltech.edu>, mat...@seqaxp.bio.caltech.edu
> (David Mathog) writes:
>
> ...

>
>> > I'm not counting any of those chickens before they hatch. Remember the
>> > buildup for Spiralog, and how many of us ended up using that?
>>
>> It's on this PPT Roadmap on slide 18:
>>
>> "Extended File Cache V 1.0
>>
>> - Read ahead caching on sequential files
>> - Greater than 100 files in cache
>> - Larger cache size"
>>
>> With a 2001 timeframe. That was the Feb Roadmap. If you look
>> at the one found there today, they also have added:
>>
>> " Extended File Cache V 2.0
>>
>> - Write sharing in a cluster
>> - Write behind caching
>> - User Controls
>> - SMP Performance boost
>> - Galactic Common Memory usage"
>>
>> All part of the base OS. Spiralog was a bolt-on. Yes it went
>> away and a few may have been saddened. But your point about
>> the caching isn't a good one... it isn't "just a buildup"
>
> I'd be curious to know why you believe that in some way Spiralog was not as
> seriously-integrated an effort as the EFC work is: I would have
> characterized Spiralog as both considerably more ambitious and requiring
> considerably more extensive integration.
>

Oh, it was a serious effort, and perhaps we are parsing terms
incorrectly again. XFC, I imagine, will be turned on by default
with some minimum settings (maybe, maybe not). Easy to understand
and whatnot. After all, AIX (the Unix I am most familiar with) caches
files in available memory... no setup necessary. I am sure
something similar will be the default with XFC.

Spiralog required a deliberate effort to set up, wasn't the default,
and once something was a Spiralog volume it required work to undo.

> So while we can certainly hope that the EFC work turns out to be more
> worthwhile, I don't see David's comment as inappropriate.
>

What do you mean, specifically? i.e. cite an example of what
he said that you don't disagree with and I will let you know
if I don't disagree with it either. After all, much of
my comments to his comment began with: "No Kidding" followed
by a counter-point.

> ...


>
>> > Nor am I saying that OpenVMS file caching need be exactly like that on
> Unix
>> > - it just needs to give most of the benefits, and do so without the need
>> > for case by case RMS twiddling. Clearly it must preserve the capability
>> > for doing all of the things RMS does now, which really are appropriate
> and
>> > useful on heavily loaded systems where there is no extra RAM around for
>> > file caching.
>> >
>>
>> Exactly like? How about better when it gets here. Think
>> about the last line for a bit there:
>>
>> - Galactic Common Memory usage
>>
>> Tell me what that means to you.
>

> Not much, save in those atypical (though not truly rare) cases in which
> multiple nodes are sharing the same data, in which it primarily makes more
> efficient use of total box memory than a (good) distributed cache where that
> shared data would instead be replicated on a per-partition basis. Of
> course, you pay the price of 3x slower access (and additional
> inter-partition synchronization overhead) on *all* references to such
> centrally-cached data, whether the data in which you're interested is shared
> or not.
>
> While the listed EFC work should certainly be a general improvement over the
> current performance state w.r.t. this particular limitation, the real
> question for most users will be whether it does as good a job in a
> single-system environment as a Unix-style cache does - and for that, as
> David says, we'll just have to wait and see, or at the very least wait until
> design details are released.
>

Single system? That's easy. But what about a 16-processor
system that would in the normal Unix world be a single system but
does better as 2 VMS systems (separate VMS instances sharing
resources)?

>>
>> >> So maybe in a year or less, we put the limited caching behind us
>> >> for those that are at 7.3 and higher
>

> Not unless the EFC V2 release follows so quickly on the heels of EFC V1 that
> it makes it into 7.3: write-back caching isn't listed for V1, and that's a
> *big* part of the difference (especially in current VMS environments that
> have configured below-file-system-level read cache).
>

It's there on a roadmap. I didn't applaud timelines.

> ...


>
>> Is there anything else we can moan about? I really need to
>> get practiced up...
>

> My suspicion is that at least a portion of David's annoyance stems from the
> knee-jerk reactions to the effect that "VMS don't need no stinkin' Unix
> features! It's better by definition, and that's all there is to it!" when
> he presumed to suggest that Unix had performance advantages in certain areas
> that VMS might do well to consider. Poring over Compaq OpenVMS Web site
> material is not a pre-requisite for participation in comp.os.vms, the lack
> of any mention of the road map information until now (when he started
> talking about this issue months ago) is sufficient indication that it was
> not exactly foremost in the minds of other people either, and even if he had
> been aware of it not only are the details insufficient to indicate whether
> it will be comparable to the Unix facilities but he should not be blamed for
> wondering, on the basis of past future plans for VMS enhancements, whether
> it would appear on time (one who should know has suggested to me that it may
> originally have been slated for 7.2) and in full regalia.
>

Well...

Specifically:

"VMS don't need no stinkin' Unix
features! It's better by definition, and that's all there is to it!"

That would be a mischaracterization of my criticisms of his
comments. Sticking back in what you trimmed, which of course
shows I am not of that ilk, is this section:

---

> Oh come on. It's perfectly fair to point out that on small systems under
> typical loads for such systems the file caching mechanisms used by Unix
> (and WNT) do result in real increases in system performance, and it's
> equally fair to point out that many pieces of software ported from Unix
> implicitly assume this behavior and so run less efficiently on OpenVMS than
> they do on Unix. This thread has nothing to do with GUIs vs. command line,
> it's about the real 2X to 3X performance boost that you get with file
> caching.
>

No kidding. My point was plainly stated in the very first line:

>>
>> So things aren't moving fast enough for you?
>>

It isn't as if they are sitting around up there over a cup
of coffee in an a.m. meeting *last week* and a hand went up in
the back of the room:

"Hey, I gotta an idea. Howsa 'bout we fix up our caching.
seems we should be able to do better than a hundred files
at a time and maybe do like some of the Unix boxes I have
heard about that has better caching."

It's a work in progress. Get it?????

--

Adding some interpretation to make sure I'm not
misunderstood, I mean that as:

"VMS caching is lacking.. I am sure that VMS engineering
is well aware of how others do caching. Roadmaps
show that caching development is well underway."

>>
>> So things aren't moving fast enough for you?
>>

Maybe they aren't moving fast enough for others either.
Me? I've got all the write-back caching I need in controllers.
But others aren't as fortunate.

Rob


Glenn C. Everhart
Jun 1, 2000
The new VMS caching system has been in the works for quite a
while now and is much more heavily into the VMS kernel than
Spiralog ever was. There are a number of I/O system projects
however which it must not break, and there is some concern not
to break 3rd party apps. When you think of the virtual disks,
remote caching systems, 3rd party cachers, multipath failover,
shadow drivers, software RAID, and a bunch more whose code must
not be broken, and various bits that are still in the works which
need also not to break, you may begin to see some of the complexity
involved. Spiralog was after all able to use a process space
for cache. It is better, but harder to get right, in kernel.

I'm glad to hear the wait is NEARLY over.

Bill Todd
Jun 1, 2000

Rob Young <you...@eisner.decus.org> wrote in message
news:xxAEDA...@eisner.decus.org...
> In article <8h3bc6$2...@gap.cco.caltech.edu>, mat...@seqaxp.bio.caltech.edu
> (David Mathog) writes:

...

> > I'm not counting any of those chickens before they hatch. Remember the
> > buildup for Spiralog, and how many of us ended up using that?
>
> It's on this PPT Roadmap on slide 18:
>
> "Extended File Cache V 1.0
>
> - Read ahead caching on sequential files
> - Greater than 100 files in cache
> - Larger cache size"
>
> With a 2001 timeframe. That was the Feb Roadmap. If you look
> at the one found there today, they also have added:
>
> " Extended File Cache V 2.0
>
> - Write sharing in a cluster
> - Write behind caching
> - User Controls
> - SMP Performance boost
> - Galactic Common Memory usage"
>
> All part of the base OS. Spiralog was a bolt-on. Yes it went
> away and a few may have been saddened. But your point about
> the caching isn't a good one... it isn't "just a build up"

I'd be curious to know why you believe that in some way Spiralog was not as
seriously-integrated an effort as the EFC work is: I would have
characterized Spiralog as both considerably more ambitious and requiring
considerably more extensive integration.

So while we can certainly hope that the EFC work turns out to be more
worthwhile, I don't see David's comment as inappropriate.

...

> > Nor am I saying that OpenVMS file caching need be exactly like that on Unix
> > - it just needs to give most of the benefits, and do so without the need
> > for case by case RMS twiddling. Clearly it must preserve the capability
> > for doing all of the things RMS does now, which really are appropriate and
> > useful on heavily loaded systems where there is no extra RAM around for
> > file caching.
> >
>
> Exactly like? How about better when it gets here. Think
> about the last line for a bit there:
>
> - Galactic Common Memory usage
>
> Tell me what that means to you.

Not much, save in those atypical (though not truly rare) cases in which
multiple nodes are sharing the same data, in which it primarily makes more
efficient use of total box memory than a (good) distributed cache where that
shared data would instead be replicated on a per-partition basis. Of
course, you pay the price of 3x slower access (and additional
inter-partition synchronization overhead) on *all* references to such
centrally-cached data, whether the data in which you're interested is shared
or not.

While the listed EFC work should certainly be a general improvement over the
current performance state w.r.t. this particular limitation, the real
question for most users will be whether it does as good a job in a
single-system environment as a Unix-style cache does - and for that, as
David says, we'll just have to wait and see, or at the very least wait until
design details are released.

>


> >> So maybe in a year or less, we put the limited caching behind us
> >> for those that are at 7.3 and higher

Not unless the EFC V2 release follows so quickly on the heels of EFC V1 that
it makes it into 7.3: write-back caching isn't listed for V1, and that's a
*big* part of the difference (especially in current VMS environments that
have configured below-file-system-level read cache).

...

> Is there anything else we can moan about? I really need to
> get practiced up...

My suspicion is that at least a portion of David's annoyance stems from the
knee-jerk reactions to the effect that "VMS don't need no stinkin' Unix
features! It's better by definition, and that's all there is to it!" when
he presumed to suggest that Unix had performance advantages in certain areas
that VMS might do well to consider. Poring over Compaq OpenVMS Web site
material is not a pre-requisite for participation in comp.os.vms, the lack
of any mention of the road map information until now (when he started
talking about this issue months ago) is sufficient indication that it was
not exactly foremost in the minds of other people either, and even if he had
been aware of it not only are the details insufficient to indicate whether
it will be comparable to the Unix facilities but he should not be blamed for
wondering, on the basis of past future plans for VMS enhancements, whether
it would appear on time (one who should know has suggested to me that it may
originally have been slated for 7.2) and in full regalia.

- bill

>
> Rob
>

Larry Kilgallen
Jun 2, 2000
In article <8h7l78$jpg$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:
> Talking to a brick wall sometimes seems more effective: at least one's
> expectations can be set reasonably up front.
>
> David's comment "I'm not counting any of those chickens before they hatch"
> seems, at least to me, to have a very obvious, and reasonable,
> interpretation: Spiralog was planned, was hyped, actually shipped, and -
> flopped. The same thing could happen with EFC, and its appearance on the
> road map, in which you place so much faith, means approximately as much as
> Spiralog's did (at least I assume Spiralog appeared on a road map at some
> point).

Appearance on a Roadmap is not sufficient. The reader must examine
the characteristics described and decide whether they provide any
benefit for the reader's own circumstances.

People I know who saw Spiralog on the Roadmap universally had the
reaction "it is interesting that they have a file system alternative
for those with write-mostly applications, but my applications are not
write-mostly".

When I read the pain and anguish about future VMS disk caching not
having the fullest support for write at the start, I feel it meets
my needs quite well. My application is running compilers. They
take in many source files and produce fewer object files. Changes
happen to approximately one source file before compilation, so if
all the other source files were still cached in memory I would be
a happy camper. Or maybe it flushes the cache when I close the
file, in which case I will not be helped. Can anyone answer that ?
It would be much more interesting to me than discussion of what
Linux does.

Bill Todd
Jun 2, 2000

Rob Young <you...@eisner.decus.org> wrote in message
news:Uz8oKE...@eisner.decus.org...
> In article <8h7l78$jpg$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com>
> writes:

> > Talking to a brick wall sometimes seems more effective: at least one's
> > expectations can be set reasonably up front.
> >
> > David's comment "I'm not counting any of those chickens before they hatch"
> > seems, at least to me, to have a very obvious, and reasonable,
> > interpretation: Spiralog was planned, was hyped, actually shipped, and -
> > flopped. The same thing could happen with EFC, and its appearance on the
> > road map, in which you place so much faith, means approximately as much as
> > Spiralog's did (at least I assume Spiralog appeared on a road map at some
> > point).
> >
>
> Ah, now we are in the realm of "it happened before, it
> can happen again." Sure. But then the question becomes:
>
> "Do you think it will?"
>
> Is it worth a wager to you?

I've already said that I don't *expect* EFC to be the kind of failure
Spiralog was (though it could happen). And I can't remember the last time I
wagered on anything, though I'm reasonably sure it was well before VMS
appeared on the scene: I don't have anything against wagering, it's just
not of interest to me (if it were, I suspect I'd spend my time working the
market instead of this kind of thing).

However, my expectation is that, while it should improve current default
performance considerably, default file system performance with EFC V2 will
likely still fall somewhat short of Unix default file system performance (at
least for some of the better implementations, like SGI's): some of the
optimizations involve things like cooperating with the file system to defer
specific space allocation until time of actual disk write (there's no
indication I know of that EFC is getting into stuff like that), while others
aren't cache-related at all (e.g., the use of a log to allow small
synchronous writes to be made persistent immediately in an efficient
manner).

- bill

>
> Rob
>

IanPercival
Jun 2, 2000
I don't read the news group very often as I'm usually busy doing other
things - but there does seem to be an awful lot of discussion about VMS and
file data caching. I don't particularly want to get involved in any of the
various discussions - but there are an awful lot of misconceptions being
bandied about. Feel free to email me at ian.pe...@compaq.com if you have
serious questions or discussion.

VMS has two sets of data caching.
1. RMS. It can use Local or Global Buffers. Some manual setting and control
is required in order to use these if the application isn't using them
automatically.

2. System wide data caches.
First of all I agree that the system wide central data cache that VMS
has/had, known variously as VCC or VIOC, which was first implemented in VAX VMS
V6.0 a number of years ago, is pretty poor by many performance metrics. It
actually got worse, as functionality such as dynamic memory sizing got
dropped during the port to Alpha. This is the reason there are a myriad of
third party implementations of caches (OK, maybe not quite that many - but
quite a few!), all of which worked better than VCC.

A new system wide file data cache called XFC (stands for extended file
cache) has been developed. It is a true 64-bit cache - it can store TB of
data if you have the memory.
It can store over 100 CLOSED files (the VCC limitation was that it would
only store up to 100 closed files - not files in total!). XFC has no
limitation on number of closed files it will store - apart from the obvious
one of memory usage.
It can cache I/Os larger than 34 blocks (VCC was hard limited to 34).
It can perform readahead where appropriate.
It provides performance statistics such as absolute I/O response times in
microseconds.
It is dynamic or static or both in terms of memory usage.
My Alpha boots much quicker when using it! Many benchmarks are significantly
better when using it.

XFC V1.0 is about to enter the field test cycle. By the time it is
released, it should have some other major performance features added -
making it perform even better for users of medium to large machines.

Hope this helps a bit!

Ian Percival
XFC Project Leader
OpenVMS Engineering
(writing from home on my son's birthday!)

Dave Gudewicz
Jun 2, 2000

Thanks for this update Ian and more importantly in the long run, Happy
Birthday to your son!!

Dave...

IanPercival <IanPe...@email.msn.com> wrote in message
news:#WeqE3Lz$GA.347@cpmsnbbsa08...

David Mathog
Jun 2, 2000
In article <PjT1Qf...@eisner.decus.org>, you...@eisner.decus.org (Rob Young) writes:
>
>"I'm not counting any of those chickens before they hatch."
>
> In fairness to David, maybe he didn't know about filesystem
> caching futures BUT I do know that has been mentioned more than
> once in this forum. So in examination of the evidence his
> response more than reveals his bent. He's a "glass is half full"
> kinda guy

Actually, I'm a "I can't load it now, and will believe it only when I can"
kind of guy. Also a "what in the hell could have been a higher priority
all these years than improving disk IO throughput 10X?" kind of guy.

Besides, regarding disk IO, my performance tests for the programs we run
here under our typical load indicate that the glass is way below half full,
in some tests, it's barely even moist. However, for CPU bound jobs it's
100% full. For security it's about 400% full.

>
> But it is a lose-lose proposition for those that want to use it
> today. No wonder some of those folks are "glass half full" kinda
> folks.

It's also going to be a big loser if DII-COE goes in before an improved
caching file system does. The benchmarks/test programs are all written
to run, essentially, on a Solaris system. There is little tuning that can
be done to improve them, and direct access to RMS would seem to be out of
the question (in terms of compliance with the standard). Since they will
only use write() and fprintf() many of the IO intensive ones will run like
dogs on OpenVMS as it is now.

Jojimbo
Jun 2, 2000
at the risk of getting back to the original subject...

Having just spent the best part of a week understanding, so I
can modify it, a completely async AST driven IO program, I think
I understand a bit more why the kids like U***X.

Because it's simple!

No need to do async IO, the operating system does it. Just
read and write, so race conditions, buffers to manage until the
IO completes, etc... are not an issue. At least to the
programmer! Now the users and the customers and the operators
might be in big trouble, but the programmer is done and gone!

Sigh,

Jim

p.s.
By the way. "I have a Java Class that will do all that, although
you may have to change the interface a bit". (heard over the
cubicle wall this afternoon) Double sigh.

Bill Todd
Jun 2, 2000

Jojimbo <jgesslin...@yahoo.com.invalid> wrote in message
news:020eceba...@usw-ex0102-084.remarq.com...

> at the risk of getting back to the original subject...
>
> Having just spent the best part of a week understanding, so I
> can modify it, a completely async AST driven IO program, I think
> I understand a bit more why the kids like U***X.
>
> Because it's simple!
>
> No need to do async IO, the operating system does it.

That's only true in some cases, automatic multi-buffering being perhaps the
most significant.

Some more cases are handled, though not transparently, by using multiple
threads (each using synchronous I/O) in a single process where, in the days
before system-supported threads existed, you would have had to use the
single-threaded/asynchronous-I/O whirling dervish approach. This works
pretty well when the required per-thread overhead is reasonable and the
operations performed by the individual threads are essentially sequential in
nature (but interact in ways that can be synchronized - and shared - more
efficiently within a single process than across multiple processes, which
was the old traditional Unix substitute for asynchrony and/or
multi-threading).

But when it comes to rats-nests like databases or distributed file systems,
where disk accesses and distributed network operations pop up asynchronously
and multiple times within the course of a single operation, there's still no
comparably-efficient alternative to having just enough threads to keep all
the available physical processors busy and having each such thread work its
little tail off asynchronously. Of course, such (often kernel)
'applications' nicely hide such complexity from their clients (just like the
Unix file system itself does, though I'm not sure how common real
down-and-dirty asynchrony is even in Unix kernels: it's so - unUnixy), but
the best (or at least most powerful) implementations pass support for
asynchrony up to the application level so that the applications can do the
same kind of thing when they really need to.

So the real question is whether your 'asynch AST driven IO program' could
have benefited from using threads to avoid its asynchrony or avoided it by
virtue of Unix's automatic system support for multi-buffering. If the
former, doesn't VMS also support kernel-based threads? If the latter, RMS
provides such mechanisms, albeit you have to invoke them.

In either case, VMS gives you the means to avoid those nasty ASTs (at least
any you can see) and still get the same performance you can get with Unix -
it just doesn't do this *by default*, which, while important for the casual
user (who no way is going to act differently anyway, but will just complain
about performance), is not the same as *forcing* you into the depths of
complexity you've been exploring. And if the application you're fiddling
with *really needed* to be AST-driven on VMS, then it would have needed to
be coded in just about the same manner in a Unix environment - except that
you'd be hard-pressed to find a Unix environment with as comprehensive
support for asynchrony (when you really need it) as VMS has.

- bill

Keith Brown
Jun 2, 2000
Bill Todd wrote:
>
> Rob Young <you...@eisner.decus.org> wrote in message
> news:Uz8oKE...@eisner.decus.org...
> > In article <8h7l78$jpg$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com>

As you are fond of saying about NT, "It may not be perfect but
it is good enough"

--
Keith Brown
kbro...@usfamily.net

Keith Brown
Jun 2, 2000

How difficult is this in PL/I? What can be simpler?

OPEN FILE (MY_OUTFILE) TITLE ('MY_OUTFILE.dat') OUTPUT
ENVIRONMENT (BUFFERS(255),WRITEBEHIND);

WRITE FILE (MY_OUTFILE) FROM (BUFFER);

CLOSE FILE (MY_OUTFILE);

--
Keith Brown
kbro...@usfamily.net

Larry Kilgallen
Jun 3, 2000
In article <8h9dpt$k...@gap.cco.caltech.edu>, mat...@seqaxp.bio.caltech.edu (David Mathog) writes:

> It's also going to be a big loser if DII-COE goes in before an improved
> caching file system does. The benchmarks/test programs are all written
> to run, essentially, on a Solaris system. There is little tuning that can
> be done to improve them, and direct access to RMS would seem to be out of
> the question (in terms of compliance with the standard). Since they will
> only use write() and fprintf() many of the IO intensive ones will run like
> dogs on OpenVMS as it is now.

VMS has to meet the standard, but I don't think the standard has
a performance requirement.

That is not to say that performance is irrelevant, but that the
order of file caching vs. DII-COE support may not be critical.

We did have a post from someone working on file caching, but none
from someone working on DII-COE :-).

Bill Todd
Jun 3, 2000

Keith Brown <kbro...@usfamily.net> wrote in message
news:39388193...@usfamily.net...

...

> How difficult is this in PL/I. What can be simpler?

1) Using a language that the average developer (who, after all, is exactly
the developer who *won't* do much if anything in the way of optimization) is
likely to select.

2) Making the desired behavior transparent rather than requiring that it be
specified (though making it easy to obtain, albeit explicitly, is certainly
better than keeping it truly obscure).

As for your other comment elsewhere, if VMS performance is 'good enough' for
you even if it doesn't match Unix performance, rejoice and be happy. But
don't assume that's 'good enough' for everyone, or even a majority -
especially given purchasing situations where VMS is typically the system
that must justify being considered against competition that sets the
expectations by virtue of its industry acceptance.

- bill

Keith Brown
Jun 3, 2000

> Bill Todd wrote:

> 1) Using a language that the average developer (who, after all, is exactly
> the developer who *won't* do much if anything in the way of optimization) is
> likely to select.
>

I am always amazed at the number of people who choose C (the
language from hell) and then complain that they can't get any
work done. Yes, I know that C is popular, yet that fact does
not always make it the best choice. As many have said
before me, "we have a standardized language now, too bad it is
C".

>
> 2) Making the desired behavior transparent rather than requiring that it be
> specified (though making it easy to obtain, albeit explicitly, is certainly
> better than keeping it truly obscure).
>

We certainly can't expect software developers to RTFM can we :).
Bill, I've been using VMS for over 16 years. I never saw VMS
while I was in school BTW; it all came after I started working.
In the last 2 years I have been spending significant time
learning Linux. RTFMing is what I have to do. I do not find it
intuitive as you would imply, but I do see many similarities to
VMS. I like Linux but find that it has some funky features
that require some learning, and as many times as not they are
more difficult to deal with than on VMS. I also find it to be no
easier to learn than VMS, which was easy BTW. If we are to be
dependent on SW developers that can only write code for the OS
they saw in school, we will never get anywhere, will we? My point
is that SW developers need to RTFM for ANY system they code on.
If they don't, we won't buy their SW, will we?

>
> As for your other comment elsewhere, if VMS performance is 'good enough' for
> you even if it doesn't match Unix performance, rejoice and be happy. But
> don't assume that's 'good enough' for everyone, or even a majority -
> especially given purchasing situations where VMS is typically the system
> that must justify being considered against competition that sets the
> expectations by virtue of its industry acceptance.
>

What my comment elsewhere said was that our VMS performance is
AS GOOD as Unix due to the use of external controllers. Note
also that even though Unix does have a default performance edge
on I/O we still chose to use HSZxx controllers on the Unix
systems for reliability reasons as we did on VMS. There is no
free lunch. What Unix gains in I/O performance it loses in
reliability. Go ahead, ask about the AdvFS restore we did a few
months back after DU crashed before flushing the cache.


--
Keith Brown
kbro...@usfamily.net

Bill Todd
Jun 3, 2000

Keith Brown <kbro...@usfamily.net> wrote in message
news:39394525...@usfamily.net...

>
> > Bill Todd wrote:
>
> > 1) Using a language that the average developer (who, after all, is exactly
> > the developer who *won't* do much if anything in the way of optimization) is
> > likely to select.
> >
>
> I am always amazed at the number of people who choose C (the
> language from hell) and then complain that they can't get any
> work done. Yes, I know that C is popular, yet that fact does
> not always make it the best choice. As many have said
> before me, "we have a standardized language now, too bad it is
> C".

Whether you are amazed is completely irrelevant. Wake up and smell the
coffee: people use C/C++ in preference to other languages whether or not
you approve, and users evaluate system performance based in large part on
the applications such people create ('cause those are the applications they
run).

>
> >
> > 2) Making the desired behavior transparent rather than requiring that it be
> > specified (though making it easy to obtain, albeit explicitly, is certainly
> > better than keeping it truly obscure).
> >
>
> We certainly can't expect software developers to RTFM can we :)

Once again, your own inclinations in this area are irrelevant if they
don't reflect what most developers do in the real world. Pat yourself on
the back all you want, but don't presume that this makes any difference to
the way the rest of the world works (hint: a great many application
developers are in it for the money, and time spent learning a relatively
obscure system so that their ported application will perform better there
may well not be recouped by increased sales due to that improved performance
in that relatively unimportant - for them - environment).

And if developers may be willing to RTFM to use Linux (or any Unix) but not
to use VMS, well, that's a reality VMS has to accept (and adjust to if
feasible), 'cause it's behavior that's existed for the past decade-plus and
short of paying the entire software development community to attend VMS
familiarization classes there's no likelihood it's going to change any time
soon.

But in the particular area under discussion, the more important point is
that developers *don't* have to RTFM to get good file system performance out
of Unixes whereas they *do* on VMS.

> Bill, I've been using VMS for over 16 years. I never saw VMS
> while I was in school BTW, it all came after I started working.
> In the last 2 years I have been spending significant time
> learning Linux. RTFMing is what I have to do. I do not find it
> intuitive as you would imply, but do see many similarities to
> VMS. I like Linux but do find that it has some funky features
> that require some learning and as many times as not they are
> more difficult to deal with than on VMS. I also find it to be no
> easier to learn than VMS, which was easy BTW. If we are to be
> dependent on SW developers that can only write code for the OS
> they saw in school we will never get anywhere will we?

That's exactly how VMS got into the position it enjoys today. And -
surprise! - the world still moves on, even if VMS doesn't keep pace with it
(in areas like acceptance and market share). And the systems that
developers encountered in school are (belatedly) starting to offer some of
the VMS features (first extent-based file systems, then asynchrony, albeit
sometimes limited, and most recently primitive clustering) that the market
actually seems to value.

> My point
> is that SW developers need to RTFM for ANY system they code on.
> If they don't, we won't buy their SW, will we?
>
> >
> > As for your other comment elsewhere, if VMS performance is 'good enough' for
> > you even if it doesn't match Unix performance, rejoice and be happy. But
> > don't assume that's 'good enough' for everyone, or even a majority -
> > especially given purchasing situations where VMS is typically the system
> > that must justify being considered against competition that sets the
> > expectations by virtue of its industry acceptance.
> >
>
> What my comment elsewhere said was that our VMS performance is
> AS GOOD as Unix due to the use of external controllers.

Not the comment I was referring to (11:39 P.M. EDT 6/2/00), which asserted
that EFC V2 performance (in the context of otherwise default environments)
would be 'good enough' (in general, not specifically for you - and since I
don't recall any other comment of yours to that effect, your confusion on
this point seems curious, though I do remember a post of yours some time
back, which I can't find in the recent ancestry of this thread, that
indicated you used hardware write-back caching). Perhaps what you mean is
that this was what you had in your head when you wrote (in response to the
statement of my own that precedes it) what I reproduce below:

---

> EFC V2 will
> likely still fall somewhat short of Unix default file system performance (at
> least for some of the better implementations, like SGI's)

As you are fond of saying about NT, "It may not be perfect but
it is good enough"

--
Keith Brown

---

> Note
> also that even though Unix does have a default performance edge
> on I/O we still chose to use HSZxx controllers on the Unix
> systems for reliability reasons as we did on VMS.

I'm curious what you mean by the above: write-back caching in stable memory
(vs. writing to disk) is purely a performance optimization, it has
absolutely nothing to do with reliability.

> There is no
> free lunch. What Unix gains in I/O performance it loses in
> reliability.

This is pure bullshit, and I'm getting tired of hearing it. Even if some
Unix write-back-cache implementations may have been buggy (hell, some still
may be - I have my doubts about Linux's ext2fs), that does not reflect any
limitation of the architecture. I'm not familiar enough with the full range
of implementations to state that any are as bug-free as ODS-2 likely is, but
there are certainly good ones out there (Veritas at least has an excellent
reputation, and its use of a log allows it to provide good performance for
synchronous writes as well).

Most applications do not depend for their integrity on writes making it to
disk immediately. For the few that do, Unix provides mechanisms to ensure
this behavior (or provide 'synch' points, which is an intermediate strategy
that can be a win for a third class of applications); for the rest, Unix
default mechanisms provide good performance with *no* decrease in
reliability.

About the only positive aspect of VMS's behavior is that an application that
*does* depend upon the ordering and timing of disk writes but *doesn't*
understand the fact that it depends on them may luck out and work correctly
after an interruption, whereas it may be less likely to on Unix. But I
don't think 'reliability' is the right word to apply to such a situation.

Go ahead, ask about the AdvFS restore we did a few
> months back after DU crashed before flushing the cache.

Submit a bug report and get on with your life: this is not a conceptual
deficiency, just an implementation error.

- bill

>
>
> --
> Keith Brown
> kbro...@usfamily.net

David A Froble

Jun 3, 2000, 3:00:00 AM
Bill Todd wrote:
>
> Keith Brown <kbro...@usfamily.net> wrote in message
> news:39388193...@usfamily.net...
>
> ...
>
> > How difficult is this in PL/I. What can be simpler?
>
> 1) Using a language that the average developer (who, after all, is exactly
> the developer who *won't* do much if anything in the way of optimization) is
> likely to select.

Stuff a sock in it Bill. If you're talking about C, then your average developer
is a mistake just waiting to happen, so what does it matter how fast the mistake
occurs?

> 2) Making the desired behavior transparent rather than requiring that it be
> specified (though making it easy to obtain, albeit explicitly, is certainly
> better than keeping it truly obscure).

When I transitioned from RSTS/E to VMS in the late 70s, RSTS was perceived by
its users to be extremely user friendly, and VMS was just soooooo complicated.
I soon discovered why RSTS was so user friendly. There were few options, just
one way to do things. Fortunately, in many cases the developers made good
design decisions and the 'one way' was a rather good way. I soon found that if
I kept an open mind (I know, you'll doubt that) VMS was a much better
environment, because I, and every other user/programmer/designer, could choose
from many options, and if they chose well, the result would be a better
application.

Not everyone is running Unix-style applications that benefit from file caching
on a single-user workstation with lots of extra memory. 'Desired behavior'
isn't always easily defined.

Enjoy reading your posts.

Dave

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. Fax: 724-529-0596
170 Grimplin Road E-Mail: da...@tsoft-inc.com
Vanderbilt, PA 15486

David A Froble

Jun 3, 2000, 3:00:00 AM
Bill Todd wrote:
>
> just an implementation error.
>
> - bill

Oh, I see. You're talking about Unix and C. :-)

David A Froble

Jun 3, 2000, 3:00:00 AM
Larry Kilgallen wrote:

>
> In article <393977DD...@tsoft-inc.com>, David A Froble <da...@tsoft-inc.com> writes:
> > Bill Todd wrote:
> >>
> >> just an implementation error.
> >>
> >> - bill
> >
> > Oh, I see. You're talking about Unix and C. :-)
>
> No. Case-sensitive filenames are not an implementation error,
> they are an error of design (or lack thereof).

Sorry Larry, I wasn't explicit enough. What I meant was that the implementation
of Unix and C was an error. :-)

Dan Sugalski

Jun 3, 2000, 3:00:00 AM
On Sat, 3 Jun 2000, David A Froble wrote:

> Larry Kilgallen wrote:
> >
> > In article <393977DD...@tsoft-inc.com>, David A Froble <da...@tsoft-inc.com> writes:
> > > Bill Todd wrote:
> > >>
> > >> just an implementation error.
> > >>
> > >> - bill
> > >
> > > Oh, I see. You're talking about Unix and C. :-)
> >
> > No. Case-sensitive filenames are not an implementation error,
> > they are an error of design (or lack thereof).
>
> Sorry Larry, I wasn't explicit enough. What I meant was that the implementation
> of Unix and C was an error. :-)

C'mon, that's not fair. Both Unix and C have quite a few very nice
features. That they're so badly applied (and have a number of rather
glaring design flaws) doesn't detract from the areas where they are good.
Pity nobody ever redid either from scratch and got them right, but
it's not like VMS doesn't have its share of quirks. (Granted, they're not
of the "let's open up my system because I have lousy security granularity
and a system that has no string handling capabilities" type, but they are
there...)

Dan