
Compaq not as bad as Andrew says (wish?)


Rudolf Wingert

May 25, 2000
Hello,

yesterday I read that Compaq is ranked second in the HPTC market
($5.8 billion). HP is number one with 23%, followed by Compaq, SGI, and
IBM; Sun is fifth with 12%. I think $5.8 billion is not the whole
market, but the segment with the best income.

Regards Rudolf Wingert

Terry C. Shannon

May 25, 2000

"Rudolf Wingert" <w...@fom.fgan.de> wrote in message
news:2000052505...@fom.fgan.de...

According to IDC numbers Compaq is Number Two in HPTC by less than one point
(the faltering SGI being Number One).

Compaq should gain Number One ranking in 2H00.

Keith Brown

May 25, 2000

Forgive my ignorance, what is HPTC?
--
Keith Brown
kbro...@usfamily.net

Terry C. Shannon

May 26, 2000

"Keith Brown" <kbro...@usfamily.net> wrote in message
news:392DD22E...@usfamily.net...

Easy... High Performance Technical Computing!

cheers,

terry s

David Mathog

May 26, 2000
In article <Fv58C...@world.std.com>, "Terry C. Shannon" <sha...@world.std.com> writes:
>> >
>> > Compaq should gain Number One ranking in 2H00.
>>
>> Forgive my ignorance, what is HPTC?
>
>Easy... High Performance Technical Computing!
>

By definition though, none of this is OpenVMS. It refers to the huge
Tru64 and Linux/Alpha farms that places like Celera run. Go visit
the HPTC pages and nary a word about OpenVMS will you find.

http://www.digital.com/hpc/

Compaq is absolutely not interested in selling OpenVMS for this market. If
they were, they would keep the compiler features on par with Tru64 (the C
compiler for Tru64 has profile-based optimization, and all the libraries are
available compiled to take advantage of the latest processors). They would
also deal with the lack of automatic file caching, which no amount of RMS
fiddling can make up for; such caching leads to dramatic increases in
throughput in most instances. (Data integrity is not much of an issue in
this market - most of the computing is data in, crunch, data out, and if the
power fails in the middle you just start over again.)
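The "data in, crunch, data out" workload described above can be sketched as follows. This is a hypothetical illustration (the file names and the square-the-input compute step are invented for the example, not taken from any real HPTC site): because a crash simply means rerunning the job, no fsync, journaling, or careful integrity options are needed.

```python
import os
import tempfile

def crunch(records):
    # Placeholder compute step: square each input value.
    return [x * x for x in records]

def run_job(in_path, out_path):
    # Data in: read the whole input; no locking, no recovery logic.
    with open(in_path) as f:
        records = [int(line) for line in f]

    results = crunch(records)

    # Data out: write to a temp file, then rename into place. A crash
    # mid-run leaves no usable partial output -- you just rerun the job,
    # so there is no need for fsync or any durability machinery.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(out_path) or ".")
    with os.fdopen(fd, "w") as f:
        for r in results:
            f.write(f"{r}\n")
    os.replace(tmp, out_path)
```

The rename at the end is the only concession to failure: output either exists complete or not at all, and "not at all" is handled by starting over.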

The irony is that OpenVMS was the HPTC workhorse of the 80's, and it was
probably that market which enabled it to grow into the "Enterprise" class
OS that Compaq says it is today.

Regards,

David Mathog
mat...@seqaxp.bio.caltech.edu
Manager, sequence analysis facility, biology division, Caltech
**************************************************************************
* RIP VMS *
**************************************************************************

David A Froble

May 26, 2000
David Mathog wrote:
>
> By definition though, none of this is OpenVMS. It refers to the huge
> Tru64 and Linux/Alpha farms that places like Celera run. Go visit
> the HPTC pages and nary a word about OpenVMS will you find.
>
> http://www.digital.com/hpc/
>
> Compaq is absolutely not interested in selling OpenVMS for this market. If
> they were they would keep the compiler features on par with Tru64 (the C
> compiler for Tru64 has profile based optimization and all the libraries are
> available compiled to take advantage of the latest processors). They would
> also deal with the lack of automatic file caching, which no amount of RMS
> fiddling will make up for and leads to dramatic increases in throughput in
> most instances. (Data integrity is not much of an issue in this market -
> most of the computing is data in, crunch, data out, and if the power fails
> in the middle you just start over again.)
>
> The irony is that OpenVMS was the HPTC workhorse of the 80's, and it was
> probably that market which enabled it to grow into the "Enterprise" class
> OS that Compaq says it is today.

The problem is not with VMS, but with C programs written to use Unix
capabilities. If the same application were written to use VMS's capabilities,
it should match the performance of Tru64, and in many cases exceed it. I'd
rather use global sections than file caching in some cases. Too many
capabilities to start a list here.

As for the evolution of VMS, the earliest systems in 1978 were more suited to
scientific computing. It was the input of the business users that helped VMS
grow into an enterprise class OS. Things like BACKUP, extensive print/batch
queue capabilities, the data integrity you don't seem to care about.

Dave

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. Fax: 724-529-0596
170 Grimplin Road E-Mail: da...@tsoft-inc.com
Vanderbilt, PA 15486

Bill Todd

May 28, 2000
David A Froble <da...@tsoft-inc.com> wrote in message
news:392EB8D4...@tsoft-inc.com...

...

> The problem is not with VMS, but with C programs written to use Unix
> capabilities. Should the same application be written to use VMS's
> capabilities, it should match the performance of T64, and many times
> exceed T64.

Despite appearances, I'm enough of a VMS bigot myself to be inclined to
agree that an optimized application on VMS should usually be able to match
or exceed the performance of an optimized application on <pick your Unix>.
However, the skill set required to optimize an application on VMS is in
radically shorter supply than the skill set required to optimize an
application on most Unixes, and if you're willing to settle for a
good-but-not-literally-optimal approach (which in the real world is almost
always the case) the gap in availability of skill sets widens even more.
And it is reasonably arguable that the complexity of optimization, or even
near-optimization, on VMS is greater in an absolute sense, independent of
familiarity to the masses.

So the real problem is that VMS doesn't provide an environment in which the
skill sets of the people who write and use these applications can be used
effectively: if it did, then it might well remain an overall-effective
solution for them.

> I'd rather
> use global sections than file caching in some cases.

Care to list them? Even for simple cross-application read-caching, global
sections are a bit of a pain compared to a central system cache: must be
set up, torn down, and sized explicitly on an individual basis, don't
balance activity across independent application sets to achieve best overall
system throughput. For write-back caching, add the need to figure out when
and how often to flush them (Unix typically flushes automatically by default
at 30-second intervals, and some variants tweak this mechanism to attain
improved on-disk file contiguity and eliminate disk writes entirely for
files deleted before they're flushed). If you're willing to use 'locate
mode' access to operate on the buffer contents directly instead of through
RMS's normal record interface (not sure you could do this for write access
in a global buffer, though) you could assert that this saves a copy
operation, and in any event you save a system call per 'record' - but
crossing the system interface is a lot less expensive than it used to be.
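The setup/teardown burden described above can be illustrated with a modern analogue: a named shared-memory segment, similar in spirit to a VMS global section, which the application must explicitly create, size, attach to, and destroy, in contrast to a central system cache that is simply there. A minimal sketch (the segment name and size are invented for the example; this is not VMS code):

```python
from multiprocessing import shared_memory

SEG_NAME = "gsec_demo_4821"   # hypothetical segment name
SEG_SIZE = 4096               # must be sized explicitly, up front

def writer():
    # Explicit setup: the application creates and sizes the segment itself.
    seg = shared_memory.SharedMemory(name=SEG_NAME, create=True, size=SEG_SIZE)
    seg.buf[:5] = b"hello"
    seg.close()

def reader():
    # Every cooperating process must attach to the segment by name.
    seg = shared_memory.SharedMemory(name=SEG_NAME)
    data = bytes(seg.buf[:5])
    seg.close()
    seg.unlink()              # explicit teardown, or the segment lingers
    return data
```

Note that the name, the size, and the lifetime are all the application's problem, and the fixed-size segment cannot shrink or grow with system-wide demand the way a central cache can.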

> Too many capabilities to
> start a list here.
>
> As for the evolution of VMS, the earliest systems in 1978 were more suited
> to scientific computing. It was the input of the business users that helped
> VMS grow into an enterprise class OS. Things like BACKUP, extensive
> print/batch queue capabilities, the data integrity you don't seem to care
> about.

I can't remember a time, even in 1978, when anyone could have said that VMS
wasn't as concerned about data integrity as it presumably remains today.
And while backup utilities had a checkered history dating back to the 11, it
wasn't for lack of trying to make them as solid as possible (some
implementors were just less experienced than later ones).

It's true that VMS became *more* suited to business computing in areas other
than data integrity as time went on. But I don't see that it ever became
*less* suited to scientific computing - save in the area of platform
price-competitiveness, subsequent lapse into unfamiliarity, and then, due to
the resulting lack of market interest, a less-than-aggressive attitude
toward matching new features developed elsewhere.

- bill

JF Mezei

May 28, 2000
Bill Todd wrote:
> However, the skill set required to optimize an application on VMS is in
> radically shorter supply than the skill set required to optimize an
> application on most Unixes, and if you're willing to settle for a
> good-but-not-literally-optimal approach

I disagree significantly. There is a HUGE pool of available VMS expertise.
The problem is that, due to the lack of VMS work (because customers have left
or stopped improving their VMS systems over the years), most have found work
elsewhere and neither advertise their VMS capabilities nor seek VMS work,
because of its "Palmer is killing VMS" image. VMS is still seen as a "legacy"
expertise, while "NT" is seen as "hire anyone who has written 'NT' in their CV".

Once you have a VMS system that is tuned and operating nicely with no software
upgrades, there isn't much work needed to keep it running, and few very
experienced folks would be interested in such work anyways.

But set up a challenging VMS shop with serious application
deployment/development, and you might get many of those ex-VMS experts out of
the woodwork.

John E. Malmberg

May 28, 2000
Bill Todd wrote:
> However, the skill set required to optimize an application on VMS is in
> radically shorter supply than the skill set required to optimize an
> application on most Unixes, and if you're willing to settle for a
> good-but-not-literally-optimal approach

Actually the skill set required to optimize an application on any platform
is in short supply.

Most places find it faster or cheaper to buy faster servers or add more
servers instead of actually paying for quality improvements to their
systems. And proper optimization can take time during which you need the
applications running.

The exception to this is when you hit a wall where you cannot purchase
faster hardware. Then you must make it work. That is where the real
expertise is needed.


J.F. Mezei wrote:

> I disagree significantly. There is a HUGE basin of available VMS expertise.
> The problem is that due to lack of VMS work (because customers have left or
> stopped improving their VMS systems over the years), most have found work
> elsewhere and don't advertise their VMS capabilities nor seek work in VMS
> because of its "Palmer is killing VMS" image. VMS is still seen as a "legacy"
> expertise while "NT" is seen as "hire anyone who has written "NT" in their CV".

Companies with good pay and work environments tend to retain top people.
They can take the time to hire promising beginners and train them. Thus
they do not need to hire "experts", they keep them.

At the sites I have been at, for VMS work we never recruited specifically for
VMS people; we hired programmers and, with little effort, oriented them on VMS.

The experienced people that we hired or contracted were always from "word of
mouth", not professional recruiting.

And there are plenty of advertisements that I have seen for NON-VMS work
where they state that experience on VMS is one of the preferred credentials.
Very easy to find on any job board.

-John
wb8...@qsl.network

Bill Todd

May 28, 2000
Definitely a valid point, unlike JF's, which in asserting that getting VMS
expertise into a *single* shop might not be too difficult avoided addressing
the point he was supposedly disagreeing with, which was that the *absolute*
supply of VMS expertise is considerably smaller than that of Unix expertise,
no matter how you slice it.

Not to mention the related point that you need *more* expertise to get good
performance out of VMS than out of Unix, which gives most applications good
performance right out of the box.

And of course that last dovetails neatly with your own observation that
people seldom bother with much performance optimization: for such typical
non-performance-optimized applications, Unix therefore makes more efficient
use of the hardware than VMS does, hence is correctly perceived as more
cost-effective.

- bill

John E. Malmberg <wb8...@qsl.net> wrote in message
news:066e01bfc8e6$4c0ac230$020a...@xile.realm...

David A Froble

May 28, 2000
Not going to let that one slide by.

Bill Todd wrote:
>
> Definitely a valid point, unlike JF's, which in asserting that getting VMS
> expertise into a *single* shop might not be too difficult avoided addressing
> the point he was supposedly disagreeing with, which was that the *absolute*
> supply of VMS expertise is considerably smaller than that of Unix expertise,
> no matter how you slice it.
>
> Not to mention the related point that you need *more* expertise to get good
> performance out of VMS than out of Unix, which gives most applications good
> performance right out of the box.

On what do you base this claim? My perspective is that VMS has a better
development environment, thus allowing better applications with less expertise.
Good VMS applications give good performance right out of the box. Since the
development environment is friendly, good VMS applications are rather easy to
produce.

> And of course that last dovetails neatly with your own observation that
> people seldom bother with much performance optimization: for such typical
> non-performance-optimized applications, Unix therefore makes more efficient
> use of the hardware than VMS does, hence is correctly perceived as more
> cost-effective.

Again, a rationalization with no supporting facts. On what do you base the
claim that Unix makes more efficient use of the hardware? Your posts are
starting to sound like wishful opinion, or outright trolls. For someone who
catches others making claims without substantiating them, you seem to be
following right in their footsteps.

You're probably going to now find a post of mine that did this. Fine. I'm
probably guilty. However, I will issue this challenge. An application that
does not favor either environment, if there is such, will be more easily and
quickly developed on VMS than on Unix. I'm willing to bet a buck on it, and do
the VMS side.

Bill Todd

May 28, 2000
Apologies - it's been 13+ years since I used RMS, and considerably longer
since I've thought about some of its more obscure options.

Turns out locate mode is not applicable to global buffers at all, read or
write - in fact, it only works for read access even when using local
buffers. So if you want shared buffers, you can't avoid a copy operation,
whether you're using VMS or Unix.

RMS does make use of global buffers fairly easy, though, even if it isn't
transparent and the space can't be used as flexibly as a central system
cache can.

And returning to an earlier point, the documentation for
read-ahead/write-behind states that it applies only to non-shared Sequential
files (my recollection, which I couldn't verify in a quick search, is that
you still may be able to provide multiple buffers for indexed files and
obtain some LRU caching value - e.g., for upper index levels - but not
read-ahead operation).
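The LRU effect recalled above - a pool of buffers that ends up retaining the most recently touched index blocks - can be sketched generically. This is an illustration of LRU buffering in general, not of RMS internals; the class name and the read-from-"disk" callback are invented for the example:

```python
from collections import OrderedDict

class BufferPool:
    """Tiny LRU cache standing in for a pool of file buffers."""

    def __init__(self, nbuffers, read_block):
        self.nbuffers = nbuffers      # like a multibuffer count
        self.read_block = read_block  # fallback: fetch the block from "disk"
        self.bufs = OrderedDict()     # blockno -> data, oldest first
        self.disk_reads = 0

    def get(self, blockno):
        if blockno in self.bufs:
            self.bufs.move_to_end(blockno)   # hit: mark most recently used
            return self.bufs[blockno]
        self.disk_reads += 1                 # miss: go to "disk"
        data = self.read_block(blockno)
        self.bufs[blockno] = data
        if len(self.bufs) > self.nbuffers:
            self.bufs.popitem(last=False)    # evict the least recently used
        return data
```

With such a pool, blocks touched on every lookup (like upper index levels) stay resident while rarely revisited blocks churn through the remaining buffers - caching value without any read-ahead.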

- bill

Bill Todd <bill...@foo.mv.com> wrote in message
news:8grolb$jg1$1...@pyrite.mv.net...


> David A Froble <da...@tsoft-inc.com> wrote in message
> news:392EB8D4...@tsoft-inc.com...
>
> ...
>
> > The problem is not with VMS, but with C programs written to use Unix
> > capabilities. Should the same application be written to use VMS's
> > capabilities, it should match the performance of T64, and many times
> > exceed T64.
>
> Despite appearances, I'm enough of a VMS bigot myself to be inclined to
> agree that an optimized application on VMS should usually be able to match
> or exceed the performance of an optimized application on <pick your Unix>.

> However, the skill set required to optimize an application on VMS is in
> radically shorter supply than the skill set required to optimize an
> application on most Unixes, and if you're willing to settle for a

Bill Todd

May 28, 2000

David A Froble <da...@tsoft-inc.com> wrote in message
news:3931972C...@tsoft-inc.com...

> Not going to let that one slide by.
>
> Bill Todd wrote:
> >
> > Definitely a valid point, unlike JF's, which in asserting that getting
> > VMS expertise into a *single* shop might not be too difficult avoided
> > addressing the point he was supposedly disagreeing with, which was that
> > the *absolute* supply of VMS expertise is considerably smaller than that
> > of Unix expertise, no matter how you slice it.
> >
> > Not to mention the related point that you need *more* expertise to get
> > good performance out of VMS than out of Unix, which gives most
> > applications good performance right out of the box.
>
> On what do you base this claim?

I'm afraid that in your zeal to defend the honor of VMS, you seem to have
lost sight of the context in which this part of the discussion evolved (even
though you participated in it yourself): the very specific area of Unix's
automated file caching vs. VMS's lack of same.

That this gives typical applications better performance on Unix is a
no-brainer. It has other consequences that some people here may believe are
pernicious, but lack of performance is not one of them.

> My perspective is that VMS has a better
> development environment, thus allowing better applictions with less
> expertize.

There are areas in which I would not dispute this assertion, but file-system
performance is not one of them (at least in comparison with Unixes that
safely defer meta-data updates as well as user data updates, whether by use
of logs or 'soft update' mechanisms).

> Good VMS applications give good performance right out of the box.

'Good' is a subjective term, so let's just say that default, unoptimized
Unix file system performance is noticeably better than default, unoptimized
VMS file system performance for typical applications, mostly due to Unix's
default system caching. (My vague recollection is that VMS can be configured
to read-cache at the disk level - possibly only in non-clustered
environments, though that may no longer be a restriction - which helps some,
though the path to such a cache is considerably longer than that to a
file-level cache, and in any case it doesn't cover write-back caching.)

And at least some Unix file systems (SGI's and Veritas' - don't happen to
know details about others in this area) allow the cache to be bypassed
('direct I/O') when desired - e.g., to avoid copying overheads on large
transfers. Veritas, in fact, does this automatically above a specifiable
transfer size (256 KB by default), and SGI's XFS may as well, avoiding any
special application coding to cover this case.
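The size-threshold policy described above - cached I/O for small transfers, cache-bypassing direct I/O for large ones - can be sketched as a flag-selection function. This is a policy illustration only (the function name is invented, and the 256 KB constant is simply the default cited above, not Veritas code); real direct I/O on Linux additionally imposes alignment rules on buffers and offsets:

```python
import os

DIRECT_IO_THRESHOLD = 256 * 1024   # 256 KB, the default cited for Veritas

def transfer_flags(nbytes, base=os.O_RDONLY):
    """Choose open(2) flags for a transfer of nbytes (policy sketch only).

    Small transfers go through the system cache as usual; large ones
    bypass it to avoid the copying overhead, where the platform supports
    O_DIRECT (it is Linux-specific, hence the hasattr guard).
    """
    if nbytes >= DIRECT_IO_THRESHOLD and hasattr(os, "O_DIRECT"):
        return base | os.O_DIRECT   # bypass the page cache
    return base
```

Putting the decision in one place means applications need no special coding for the large-transfer case, which is exactly the convenience claimed for doing it automatically in the file system.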

> Since the
> development environment is friendly, good VMS applications are rather
> easy to produce.
>
> > And of course that last dovetails neatly with your own observation that
> > people seldom bother with much performance optimization: for such
> > typical non-performance-optimized applications, Unix therefore makes
> > more efficient use of the hardware than VMS does, hence is correctly
> > perceived as more cost-effective.
>
> Again, a rationalization with no supporting facts. On what do you base
> the claim that Unix makes more efficient use of the hardware?

I obviously should have repeated, about every other sentence, the fact that
these observations were made in the context of the file-caching comment
originated in David Mathog's post. In that context, Unix indubitably makes
more efficient default use of the hardware in any environment where file I/O
is significant to performance.

> Your posts are
> starting to sound like wishful opinion, or outright trolls. For someone
> who catches others making claims without substantiating them, you seem
> to be following right in their footsteps.

No, I'm just letting David Mathog's observations about relative performance
stand. One specific example he gave was compilation performance, but I
suspect he could supply others (and not being a Unix user I can't, though I
know enough about what's happening under the covers not to have any
hesitation about drawing conclusions about performance from it, given even
moderate external confirmation).

If you want to learn something, pay attention. If you'd rather just call
names, go ahead.

- bill

>
> You're probably going to now find a post of mine that did this. Fine.
> I'm probably guilty. However, I will issue this challenge. An
> application that does not favor either environment, if there is such,
> will be more easily and quickly developed on VMS than on Unix. I'm
> willing to bet a buck on it, and do the VMS side.
>

John E. Malmberg

May 28, 2000

Bill Todd <bill...@foo.mv.com> wrote
in message news:8gs3ga$jbg$1...@pyrite.mv.net...

> Definitely a valid point, unlike JF's, which in asserting that getting VMS
> expertise into a *single* shop might not be too difficult avoided
> addressing the point he was supposedly disagreeing with, which was that
> the *absolute* supply of VMS expertise is considerably smaller than that
> of Unix expertise, no matter how you slice it.

Ok, let's address it. Computer systems, being physical machines, are still
ultimately bound by the laws of physics. What kind of expert would be
better?

One who knows those laws but not VMS, and has the discipline to RTFM and
actually understand the manuals?

Or one who knows VMS from experience, but does not actually understand the
physics?

> Not to mention the related point that you need *more* expertise to get
> good performance out of VMS than out of Unix, which gives most
> applications good performance right out of the box.

Yes UNIX can give most (not all) applications good performance. But when
you understand why, it gives you a better perspective on when to recommend
each platform.

However *more* expertise is a term I would not use. Most of the techniques
I used to use with VMS tuning came from formulas in an IBM VM/SP tuning
guide. That was when it was possible to get data to/from your paging disks
faster than the CPU could use it. Now CPUs are outrunning the disks by too
much.

Most of the tuning parameters of OpenVMS are well documented. Someone
familiar with their application and computer systems can RTFM and find this
out.

I went to several UNIX experts to find out why all systems of a particular
brand suddenly died one day. They did not know. I found out why by using an
Ethernet monitor.

It turned out to be the equivalent of exhausted non-paged pool. A default
configuration was causing the systems to download the routing tables from
the corporate router; I guess for "performance" they stored the tables in
the non-paged pool.

Yes, I talked to the vendor's support line. For my efforts I got sent
copies of two articles that had nothing to do with the problem statement.

I contend that true tuning expertise in other platforms is as rare as it is
in VMS. A higher market share for UNIX and M$SOFT attracts more pretenders.

> And of course that last dovetails neatly with your own observation that
> people seldom bother with much performance optimization: for such typical
> non-performance-optimized applications, Unix therefore makes more
> efficient use of the hardware than VMS does, hence is correctly perceived
> as more cost-effective.

I do not know if that perception is always correct. :-)

-John
wb8...@qsl.network


Keith Brown

May 28, 2000
Bill Todd wrote:
>
> Apologies - it's been 13+ years since I use RMS, and considerably longer
> since I've thought about some of its more obscure options.
>
> Turns out locate mode is not applicable to global buffers at all, read or
> write - in fact, it only works for read access even when using local
> buffers. So if you want shared buffers, you can't avoid a copy operation,
> whether you're using VMS or Unix.
>
> RMS does make use of global buffers fairly easy, though, even if it isn't
> transparent and the space can't be used as flexibly as a central system
> cache can.
>
> And returning to an earlier point, the documentation for
> read-ahead/write-behind states that it applies only to non-shared Sequential
> files (my recollection, which I couldn't verify in a quick search, is that
> you still may be able to provide multiple buffers for indexed files and
> obtain some LRU caching value - e.g., for upper index levels - but not
> read-ahead operation).
>
> - bill
>
> Bill Todd <bill...@foo.mv.com> wrote in message
> news:8grolb$jg1$1...@pyrite.mv.net...


Bill,

If you use HSxx controllers on your systems, whether Unix or VMS (we use
them on both at my shop), the file caching issue is moot because the
controller does it.

--
Keith Brown
kbro...@usfamily.net

Keith Brown

May 28, 2000
Bill Todd wrote:
>
> Definitely a valid point, unlike JF's, which in asserting that getting VMS
> expertise into a *single* shop might not be too difficult avoided addressing
> the point he was supposedly disagreeing with, which was that the *absolute*
> supply of VMS expertise is considerably smaller than that of Unix expertise,
> no matter how you slice it.
>
> Not to mention the related point that you need *more* expertise to get good
> performance out of VMS than out of Unix, which gives most applications good
> performance right out of the box.
>
> And of course that last dovetails neatly with your own observation that
> people seldom bother with much performance optimization: for such typical
> non-performance-optimized applications, Unix therefore makes more efficient
> use of the hardware than VMS does, hence is correctly perceived as more
> cost-effective.
>
> - bill
>
> John E. Malmberg <wb8...@qsl.net> wrote in message
> news:066e01bfc8e6$4c0ac230$020a...@xile.realm...
> > Bill Todd wrote:
> > > However, the skill set required to optimize an application on VMS is in
> > > radically shorter supply than the skill set required to optimize an
> > > application on most Unixes, and if you're willing to settle for a
> > > good-but-not-literally-optimal approach
> >

At my shop we can't seem to find good Unix people; we have looked for
years. I can't say the same for VMS. I'm sure some will respond that we
must not offer enough $, but we do offer what the VMS guys get.

--
Keith Brown
kbro...@usfamily.net

Bill Todd

May 28, 2000

Keith Brown <kbro...@usfamily.net> wrote in message
news:3931BE9C...@usfamily.net...

...

> Bill,
>
> If you use HSxx controllers on your systems whether it be Unix
> or VMS (we use them on both at my shop) the file caching issue
> is mute because the controller does it.

Hardware to the rescue! But as always that's a relatively expensive way to
obtain performance when software can do the job.

Not that it always can: when you want guaranteed persistence coupled with
fast writes at high volume, stable write-back caching is the only real
solution (as long as there are lulls during which you can dump the data to
disk). But you need to double up on the controllers, each with stable write
cache, and have them communicate to avoid losing dirty data due to a cache
failure (assuming you're using some kind of redundancy at the actual disk
level to guard against single points of failure), and that gets expensive
compared to software alternatives that require no special hardware (save for
a sliver of NVRAM somewhere to make RAID recovery fast after an
interruption - which you actually only need if you can't tolerate the
latency of writing a log record for each write request that can't be
batch-logged with other contemporaneous write requests).
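The batch-logging idea in the parenthetical above - amortizing one log write across contemporaneous write requests - is the classic group-commit technique. A minimal sketch (the class and its in-memory "log" list are invented for illustration; a real implementation would write the batched record to stable storage and only then acknowledge the requests):

```python
class GroupCommitLog:
    """Batch contemporaneous write requests into one log record."""

    def __init__(self):
        self.pending = []   # requests awaiting the next log write
        self.log = []       # the "on-disk" log: one entry per batch

    def submit(self, request):
        # Requests merely queue up; no per-request log latency is paid.
        self.pending.append(request)

    def flush(self):
        # One appended log record covers every request queued since the
        # last flush, amortizing the write across all of them.
        if self.pending:
            self.log.append(tuple(self.pending))
            self.pending.clear()
```

The trade is exactly the one described: each request waits for the shared flush rather than getting its own log write, so latency per request rises slightly while total log traffic, and the need for NVRAM to hide it, drops.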

Until hardware RAID (especially including copious amounts of stable cache in
a no-single-point-of-failure dual configuration) prices come *'way* down,
systems that ensure file system integrity while optionally trading off
up-to-the-second persistence for overall better performance will remain
popular for applications that demand no stronger guarantees. This may not
make a great deal of sense, given how completely the cost of operating and
managing systems dwarfs the cost of purchasing them, but up-front system
price shows little indication of becoming irrelevant in typical environments
in spite of this.

- bill

>
> --
> Keith Brown
> kbro...@usfamily.net

Bill Todd

May 29, 2000

Keith Brown <kbro...@usfamily.net> wrote in message
news:3931BF74...@usfamily.net...

...

> At my shop we can't seem to find good Unix people, we have
> looked for years. I can't say the same for VMS. I'm sure will
> respond that we must not offer enough $ but we do offer what the
> VMS guys get.

Interesting observation. Without pretending that it constitutes more than
anecdotal evidence, one might guess at a couple of reasons.

1) Someone professing to be competent in VMS is more likely to be close to
the truth than someone professing to be competent in Unix (hell, virtually
*everyone* who has used any form of Unix more than superficially probably
thinks they can profess to competence, whereas a casual VMS user may have a
bit more respect for what 'competence' really means).

2) Supply and demand dictate that the market for competence is hotter for
Unix people than for VMS people (by now, people who *want* to work with VMS
may be getting happy to find *any* reasonable position - note the comment
elsewhere that turnover in one company's VMS staff is virtually nil).

Bill Todd

May 29, 2000

John E. Malmberg <wb8...@qsl.net> wrote in message
news:sj3b2g...@corp.supernews.com...

...

> Yes UNIX can give most (not all) applications good performance. But when
> you understand why, it gives you a better perspective on when to recommend
> each platform.

Of course. But for *most* applications, as you note, Unix provides an
environment that does not demand such understanding (or any particularly
significant expertise) to get relatively good performance. (See also David
Mathog's recent thread on 'RMS tuning versus file caching' for a specific
example in addition to the compilation speed issue he raised earlier.)

...

> Most of the tuning parameters of OpenVMS are well documented. Someone
> familiar with their application and computer systems can RTFM and find
> this out.

But in most cases on Unix they don't have to.

The point (which I overlooked in my initial comments about the relative
availability of 'expertise', but which you brought up yourself) is that
developers with *minimal* expertise can create typical (disk-bound - this
discussion started as a file system issue) applications that perform better
on Unix than equivalent applications created by similarly-inexpert VMS
developers perform running on VMS, due to the difference in default file
system approaches taken by the two systems.

So the competition is this: people who were exposed to Unix during their
schooling can create Unix applications that perform well without having to
RTFM, whereas to create an application on VMS a developer *first* must get
at least minimally acquainted with the system (since s/he likely did not
encounter it in school) and *then* must RTFM - after first searching out the
right one(s) in the 5-foot shelf - if s/he wants the application to perform
as well as it would on Unix.

Which system are developers likely to gravitate toward?

>
> I went to several UNIX experts to find out why all systems of a particular
> brand suddenly died one day. They did not know. I found out why by using
> a ethernet monitor.
>
> It turned out that it was the equivalent of a non-page pool exhausted. It
> turned out a default configuration was causing it to download the routing
> tables from the corporate router. I guess for "performance" it stored
> them in the non-page pool.
>
> Yes, I talked to the vendor's support line. For my efforts I got sent
> copies of two articles that had nothing to with the problem statement.

The above is an interesting commentary, but seems completely irrelevant to
application development (which if you'll look back through the earlier posts
in this thread is the context of the discussion, especially as related to
obtaining good performance from the file systems).

It is related to the question of 'expertise' (though in *system*-level
issues), but I think you have convinced me that 'expertise' is not
particularly important to the application development discussion, except as
something that most developers lack.

>
> I contend that true tuning expertise in other platforms is as rare as it
> is in VMS. A higher market share for UNIX and M$SOFT attracts more
> pretenders.

Possibly. *Application* tuning expertise is arguably less necessary to
obtain a given level of performance in Unix environments than in VMS
environments, so the rarer its existence is, the worse VMS looks (since it
needs it more than Unix).

*System* tuning is a rather different animal, and my guess would be that
real experts are extremely rare for virtually any system (though
self-professed Unix and NT experts may not be). If (as per your example)
Unix systems are *frequently* prone to handle unexpected situations less
than gracefully, then this represents a real VMS strength (since lacking
such expertise VMS continues to run where Unix does not), just not one that
is relevant to this particular thread.

- bill

>
> > And of course that last dovetails neatly with your own observation that
> > people seldom bother with much performance optimization: for such typical
> > non-performance-optimized applications, Unix therefore makes more
> > efficient use of the hardware than VMS does, hence is correctly perceived
> > as more cost-effective.
>

Rob Young

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
In article <3931BF74...@usfamily.net>, Keith Brown <kbro...@usfamily.net> writes:

>
> At my shop we can't seem to find good Unix people; we have
> looked for years. I can't say the same for VMS. I'm sure some will
> respond that we must not offer enough $, but we do offer what the
> VMS guys get.
>

VMS folks are hard to find, but I know Unix admins are
very difficult to find as well. I know two contractors that are
in at a decent sized shop (multiple IBM RS/6000 S80s) and management
would prefer full-time sysadmins but can't find them. A lot of that
going around. Want to flush admins out of the weeds regardless
of platform? Contract for them.

Could they get full-time sysadmins? Sure. Unfortunately, the
salary that requires puts their sysadmins' pay higher than
management/directors'. That can't happen or you get unhappy
directors/management, so the only way out of the catch-22 is to
hire contractors and hide the cost in POs.

Rob


Larry Kilgallen

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
In article <8gspck$pr3$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:

> Until hardware RAID (especially including copious amounts of stable cache in
> a no-single-point-of-failure dual configuration) prices come *'way* down,
> systems that ensure file system integrity while optionally trading off
> up-to-the-second persistence for overall better performance will remain
> popular for applications that demand no stronger guarantees. This may not
> make a great deal of sense, given how completely the cost of operating and
> managing systems dwarfs the cost of purchasing them, but up-front system
> price shows little indication of becoming irrelevant in typical environments
> in spite of this.

Organizational voodoo dictates in many cases that capital and personnel
expenditures come from separate buckets. Thus it makes sense on the
micro scale (following company procedure so the individual manager
can advance) but not on the macro scale (good of the company overall).

Larry Kilgallen

unread,
May 29, 2000, 3:00:00 AM5/29/00
to

So we have the free market economy in action, scarcity/demand drives
prices, etc. Some directors/management are psychologically unwilling
to accept this, since they thought _they_ were supposed to be the ones
in demand. But the company they represent pays the prevailing price
anyway, or does without.

Life is tough when techies prevail.

Goku

unread,
May 29, 2000, 3:00:00 AM5/29/00
to
Hey guys

I do in fact work as a VMS op for Compaq.
They aren't getting rid of this, just not extending it to new
companies.

Thanks


* Sent from RemarQ http://www.remarq.com The Internet's Discussion Network *
The fastest and easiest way to search and participate in Usenet - Free!


steven...@quintiles.com

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

Bill,
Your comments are (like a lot of what we all say here in comp.os.vms/info-vax)
very subjective.

Whilst it may be tricky to get the last gasp of performance out of a VMS system,
it is likely the same on _all_ platforms. This is why we are going through the
present cycle of "we need more horsepower to do this job so we'll buy another/a
bigger box".

This is also yet another reason why bloatware from a certain company is accepted
by the industry. It needs more CPU and more disk so the PHM's view is that
they'll just buy more disks and faster systems.

Nobody in my experience tries to get the last bit of performance out of any
application which they have bought in. A very limited number of operating
system writers or application writers probably go the extra mile to get the
maximum performance. One also has to balance out:
- the cost in terms of manpower to tune the application that final little bit;
- the cost of that faster system or that extra system;
- the cost in the future of losing something because you've taken a shortcut to
get performance and it's compromised either the system or the application data.

Most managers in most companies are forced to go for the bigger system.

Steve.

Bill Todd wrote:
>>>Not to mention the related point that you need *more* expertise to get good
performance out of VMS than out of Unix, which gives most applications good
performance right out of the box.

And of course that last dovetails neatly with your own observation that

David Mathog

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
In article <8gs3ga$jbg$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:
>
>And of course that last dovetails neatly with your own observation that
>people seldom bother with much performance optimization: for such typical
>non-performance-optimized applications, Unix therefore makes more efficient
>use of the hardware than VMS does, hence is correctly perceived as more
>cost-effective.

Exactly. And it all boils down to (essentially) a SINGLE difference
between the OSes. On a typical lightly loaded, memory-rich workstation,
Linux (and probably most other Unices, but I can't say for sure)
automatically utilizes the unused portions of memory to cache file
operations. This results in huge increases in write performance and a
substantial increase in read performance in a typical workstation job mix,
which consists of a lot of small files being written, read, and then often
deleted. Only when file sizes reach up into the hundreds of megabytes or
the system becomes heavily loaded do the returns for this automatic system
break down. At that point RMS tuning on OpenVMS can outperform the Unix
workstations. However, in the normal mix of things for lightly loaded
machines, the "out of the box" Unix configurations do "disk I/O" about an
order of magnitude faster than VMS does. That's in quotes because
oftentimes the data never hits the disk. You can tune VMS processes and
programs to give better performance, but on the lightly loaded systems at
best you can get really close on the read speeds and you can never come
anywhere close on the write speeds (defining write as "program wrote data
to 'disk' successfully", without worrying about whether the data ever hit
the physical disk). And you had to work at it. Whereas on Linux it just
happened automatically. Consequently, something as simple as

% tar xf whatever.tar
% cd whatever
% make

runs blindingly fast on Unix, and crawls on VMS. And here I'm talking
about two nearly identical DS10s, one running RedHat 6.2 and the other
OpenVMS 7.2-1. (It doesn't help that the Elsa graphics card and DECterms
don't scroll very quickly - sometimes you're just waiting for the text to
scroll through.)

Typically in these "lightly loaded" environments, the safety of having the
data hit the disk is of little importance. So VMS really is slow and Linux
(Unix) is fast for typical programmer/technical workstation workloads.
And Linux gets this boost without anybody having to twiddle RMS parameters,
which is a good thing, because, well, WHY NOT make good use of the unused
memory in a system? Lord knows you pay enough for it on a DS10!
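The write-behind behavior David describes can be sketched from the Unix side (an illustrative Python sketch of the POSIX calls involved; the file name and contents are made up):

```python
import os

# Create a small scratch file, much as a compiler or 'tar x' would.
fd = os.open("scratch.tmp", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)

data = b"object code goes here\n"

# The write "succeeds" as soon as the kernel has copied the bytes
# into the page cache; the physical disk may not be touched yet.
written = os.write(fd, data)
assert written == len(data)

# Only an explicit fsync guarantees the data is on stable storage.
# This is the step the fast "out of the box" path happily defers.
os.fsync(fd)
os.close(fd)

# The data is readable back regardless of when it hit the platter.
with open("scratch.tmp", "rb") as f:
    assert f.read() == data
os.remove("scratch.tmp")
print("write completed from cache; fsync forced it to disk")
```

On a lightly loaded box the os.write alone is enough for the file to be visible to the next stage of a build, which is why the tar/make sequence above flies on Linux.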

Rob Young

unread,
May 30, 2000, 3:00:00 AM5/30/00
to

So things aren't moving fast enough for you?

Caching gets much improved in the next go-round of VMS (unless
I've mixed up roadmaps or am misremembering). How did this happen?
A senior engineer at a DECUS explained: "remember, VIOC was just
a stop-gap measure... it wasn't supposed to be around this long",
or something similar. Release notes showed new sysgen parameters
for write-behind caching and write delay. Something to look forward
to. So how did this happen? A fork in the road called Spiralog, from
what I understand.

So maybe in a year or less, we put the limited caching behind us
for those that are at 7.3 and higher and maybe move on to complaining
that VMS is so primitive because a lot still has to be done
at a command line. How primitive. MS-DOS is dead... I want things
to look and feel just like Windows.

NOT! Cut my pinkies off first.

So what is the next complaint we can moan about so I can get
practiced up?

Rob


Keith Brown

unread,
May 30, 2000, 3:00:00 AM5/30/00
to
> Regards,
>
> David Mathog
> mat...@seqaxp.bio.caltech.edu
> Manager, sequence analysis facility, biology division, Caltech
> **************************************************************************
> * RIP VMS *
> **************************************************************************

I was going to respond sooner but Linux crashed and I was busy
doing an fsck.
Seriously, this is the trade-off. You can argue that Unix uses
the better default config, and it may be appropriate for your
environment, but then again it may not be for others. At my site we
tend to run bigger (than workstations) OpenVMS machines that
have HSxxx controllers to do the write-back caching (we use
them on the 3 DU boxes too, BTW), so write performance does
become a non-issue with the HSxxx controllers. I know, I know,
Bill Todd already railed at me for suggesting a hardware solution,
but we felt that the HSxxx's were a better solution for serving
external disks than a SWXR or some such thing. As you
pointed out earlier David, you can get a significant I/O boost
by tweaking RMS, and even you pointed out that it was a 1-line
tweak. I think even NT people could manage that. I don't mean to
be blunt, but my point is that nothing is perfect all the time.


--
Keith Brown
kbro...@usfamily.net

David Mathog

unread,
May 31, 2000, 3:00:00 AM5/31/00
to
In article <aTT5NM...@eisner.decus.org>, you...@eisner.decus.org (Rob Young) writes:
>In article <8h0n8i$d...@gap.cco.caltech.edu>, mat...@seqaxp.bio.caltech.edu (David Mathog) writes:
>> In article <8gs3ga$jbg$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:
>>>
>>>And of course that last dovetails neatly with your own observation that
>>>people seldom bother with much performance optimization: for such typical
>>>non-performance-optimized applications, Unix therefore makes more efficient
>>>use of the hardware than VMS does, hence is correctly perceived as more
>>>cost-effective.
>>
>> Exactly. And it all boils down to (essentially) a SINGLE difference
>> between the OS's. On a typical lightly loaded, memory rich, workstation,
>> Linux (and probably most other Unices, but I can't say for sure)
>> automatically utilize the unused portions of memory to cache file
>> operations.

<SNIP>

>
> So things aren't moving fast enough for you?
>
> Caching gets much improved in the next go round of VMS (unless
> I've mixed up roadmaps or am misremembering). How did this happen?
> Senior engineer at a DECUS explained that: "remember, VIOC was just
> a stop-gap measure... it wasn't supposed to be around this long"
> or something similar. Relase notes showed new sysgen parameters
> for write behind caching and write delay. Something to look forward
> to. So how did this happen? Fork in the road called Spiralog from
> what I understand.
>

I'm not counting any of those chickens before they hatch. Remember the
buildup for Spiralog, and how many of us ended up using that? Meanwhile,
back at the farm, for the sorts of applications which I (and everybody else
in my field) run, Linux and Tru64 "out of the box" outperform VMS "out of
the box" on identical hardware by a wide margin on systems which are
"lightly loaded". This is true for virtually every application which "runs
faster on Unix than on OpenVMS". Moreover, it's likely that much of the
weakness we see in FTP server and SMB server performance on OpenVMS is due
to this single difference (with the rest being due to TCP/IP stack
incompatibilities with the client systems).

The point I'm trying to make is that virtually all of the difference in
performance one sees between Unix and OpenVMS is due to the presence of
file caching on the former and its absence on the latter. This is a
problem which is easily identified and SHOULD BE RECTIFIED. (Really it
should have been addressed many years ago but that's another harangue.)
Right now, as others have said, the only Compaq supplied product which
could improve the situation is an HSZ or some other dedicated storage
controller - but who can afford those for a DS10?

Nor am I saying that OpenVMS file caching need be exactly like that on Unix
- it just needs to give most of the benefits, and do so without the need
for case by case RMS twiddling. Clearly it must preserve the capability
for doing all of the things RMS does now, which really are appropriate and
useful on heavily loaded systems where there is no extra RAM around for
file caching.

> So maybe in a year or less, we put the limited caching behind us
> for those that are at 7.3 and higher and maybe move on to complaining
> that VMS is so primitive because a lot still has to be done
> at a command line. How primitive. MS-DOS is dead... I want things
> to look and feel just like Windows.
>

Oh come on. It's perfectly fair to point out that on small systems under
typical loads for such systems the file caching mechanisms used by Unix
(and WNT) do result in real increases in system performance, and it's
equally fair to point out that many pieces of software ported from Unix
implicitly assume this behavior and so run less efficiently on OpenVMS than
they do on Unix. This thread has nothing to do with GUIs vs. command line,
it's about the real 2X to 3X performance boost that you get with file
caching.

Dan Sugalski

unread,
May 31, 2000, 3:00:00 AM5/31/00
to
At 03:33 PM 5/31/00 +0000, David Mathog wrote:
>This thread has nothing to do with GUIs vs. command line,
>it's about the real 2X to 3X performance boost that you get with file
>caching.

Just out of curiosity, do you see any sorts of speedup from doing this:

$ SET RMS/BUFFER=255/BLOCK=127

before running the programs that perform less than wonderfully?
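For readers comparing from the Unix side, the rough analogue of this suggestion is handing the run-time library a bigger user-space buffer so that many small record writes coalesce into few system calls (an illustrative Python sketch; the 127-block figure merely mirrors the /BLOCK=127 qualifier above, not any real RMS mapping):

```python
import os

# Loosely mirror /BLOCK=127: a user-space buffer of 127 512-byte disk
# blocks, so a thousand small record writes become a handful of syscalls.
BUF_SIZE = 127 * 512

with open("records.tmp", "wb", buffering=BUF_SIZE) as f:
    for i in range(1000):
        f.write(b"%06d some small record\n" % i)  # 25 bytes per record

size = os.path.getsize("records.tmp")
os.remove("records.tmp")
print("wrote", size, "bytes through a", BUF_SIZE, "byte buffer")
```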

Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski even samurai
d...@sidhe.org have teddy bears and even
teddy bears get drunk

Alan Greig

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to
In article <8h0n8i$d...@gap.cco.caltech.edu>,

mat...@seqaxp.bio.caltech.edu wrote:
> WHY NOT make good use of the unused
> memory in a system? Lord knows you pay enough for it on a DS10!

VMS 7.2-1?

$ HELP SET FILE/CACHING

Of course you need the dead end Spiralog for this to work
but with it you can set writebehind caching on a file or
directory basis and supposedly the best bits of Spiralog
will turn up again sometime.

--
Alan Greig


Sent via Deja.com http://www.deja.com/
Before you buy.

Dave Weatherall

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to
On Sun, 31 May 2000 15:33:26, mat...@seqaxp.bio.caltech.edu (David
Mathog) wrote:

..


> Oh come on. It's perfectly fair to point out that on small systems under
> typical loads for such systems the file caching mechanisms used by Unix
> (and WNT) do result in real increases in system performance, and it's
> equally fair to point out that many pieces of software ported from Unix
> implicitly assume this behavior and so run less efficiently on OpenVMS than

> they do on Unix. This thread has nothing to do with GUIs vs. command line,


> it's about the real 2X to 3X performance boost that you get with file
> caching.

David, you keep quoting this 2X/3X READ performance boost, and I
seem to remember you posted some figures to illustrate the point.
However, I also seem to remember a post by Arne (I think) showing
read performance across VMS, Unix and NT to be approximately equal.

Now what I think you're pointing out is that for an application which
creates lots of small files and then reads them back again, the
Read/Write back caching mechanisms of NT/Unix provide an advantage.
That I can understand and appreciate. However, is that the typical VMS
scenario? In our environment, my users use the
cross-assembler/link/build suite to build an OFP (downlable to the
target computer). This process consists of 2 passes across up to 230
source files, writing the object files (and list files for each when
required), insertion of the files into the object libraries and then 2
passes across the objects to create an image/map file.

All the tools use tuned RMS to access the source/object/image files,
and 10 years ago the process took 50/55 minutes on an 8820 (VAX);
now it's 6 on a 2100 (AXP). However, even on a 4000/108 (VAX) it's
still only about 8/9 mins (IIRC). Certainly as fast as our 8-year-old
AXP 4000 (600 I think).

The biggest performance boosts I gained by program changes were:

1. Using hashing to do the symbol table management instead of the
binary search/insert that I inherited.
2. Using the VMS library routines to enable my linker to need to only
open one or two files when linking.
3. Using virtual memory to build my loadable image instead of a file.
4. Specifying my initial file size to the value I know it's going to be
when I create the down-loadable file.

The second speeded up linking quite a bit. Mainly because it avoided
the File open overhead of RMS. I've always understood this to be a
by-product of VMS security. I've no complaint.

no. 3 - well it just made more sense :-)

no 4. - again simple common sense 'cos it avoids the penalties of
$EXTEND.
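The symbol-table change listed first can be sketched roughly as follows (illustrative Python, not the assembler's actual code): a hashed table gives constant-time average insert and lookup, where the inherited sorted array needed a binary search plus an O(n) shuffle on every insert.

```python
# Symbol table as a hashed mapping: O(1) average insert and lookup,
# versus binary search plus an O(n) insertion shuffle to keep a
# sorted array in order.
symbols = {}

def define(name, value):
    symbols[name] = value

def lookup(name):
    return symbols.get(name)  # None if the symbol is undefined

# Define 10,000 symbols, as a large assembly might.
for i in range(10000):
    define("LBL_%d" % i, i * 4)

addr = lookup("LBL_1234")
print("LBL_1234 ->", addr)
```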

Now the point I'm leading to is that the largest amount of READ i/o I
do is the source files and I'm not convinced that UNIX per se would be
particularly quicker than VMS here and still provide me with the same
level of C2 security. Similarly, as the same source files can be in
shared use by a concurrent build on another node maybe I'm better off
with the caching being done by the controller anyway.

Horses for courses.

RMS has its disadvantages over the UNIX/DOS cooked/raw options. It is
ultimately slower measured in pure record access, but RMS means I don't
have to manage the records anymore. It gives me Indexed, sequential,
direct access, variable/fixed length records etc. That gives me
performance gains in my applications and makes them easier to
maintain.

Cheers - Dave.


Bill Todd

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to

Dave Weatherall <djw...@attglobal.net> wrote in message
news:DTiotGxQ0bj6-pn2-hpj8rWJ7JpRI@localhost...

...

> Now the point I'm leading to is that the largest amount of READ i/o I
> do is the source files and I'm not convinced that UNIX per se would be
> particularly quicker than VMS here

Might not be. But if the source files are large, Unix's ability to
pre-fetch automatically could help.

> and still provide me with the same
> level of C2 security.

C2 security is C2 security; it doesn't come in levels. Some Unixes provide
it (I don't happen to know which). Seems unlikely caching has much to do
with it. Larry might know.

> Similarly, as the same source files can be in
> shared use by a concurrent build on another node maybe I'm better off
> with the caching being done by the controller anyway.

But you wouldn't be if VMS had ever done distributed caching right (last I
knew, they were finally heading in the direction of node-to-node
data-sharing as part of the caching enhancements Rob mentioned elsewhere).

>
> Horses for courses.
>
> RMS has its disadvantages over the UNIX/DOS cooked/raw options. It is
> ultimately slower measured in pure record access

Needn't be. Counted records certainly have some performance advantages over
delimited records, and there's no intrinsic reason why any RMS processing at
all comparable to Unix/DOS non-record processing should take too many more
instructions.
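The counted-versus-delimited point can be made concrete with a small sketch (illustrative Python; the 2-byte length prefix is an assumption for the demo, not RMS's actual on-disk format). A counted reader jumps straight past each record, while a delimited reader must scan every byte for the terminator:

```python
import struct

def write_counted(records):
    # Each record: a 2-byte little-endian length prefix, then the payload.
    out = bytearray()
    for r in records:
        out += struct.pack("<H", len(r)) + r
    return bytes(out)

def read_counted(buf):
    # Seek straight past each record; no byte-by-byte delimiter scan.
    recs, pos = [], 0
    while pos < len(buf):
        (n,) = struct.unpack_from("<H", buf, pos)
        recs.append(buf[pos + 2 : pos + 2 + n])
        pos += 2 + n
    return recs

def read_delimited(buf):
    # The Unix stream-of-bytes convention: scan for newline terminators.
    return buf.split(b"\n")[:-1]

data = [b"alpha", b"beta", b"gamma ray"]
roundtrip = read_counted(write_counted(data))
print(roundtrip == data and read_delimited(b"alpha\nbeta\ngamma ray\n") == data)
```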

Most of the perceived slowness of RMS is likely related to the caching
issues discussed elsewhere, though a certain amount of pure code bloat has
likely accumulated as well over the past 22 years. The other major problem
with RMS is that it takes so many hundreds of pages to describe how to use
all its options, but that's a programmer-performance rather than a
processing-speed issue.

> but RMS means I don't
> have to manage the records anymore. It gives me Indexed, sequential,
> direct access, variable/fixed length records etc.

There are similar relatively standard packages available on Unix, but they
aren't integrated with the OS (nor need they be, since for the most part
they're process-level code - just like RMS on the 11 - though providing them
with the system would certainly help ensure inter-operability across
applications).

- bill

Larry Kilgallen

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to
In article <8h6505$k76$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:
>
> Dave Weatherall <djw...@attglobal.net> wrote in message
> news:DTiotGxQ0bj6-pn2-hpj8rWJ7JpRI@localhost...
>
> ...
>
>> Now the point I'm leading to is that the largest amount of READ i/o I
>> do is the source files and I'm not convinced that UNIX per se would be
>> particularly quicker than VMS here
>
> Might not be. But if the source files are large, Unix's ability to
> pre-fetch automatically could help.
>
> and still provide me with the same
>> level of C2 security.
>
> C2 security is C2 security, it doesn't come in levels. Some Unixes provide
> it (don't happen to know which). Seems unlikely caching has much to do with
> it. Larry might know.

When an operating system is evaluated at the C2 level (or any other
level), the evaluation may include some particular required parameter
settings (the VMS system parameter SECURITY_POLICY is an example).

If any Unix operating system got a C2 evaluation that required caching
to be turned off, it would say so in the evaluation report. Beyond
that, NSA considers all C2-evaluated systems to be at the same level.
There is no more-C2-than-thou concept.

From my perspective, the main hazards of caching have to do with the
opportunity to scramble your data on disk by crashing at the wrong
moment. That seems more a denial-of-service than a C2 issue.

Rob Young

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to

It's on this PPT Roadmap on slide 18:

"Extended File Cache V 1.0

- Read ahead caching on sequential files
- Greater than 100 files in cache
- Larger cache size"

With a 2001 timeframe. That was the Feb Roadmap. If you look
at the one found there today, they also have added:

" Extended File Cache V 2.0

- Write sharing in a cluster
- Write behind caching
- User Controls
- SMP Performance boost
- Galactic Common Memory usage"

All part of the base OS. Spiralog was a bolt-on. Yes it went
away and a few may have been saddened. But your point about
the caching isn't a good one... it isn't "just a build up" and
even a jaded reading of the roadmaps would indicate it must
be something they are working on with XFC V 1.0 slated for
VMS version 7.3.

>
> The point I'm trying to make is that virtually all of the difference in
> performance one sees between Unix and OpenVMS is due to the presence of
> file caching on the former and its absence on the latter. This is a
> problem which is easily identified and SHOULD BE RECTIFIED.

Well no kidding! Let's back up a tiny bit:

>>
>> So things aren't moving fast enough for you?
>>

Maybe not for others either. But we could wish in one hand
and spit in the other hand and we would have the same results
concerning this. No change.

> (Really it
> should have been addressed many years ago but that's another harangue.)
> Right now, as others have said, the only Compaq supplied product which
> could improve the situation is an HSZ or some other dedicated storage
> controller - but who can afford those for a DS10?
>

So? Run Linux on it then. Better caching isn't going to
show up faster just because we wish it would.

> Nor am I saying that OpenVMS file caching need be exactly like that on Unix
> - it just needs to give most of the benefits, and do so without the need
> for case by case RMS twiddling. Clearly it must preserve the capability
> for doing all of the things RMS does now, which really are appropriate and
> useful on heavily loaded systems where there is no extra RAM around for
> file caching.
>

Exactly like? How about better when it gets here. Think
about the last line for a bit there:

- Galactic Common Memory usage

Tell me what that means to you.

>> So maybe in a year or less, we put the limited caching behind us
>> for those that are at 7.3 and higher and maybe move on to complaining
>> that VMS is so primitive because a lot still has to be done
>> at a command line. How primitive. MS-DOS is dead... I want things
>> to look and feel just like Windows.
>>
>

> Oh come on. It's perfectly fair to point out that on small systems under
> typical loads for such systems the file caching mechanisms used by Unix
> (and WNT) do result in real increases in system performance, and it's
> equally fair to point out that many pieces of software ported from Unix
> implicitly assume this behavior and so run less efficiently on OpenVMS than
> they do on Unix. This thread has nothing to do with GUIs vs. command line,
> it's about the real 2X to 3X performance boost that you get with file
> caching.
>

No kidding. My point was plainly stated in the very first line:

>>
>> So things aren't moving fast enough for you?
>>

It isn't as if they are sitting around up there over a cup
of coffee in an a.m. meeting *last week* and a hand went up in
the back of the room:

"Hey, I gotta an idea. Howsa 'bout we fix up our caching.
seems we should be able to do better than a hundred files
at a time and maybe do like some of the Unix boxes I have
heard about that has better caching."

It's a work in progress. Get it?????

Is there anything else we can moan about? I really need to
get practiced up...

Rob


Rob Young

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to
In article <xxAEDA...@eisner.decus.org>, you...@eisner.decus.org (Rob Young) writes:

>
> It's on this PPT Roadmap on slide 18:
>

Which roadmap? This roadmap:

http://WWW.OPENVMS.DIGITAL.COM/openvms/roadmap/openvms_roadmaps.htm


Rob Young

unread,
Jun 1, 2000, 3:00:00 AM6/1/00
to
In article <8h6h2o$c4h$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:
>
> Rob Young <you...@eisner.decus.org> wrote in message
> news:xxAEDA...@eisner.decus.org...

>> In article <8h3bc6$2...@gap.cco.caltech.edu>, mat...@seqaxp.bio.caltech.edu
> (David Mathog) writes:
>
> ...

>
>> > I'm not counting any of those chickens before they hatch. Remember the
>> > buildup for Spiralog, and how many of us ended up using that?
>>
>> It's on this PPT Roadmap on slide 18:
>>
>> "Extended File Cache V 1.0
>>
>> - Read ahead caching on sequential files
>> - Greater than 100 files in cache
>> - Larger cache size"
>>
>> With a 2001 timeframe. That was the Feb Roadmap. If you look
>> at the one found there today, they also have added:
>>
>> " Extended File Cache V 2.0
>>
>> - Write sharing in a cluster
>> - Write behind caching
>> - User Controls
>> - SMP Performance boost
>> - Galactic Common Memory usage"
>>
>> All part of the base OS. Spiralog was a bolt-on. Yes it went
>> away and a few may have been saddened. But your point about
>> the caching isn't a good one... it isn't "just a build up"
>
> I'd be curious to know why you believe that in some way Spiralog was not as
> seriously-integrated an effort as the EFC work is: I would have
> characterized Spiralog as both considerably more ambitious and requiring
> considerably more extensive integration.
>

Oh, it was a serious effort, and perhaps we are parsing terms
incorrectly again. XFC I imagine will be turned on by default
with some minimum settings (maybe, maybe not). Easy to understand
and whatnot. After all, AIX (the Unix I am most familiar with) caches
files in available memory... no setup necessary. I am sure
something similar will be the default with XFC.

Spiralog required a deliberate effort to set up, wasn't the default,
and once something was a Spiralog volume it required work to undo.

> So while we can certainly hope that the EFC work turns out to be more
> worthwhile, I don't see David's comment as inappropriate.
>

What do you mean, specifically? i.e. cite an example of what
he said that you don't disagree with and I will let you know
if I don't disagree with it either. After all, much of
my comments to his comment began with: "No Kidding" followed
by a counter-point.

> ...


>
>> > Nor am I saying that OpenVMS file caching need be exactly like that on
> Unix
>> > - it just needs to give most of the benefits, and do so without the need
>> > for case by case RMS twiddling. Clearly it must preserve the capability
>> > for doing all of the things RMS does now, which really are appropriate
> and
>> > useful on heavily loaded systems where there is no extra RAM around for
>> > file caching.
>> >
>>
>> Exactly like? How about better when it gets here. Think
>> about the last line for a bit there:
>>
>> - Galactic Common Memory usage
>>
>> Tell me what that means to you.
>

> Not much, save in those atypical (though not truly rare) cases in which
> multiple nodes are sharing the same data, in which it primarily makes more
> efficient use of total box memory than a (good) distributed cache where that
> shared data would instead be replicated on a per-partition basis. Of
> course, you pay the price of 3x slower access (and additional
> inter-partition synchronization overhead) on *all* references to such
> centrally-cached data, whether the data in which you're interested is shared
> or not.
>
> While the listed EFC work should certainly be a general improvement over the
> current performance state w.r.t. this particular limitation, the real
> question for most users will be whether it does as good a job in a
> single-system environment as a Unix-style cache does - and for that, as
> David says, we'll just have to wait and see, or at the very least wait until
> design details are released.
>

Single system? That's easy. But what about a 16-processor
system that would, in the normal Unix world, be a single system but
does better as 2 VMS systems (separate VMS instances sharing
resources)?

>>
>> >> So maybe in a year or less, we put the limited caching behind us
>> >> for those that are at 7.3 and higher
>

> Not unless the EFC V2 release follows so quickly on the heels of EFC V1 that
> it makes it into 7.3: write-back caching isn't listed for V1, and that's a
> *big* part of the difference (especially in current VMS environments that
> have configured below-file-system-level read cache).
>

It's there on a roadmap. I didn't applaud timelines.

> ...


>
>> Is there anything else we can moan about? I really need to
>> get practiced up...
>

> My suspicion is that at least a portion of David's annoyance stems from the
> knee-jerk reactions to the effect that "VMS don't need no stinkin' Unix
> features! It's better by definition, and that's all there is to it!" when
> he presumed to suggest that Unix had performance advantages in certain areas
> that VMS might do well to consider. Poring over Compaq OpenVMS Web site
> material is not a pre-requisite for participation in comp.os.vms, the lack
> of any mention of the road map information until now (when he started
> talking about this issue months ago) is sufficient indication that it was
> not exactly foremost in the minds of other people either, and even if he had
> been aware of it not only are the details insufficient to indicate whether
> it will be comparable to the Unix facilities but he should not be blamed for
> wondering, on the basis of past future plans for VMS enhancements, whether
> it would appear on time (one who should know has suggested to me that it may
> originally have been slated for 7.2) and in full regalia.
>

Well...

Specifically:

"VMS don't need no stinkin' Unix
features! It's better by definition, and that's all there is to it!"

That would be a mischaracterization of my criticisms of his
comments. Sticking back in what you trimmed, which of course
shows I am not of that ilk, here is the relevant section:

---

> Oh come on. It's perfectly fair to point out that on small systems under
> typical loads for such systems the file caching mechanisms used by Unix
> (and WNT) do result in real increases in system performance, and it's
> equally fair to point out that many pieces of software ported from Unix
> implicitly assume this behavior and so run less efficiently on OpenVMS than
> they do on Unix. This thread has nothing to do with GUIs vs. command line,
> it's about the real 2X to 3X performance boost that you get with file
> caching.
>

No kidding. My point was plainly stated in the very first line:

>>
>> So things aren't moving fast enough for you?
>>

It isn't as if they are sitting around up there over a cup
of coffee in an a.m. meeting *last week* and a hand went up in
the back of the room:

"Hey, I gotta an idea. Howsa 'bout we fix up our caching.
seems we should be able to do better than a hundred files
at a time and maybe do like some of the Unix boxes I have
heard about that has better caching."

It's a work in progress. Get it?????

--

Adding interpretation to that to make sure I'm not
misunderstood, I mean it as:

"VMS caching is lacking.. I am sure that VMS engineering
is well aware of how others do caching. Roadmaps
show that caching development is well underway."

>>
>> So things aren't moving fast enough for you?
>>

Maybe they aren't moving fast enough for others either.
Me? I've got all the write-back caching I need in controllers.
But others aren't as fortunate.

Rob


Glenn C. Everhart

Jun 1, 2000
The new VMS caching system has been in the works for quite a
while now and is much more heavily into the VMS kernel than
Spiralog ever was. There are a number of I/O system projects
however which it must not break, and there is some concern not
to break 3rd party apps. When you think of the virtual disks,
remote caching systems, 3rd party cachers, multipath failover,
shadow drivers, software RAID, and a bunch more whose code must
not be broken, and various bits that are still in the works which
need also not to break, you may begin to see some of the complexity
involved. Spiralog was after all able to use a process space
for cache. It is better, but harder to get right, in the kernel.

I'm glad to hear the wait is NEARLY over.

Bill Todd

Jun 1, 2000

Rob Young <you...@eisner.decus.org> wrote in message
news:xxAEDA...@eisner.decus.org...
> In article <8h3bc6$2...@gap.cco.caltech.edu>, mat...@seqaxp.bio.caltech.edu (David Mathog) writes:

...

> > I'm not counting any of those chickens before they hatch. Remember the
> > buildup for Spiralog, and how many of us ended up using that?
>
> It's on this PPT Roadmap on slide 18:
>
> "Extended File Cache V 1.0
>
> - Read ahead caching on sequential files
> - Greater than 100 files in cache
> - Larger cache size"
>
> With a 2001 timeframe. That was the Feb Roadmap. If you look
> at the one found there today, they also have added:
>
> " Extended File Cache V 2.0
>
> - Write sharing in a cluster
> - Write behind caching
> - User Controls
> - SMP Performance boost
> - Galactic Common Memory usage"
>
> All part of the base OS. Spiralog was a bolt-on. Yes it went
> away and a few may have been saddened. But your point about
> the caching isn't a good one... it isn't "just a build up"

I'd be curious to know why you believe that in some way Spiralog was not as
seriously-integrated an effort as the EFC work is: I would have
characterized Spiralog as both considerably more ambitious and requiring
considerably more extensive integration.

So while we can certainly hope that the EFC work turns out to be more
worthwhile, I don't see David's comment as inappropriate.

...

> > Nor am I saying that OpenVMS file caching need be exactly like that on Unix
> > - it just needs to give most of the benefits, and do so without the need
> > for case by case RMS twiddling. Clearly it must preserve the capability
> > for doing all of the things RMS does now, which really are appropriate and
> > useful on heavily loaded systems where there is no extra RAM around for
> > file caching.
> >
>
> Exactly like? How about better when it gets here. Think
> about the last line for a bit there:
>
> - Galactic Common Memory usage
>
> Tell me what that means to you.

Not much, save in those atypical (though not truly rare) cases in which
multiple nodes are sharing the same data, in which it primarily makes more
efficient use of total box memory than a (good) distributed cache where that
shared data would instead be replicated on a per-partition basis. Of
course, you pay the price of 3x slower access (and additional
inter-partition synchronization overhead) on *all* references to such
centrally-cached data, whether the data in which you're interested is shared
or not.

While the listed EFC work should certainly be a general improvement over the
current performance state w.r.t. this particular limitation, the real
question for most users will be whether it does as good a job in a
single-system environment as a Unix-style cache does - and for that, as
David says, we'll just have to wait and see, or at the very least wait until
design details are released.

>


> >> So maybe in a year or less, we put the limited caching behind us
> >> for those that are at 7.3 and higher

Not unless the EFC V2 release follows so quickly on the heels of EFC V1 that
it makes it into 7.3: write-back caching isn't listed for V1, and that's a
*big* part of the difference (especially in current VMS environments that
have configured below-file-system-level read cache).

...

> Is there anything else we can moan about? I really need to
> get practiced up...

My suspicion is that at least a portion of David's annoyance stems from the
knee-jerk reactions to the effect that "VMS don't need no stinkin' Unix
features! It's better by definition, and that's all there is to it!" when
he presumed to suggest that Unix had performance advantages in certain areas
that VMS might do well to consider. Poring over Compaq OpenVMS Web site
material is not a pre-requisite for participation in comp.os.vms, the lack
of any mention of the road map information until now (when he started
talking about this issue months ago) is sufficient indication that it was
not exactly foremost in the minds of other people either, and even if he had
been aware of it not only are the details insufficient to indicate whether
it will be comparable to the Unix facilities but he should not be blamed for
wondering, on the basis of past future plans for VMS enhancements, whether
it would appear on time (one who should know has suggested to me that it may
originally have been slated for 7.2) and in full regalia.

- bill

>
> Rob
>

Larry Kilgallen

Jun 2, 2000
In article <8h7l78$jpg$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:
> Talking to a brick wall sometimes seems more effective: at least one's
> expectations can be set reasonably up front.
>
> David's comment "I'm not counting any of those chickens before they hatch"
> seems, at least to me, to have a very obvious, and reasonable,
> interpretation: Spiralog was planned, was hyped, actually shipped, and -
> flopped. The same thing could happen with EFC, and its appearance on the
> road map, in which you place so much faith, means approximately as much as
> Spiralog's did (at least I assume Spiralog appeared on a road map at some
> point).

Appearance on a Roadmap is not sufficient. The reader must examine
the characteristics described and decide whether they provide any
benefit for the reader's own circumstances.

People I know who saw Spiralog on the Roadmap universally had the
reaction "it is interesting that they have a file system alternative
for those with write-mostly applications, but my applications are not
write-mostly".

When I read the pain and anguish about future VMS disk caching not
having the fullest support for write at the start, I feel it meets
my needs quite well. My application is running compilers. They
take in many source files and produce fewer object files. Changes
happen to approximately one source file before compilation, so if
all the other source files were still cached in memory I would be
a happy camper. Or maybe it flushes the cache when I close the
file, in which case I will not be helped. Can anyone answer that?
It would be much more interesting to me than discussion of what
Linux does.

Bill Todd

Jun 2, 2000

Rob Young <you...@eisner.decus.org> wrote in message
news:Uz8oKE...@eisner.decus.org...
> In article <8h7l78$jpg$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com> writes:

> > Talking to a brick wall sometimes seems more effective: at least one's
> > expectations can be set reasonably up front.
> >
> > David's comment "I'm not counting any of those chickens before they hatch"
> > seems, at least to me, to have a very obvious, and reasonable,
> > interpretation: Spiralog was planned, was hyped, actually shipped, and -
> > flopped. The same thing could happen with EFC, and its appearance on the
> > road map, in which you place so much faith, means approximately as much as
> > Spiralog's did (at least I assume Spiralog appeared on a road map at some
> > point).
> >
>
> Ah, now we are in the realm of "it happened before, it
> can happen again." Sure. But then the question becomes:
>
> "Do you think it will?"
>
> Is it worth a wager to you?

I've already said that I don't *expect* EFC to be the kind of failure
Spiralog was (though it could happen). And I can't remember the last time I
wagered on anything, though I'm reasonably sure it was well before VMS
appeared on the scene: I don't have anything against wagering, it's just
not of interest to me (if it were, I suspect I'd spend my time working the
market instead of this kind of thing).

However, my expectation is that, while it should improve current default
performance considerably, default file system performance with EFC V2 will
likely still fall somewhat short of Unix default file system performance (at
least for some of the better implementations, like SGI's): some of the
optimizations involve things like cooperating with the file system to defer
specific space allocation until time of actual disk write (there's no
indication I know of that EFC is getting into stuff like that), while others
aren't cache-related at all (e.g., the use of a log to allow small
synchronous writes to be made persistent immediately in an efficient
manner).

- bill

>
> Rob
>

IanPercival

Jun 2, 2000
I don't read the news group very often as I'm usually busy doing other
things - but there does seem to be an awful lot of discussion about VMS and
file data caching. I don't particularly want to get involved in any of the
various discussions - but there are an awful lot of misconceptions being
bandied about. Feel free to email me at ian.pe...@compaq.com if you have
serious questions or discussion.

VMS has two sets of data caching.
1. RMS. It can use Local or Global Buffers. Some manual setting and control
is required in order to use these if the application isn't using them
automatically.

2. System wide data caches.
First of all I agree that the system wide central data cache that VMS
has/had known variously as VCC or VIOC which was first implemented on VAX
V6.0 a number of years ago, is pretty poor by many performance metrics. It
actually got worse, as functionality such as dynamic memory sizing got
dropped during the port to Alpha. This is the reason there are a myriad
third party implementations of caches (OK maybe not quite that many - but
quite a few!) all of which worked better than VCC.

A new system wide file data cache called XFC (stands for extended file
cache) has been developed. It is a true 64-bit cache - it can store
terabytes of data if you have the memory.
It can store over 100 CLOSED files (the VCC limitation was that it would
only store up to 100 closed files - not files in total!). XFC has no
limitation on the number of closed files it will store - apart from the
obvious one of memory usage.
It can cache I/Os larger than 34 blocks (VCC was hard-limited to 34).
It can perform readahead where appropriate.
It provides performance statistics such as absolute I/O response times in
microseconds.
It is dynamic or static or both in terms of memory usage.
My Alpha boots much quicker when using it! Many benchmarks are significantly
better when using it.

XFC V1.0 is about to enter the field test cycle. By the time it is
released, it should have some other major performance features added -
making it perform even better for users of medium to large machines.

Hope this helps a bit!

Ian Percival
XFC Project Leader
OpenVMS Engineering
(writing from home on my son's birthday!)

Dave Gudewicz

Jun 2, 2000

Thanks for this update Ian and more importantly in the long run, Happy
Birthday to your son!!

Dave...

IanPercival <IanPe...@email.msn.com> wrote in message
news:#WeqE3Lz$GA.347@cpmsnbbsa08...

David Mathog

Jun 2, 2000
In article <PjT1Qf...@eisner.decus.org>, you...@eisner.decus.org (Rob Young) writes:
>
>"I'm not counting any of those chickens before they hatch."
>
> In fairness to David, maybe he didn't know about filesystem
> caching futures BUT I do know that has been mentioned more than
> once in this forum. So in examination of the evidence his
> response more than reveals his bent. He's a "glass is half full"
> kinda guy

Actually, I'm an "I can't load it now, and will believe it only when I can"
kind of guy. Also a "what in the hell could have been a higher priority
all these years than improving disk IO throughput 10X?" kind of guy.

Besides, regarding disk IO, my performance tests for the programs we run
here under our typical load indicate that the glass is way below half full,
in some tests, it's barely even moist. However, for CPU bound jobs it's
100% full. For security it's about 400% full.

>
> But it is a lose lose proposition for those that want to use it
> today. No wonder some of those folks are "glass half full" kinda
> folks.

It's also going to be a big loser if DII-COE goes in before an improved
caching file system does. The benchmarks/test programs are all written
to run, essentially, on a Solaris system. There is little tuning that can
be done to improve them, and direct access to RMS would seem to be out of
the question (in terms of compliance with the standard). Since they will
only use write() and fprintf() many of the IO intensive ones will run like
dogs on OpenVMS as it is now.

Jojimbo

Jun 2, 2000
at the risk of getting back to the original subject...

Having just spent the best part of a week understanding, so I
can modify it, a completely async AST driven IO program, I think
I understand a bit more why the kids like U***X.

Because it's simple!

No need to do async IO, the operating system does it. Just
read and write, so race conditions, buffers to manage until the
IO completes, etc... are not an issue. At least to the
programmer! Now the users and the customers and the operators
might be in big trouble, but the programmer is done and gone!

Sigh,

Jim

p.s.
By the way. "I have a Java Class that will do all that, although
you may have to change the interface a bit". (heard over the
cubicle wall this afternoon) Double sigh.

Bill Todd

Jun 2, 2000

Jojimbo <jgesslin...@yahoo.com.invalid> wrote in message
news:020eceba...@usw-ex0102-084.remarq.com...

> at the risk of getting back to the original subject...
>
> Having just spent the best part of a week understanding, so I
> can modify it, a completely async AST driven IO program, I think
> I understand a bit more why the kids like U***X.
>
> Because it's simple!
>
> No need to do async IO, the operating system does it.

That's only true in some cases, automatic multi-buffering being perhaps the
most significant.

Some more cases are handled, though not transparently, by using multiple
threads (each using synchronous I/O) in a single process where, in the days
before system-supported threads existed, you would have had to use the
single-threaded/asynchronous-I/O whirling dervish approach. This works
pretty well when the required per-thread overhead is reasonable and the
operations performed by the individual threads are essentially sequential in
nature (but interact in ways that can be synchronized - and shared - more
efficiently within a single process than across multiple processes, which
was the old traditional Unix substitute for asynchrony and/or
multi-threading).

But when it comes to rats-nests like databases or distributed file systems,
where disk accesses and distributed network operations pop up asynchronously
and multiple times within the course of a single operation, there's still no
comparably-efficient alternative to having just enough threads to keep all
the available physical processors busy and having each such thread work its
little tail off asynchronously. Of course, such (often kernel)
'applications' nicely hide such complexity from their clients (just like the
Unix file system itself does, though I'm not sure how common real
down-and-dirty asynchrony is even in Unix kernels: it's so - unUnixy), but
the best (or at least most powerful) implementations pass support for
asynchrony up to the application level so that the applications can do the
same kind of thing when they really need to.

So the real question is whether your 'asynch AST driven IO program' could
have benefited from using threads to avoid its asynchrony or avoided it by
virtue of Unix's automatic system support for multi-buffering. If the
former, doesn't VMS also support kernel-based threads? If the latter, RMS
provides such mechanisms, albeit you have to invoke them.

In either case, VMS gives you the means to avoid those nasty ASTs (at least
any you can see) and still get the same performance you can get with Unix -
it just doesn't do this *by default*, which, while important for the casual
user (who no way is going to act differently anyway, but will just complain
about performance), is not the same as *forcing* you into the depths of
complexity you've been exploring. And if the application you're fiddling
with *really needed* to be AST-driven on VMS, then it would have needed to
be coded in just about the same manner in a Unix environment - except that
you'd be hard-pressed to find a Unix environment with as comprehensive
support for asynchrony (when you really need it) as VMS has.

- bill

Keith Brown

Jun 2, 2000
Bill Todd wrote:
>
> Rob Young <you...@eisner.decus.org> wrote in message
> news:Uz8oKE...@eisner.decus.org...
> > In article <8h7l78$jpg$1...@pyrite.mv.net>, "Bill Todd" <bill...@foo.mv.com>

As you are fond of saying about NT, "It may not be perfect but
it is good enough"

--
Keith Brown
kbro...@usfamily.net

Keith Brown

Jun 2, 2000

How difficult is this in PL/I? What could be simpler?

OPEN FILE (MY_OUTFILE) TITLE ('MY_OUTFILE.dat') OUTPUT
ENVIRONMENT (BUFFERS(255), WRITEBEHIND);

WRITE FILE (MY_OUTFILE) FROM (BUFFER);

CLOSE FILE (MY_OUTFILE);

--
Keith Brown
kbro...@usfamily.net

Larry Kilgallen

Jun 3, 2000
In article <8h9dpt$k...@gap.cco.caltech.edu>, mat...@seqaxp.bio.caltech.edu (David Mathog) writes:

> It's also going to be a big loser if DII-COE goes in before an improved
> caching file system does. The benchmarks/test programs are all written
> to run, essentially, on a Solaris system. There is little tuning that can
> be done to improve them, and direct access to RMS would seem to be out of
> the question (in terms of compliance with the standard). Since they will
> only use write() and fprintf() many of the IO intensive ones will run like
> dogs on OpenVMS as it is now.

VMS has to meet the standard, but I don't think the standard has
a performance requirement.

That is not to say that performance is irrelevant, but that the
order of file caching vs. DII-COE support may not be critical.

We did have a post from someone working on file caching, but none
from someone working on DII-COE :-).

Bill Todd

Jun 3, 2000

Keith Brown <kbro...@usfamily.net> wrote in message
news:39388193...@usfamily.net...

...

> How difficult is this in PL/I? What could be simpler?

1) Using a language that the average developer (who, after all, is exactly
the developer who *won't* do much if anything in the way of optimization) is
likely to select.

2) Making the desired behavior transparent rather than requiring that it be
specified (though making it easy to obtain, albeit explicitly, is certainly
better than keeping it truly obscure).

As for your other comment elsewhere, if VMS performance is 'good enough' for
you even if it doesn't match Unix performance, rejoice and be happy. But
don't assume that's 'good enough' for everyone, or even a majority -
especially given purchasing situations where VMS is typically the system
that must justify being considered against competition that sets the
expectations by virtue of its industry acceptance.

- bill

Keith Brown

Jun 3, 2000

> Bill Todd wrote:

> 1) Using a language that the average developer (who, after all, is exactly
> the developer who *won't* do much if anything in the way of optimization) is
> likely to select.
>

I am always amazed at the number of people who choose C (the
language from hell) and then complain that they can't get any
work done. Yes, I know that C is popular, yet that fact does
not always make it the best choice. As many have said before
me, "we have a standardized language now, too bad it is C".

>
> 2) Making the desired behavior transparent rather than requiring that it be
> specified (though making it easy to obtain, albeit explicitly, is certainly
> better than keeping it truly obscure).
>

We certainly can't expect software developers to RTFM, can we? :)
Bill, I've been using VMS for over 16 years. I never saw VMS
while I was in school BTW, it all came after I started working.
In the last 2 years I have been spending significant time
learning Linux. RTFMing is what I have to do. I do not find it
intuitive as you would imply, but do see many similarities to
VMS. I like Linux but do find that it has some funky features
that require some learning and as many times as not they are
more difficult to deal with than on VMS. I also find it to be no
easier to learn than VMS, which was easy BTW. If we are to be
dependent on SW developers that can only write code for the OS
they saw in school we will never get anywhere will we? My point
is that SW developers need to RTFM for ANY system they code on.
If they don't we won't buy their SW will we?

>
> As for your other comment elsewhere, if VMS performance is 'good enough' for
> you even if it doesn't match Unix performance, rejoice and be happy. But
> don't assume that's 'good enough' for everyone, or even a majority -
> especially given purchasing situations where VMS is typically the system
> that must justify being considered against competition that sets the
> expectations by virtue of its industry acceptance.
>

What my comment elsewhere said was that our VMS performance is
AS GOOD as Unix due to the use of external controllers. Note
also that even though Unix does have a default performance edge
on I/O we still chose to use HSZxx controllers on the Unix
systems for reliability reasons as we did on VMS. There is no
free lunch. What Unix gains in I/O performance it loses in
reliability. Go ahead, ask about the AdvFS restore we did a few
months back after DU crashed before flushing the cache.


--
Keith Brown
kbro...@usfamily.net

Bill Todd

Jun 3, 2000

Keith Brown <kbro...@usfamily.net> wrote in message
news:39394525...@usfamily.net...

>
> > Bill Todd wrote:
>
> > 1) Using a language that the average developer (who, after all, is exactly
> > the developer who *won't* do much if anything in the way of optimization) is
> > likely to select.
> >
>
> I am always amazed at the number of people who choose C (the
> language from hell) and then complain that they can't get any
> work done. Yes, I known that C is popular, yet that fact does
> not does not always make it the best choice. As many have said
> before me, "we have a standardized language now, too bad it is
> C".

Whether you are amazed is completely irrelevant. Wake up and smell the
coffee: people use C/C++ in preference to other languages whether or not
you approve, and users evaluate system performance based in large part on
the applications such people create ('cause those are the applications they
run).

>
> >
> > 2) Making the desired behavior transparent rather than requiring that it be
> > specified (though making it easy to obtain, albeit explicitly, is certainly
> > better than keeping it truly obscure).
> >
>
> We certainly can't expect software developers to RTFM can we :)

Once again, your own inclinations in this area are irrelevant if they
don't reflect what most developers do in the real world. Pat yourself on
the back all you want, but don't presume that this makes any difference to
the way the rest of the world works (hint: a great many application
developers are in it for the money, and time spent learning a relatively
obscure system so that their ported application will perform better there
may well not be recouped by increased sales due to that improved performance
in that relatively unimportant - for them - environment).

And if developers may be willing to RTFM to use Linux (or any Unix) but not
to use VMS, well, that's a reality VMS has to accept (and adjust to if
feasible), 'cause it's behavior that's existed for the past decade-plus and
short of paying the entire software development community to attend VMS
familiarization classes there's no likelihood it's going to change any time
soon.

But in the particular area under discussion, the more important point is
that developers *don't* have to RTFM to get good file system performance out
of Unixes whereas they *do* on VMS.

> . Bill, I've been using VMS for over 16 years. I never saw VMS
> while I was in school BTW, it all came after I started working.
> In the last 2 years I have been spending significant time
> learning Linux. RingTFM is what I have to do. I do not find it
> intuitive as you would imply, but do see many similarities to
> VMS. I like Linux but do find that it has some funky features
> that require some learning and as many times as not they are
> more difficult to deal with than on VMS. I also find it to be no
> easier to learn than VMS, which was easy BTW. If we are to be
> dependent on SW developers that can only write code for the OS
> they saw in school we will never get anywhere will we?

That's exactly how VMS got into the position it enjoys today. And -
surprise! - the world still moves on, even if VMS doesn't keep pace with it
(in areas like acceptance and market share). And the systems that
developers encountered in school are (belatedly) starting to offer some of
the VMS features (first extent-based file systems, then asynchrony, albeit
sometimes limited, and most recently primitive clustering) that the market
actually seems to value.

> My point
> is that SW developers need to RTFM for ANY system they code on.
> If they don't we won't buy their SW will we?
>
> >
> > As for your other comment elsewhere, if VMS performance is 'good enough' for
> > you even if it doesn't match Unix performance, rejoice and be happy. But
> > don't assume that's 'good enough' for everyone, or even a majority -
> > especially given purchasing situations where VMS is typically the system
> > that must justify being considered against competition that sets the
> > expectations by virtue of its industry acceptance.
> >
>
> What my comment elsewhere said was that our VMS performance is
> AS GOOD as Unix due to the use of external controllers.

Not the comment I was referring to (11:39 P.M. EDT 6/2/00), which asserted
that EFC V2 performance (in the context of otherwise default environments)
would be 'good enough' (in general, not specifically for you - and since I
don't recall any other comment of yours to that effect, your confusion on
this point seems curious, though I do remember a post of yours some time
back, which I can't find in the recent ancestry of this thread, that
indicated you used hardware write-back caching). Perhaps what you mean is
that this was what you had in your head when you wrote (in response to the
statement of my own that precedes it) what I reproduce below:

---

> EFC V2 will
> likely still fall somewhat short of Unix default file system performance (at
> least for some of the better implementations, like SGI's)

As you are fond of saying about NT, "It may not be perfect but
it is good enough"

--
Keith Brown

---

> Note
> also that even though Unix does have a default performance edge
> on I/O we still chose to use HSZxx controllers on the Unix
> systems for reliability reasons as we did on VMS.

I'm curious what you mean by the above: write-back caching in stable memory
(vs. writing to disk) is purely a performance optimization; it has
absolutely nothing to do with reliability.

> There is no
> free lunch. What Unix gains in I/O performance it loses in
> reliability.

This is pure bullshit, and I'm getting tired of hearing it. Even if some
Unix write-back-cache implementations may have been buggy (hell, some still
may be - I have my doubts about Linux's ext2fs), that does not reflect any
limitation of the architecture. I'm not familiar enough with the full range
of implementations to state that any are as bug-free as ODS-2 likely is, but
there are certainly good ones out there (Veritas at least has an excellent
reputation, and its use of a log allows it to provide good performance for
synchronous writes as well).

Most applications do not depend for their integrity on writes making it to
disk immediately. For the few that do, Unix provides mechanisms to ensure
this behavior (or provide 'synch' points, which is an intermediate strategy
that can be a win for a third class of applications); for the rest, Unix
default mechanisms provide good performance with *no* decrease in
reliability.

About the only positive aspect of VMS's behavior is that an application that
*does* depend upon the ordering and timing of disk writes but *doesn't*
understand the fact that it depends on them may luck out and work correctly
after an interruption, whereas it may be less likely to on Unix. But I
don't think 'reliability' is the right word to apply to such a situation.

Go ahead, ask about the AdvFS restore we did a few
> months back after DU crashed before flushing the cache.

Submit a bug report and get on with your life: this is not a conceptual
deficiency, just an implementation error.

- bill

>
>
> --
> Keith Brown
> kbro...@usfamily.net

David A Froble

unread,
Jun 3, 2000, 3:00:00 AM6/3/00
to
Bill Todd wrote:
>
> Keith Brown <kbro...@usfamily.net> wrote in message
> news:39388193...@usfamily.net...
>
> ...
>
> > How difficult is this in PL/I. What can be simpler?
>
> 1) Using a language that the average developer (who, after all, is exactly
> the developer who *won't* do much if anything in the way of optimization) is
> likely to select.

Stuff a sock in it Bill. If you're talking about C, then your average developer
is a mistake just waiting to happen, so what does it matter how fast the mistake
occurs?

> 2) Making the desired behavior transparent rather than requiring that it be
> specified (though making it easy to obtain, albeit explicitly, is certainly
> better than keeping it truly obscure).

When I transitioned from RSTS/E to VMS in the late 70s, RSTS was perceived by
its users to be extremely user friendly, and VMS was just soooooo complicated.
I soon discovered why RSTS was so user friendly: there were few options, just
one way to do things. Fortunately, in many cases the developers made good
design decisions and the 'one way' was a rather good way. I soon found that if
I kept an open mind (I know, you'll doubt that) VMS was a much better
environment, because I, and every other user/programmer/designer, could choose
from many options, and if they chose well, the result would be a better
application.

Not everyone is running Unix design applications that benefit from file caching
on a single user workstation with lots of extra memory. 'Desired behavior'
isn't always easily defined.

Enjoy reading your posts.

Dave

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. Fax: 724-529-0596
170 Grimplin Road E-Mail: da...@tsoft-inc.com
Vanderbilt, PA 15486

David A Froble

unread,
Jun 3, 2000, 3:00:00 AM6/3/00
to
Bill Todd wrote:
>
> just an implementation error.
>
> - bill

Oh, I see. You're talking about Unix and C. :-)

David A Froble

unread,
Jun 3, 2000, 3:00:00 AM6/3/00
to
Larry Kilgallen wrote:

>
> In article <393977DD...@tsoft-inc.com>, David A Froble <da...@tsoft-inc.com> writes:
> > Bill Todd wrote:
> >>
> >> just an implementation error.
> >>
> >> - bill
> >
> > Oh, I see. You're talking about Unix and C. :-)
>
> No. Case-sensitive filenames are not an implementation error,
> they are an error of design (or lack thereof).

Sorry Larry, I wasn't explicit enough. What I meant was that the implementation
of Unix and C was an error. :-)

Dan Sugalski

unread,
Jun 3, 2000, 3:00:00 AM6/3/00
to
On Sat, 3 Jun 2000, David A Froble wrote:

> Larry Kilgallen wrote:
> >
> > In article <393977DD...@tsoft-inc.com>, David A Froble <da...@tsoft-inc.com> writes:
> > > Bill Todd wrote:
> > >>
> > >> just an implementation error.
> > >>
> > >> - bill
> > >
> > > Oh, I see. You're talking about Unix and C. :-)
> >
> > No. Case-sensitive filenames are not an implementation error,
> > they are an error of design (or lack thereof).
>
> Sorry Larry, I wasn't explicit enough. What I meant was that the implementation
> of Unix and C was an error. :-)

C'mon, that's not fair. Both Unix and C have quite a few very nice
features. That they're so badly applied (and have a number of rather
glaring design flaws) doesn't detract from those areas that they are good
at. Pity nobody ever redid either from scratch and got them right, but
it's not like VMS doesn't have its share of quirks. (Granted they're not
of the "let's open up my system because I have lousy security granularity
and a system that has no string handling capabilities" type, but they are
there...)

Dan

Larry Kilgallen

unread,
Jun 4, 2000, 3:00:00 AM6/4/00
to

Bill Todd

unread,
Jun 4, 2000, 3:00:00 AM6/4/00
to

David A Froble <da...@tsoft-inc.com> wrote in message
news:3939766D...@tsoft-inc.com...

> Bill Todd wrote:
> >
> > Keith Brown <kbro...@usfamily.net> wrote in message
> > news:39388193...@usfamily.net...
> >
> > ...
> >
> > > How difficult is this in PL/I. What can be simpler?
> >
> > 1) Using a language that the average developer (who, after all, is exactly
> > the developer who *won't* do much if anything in the way of optimization) is
> > likely to select.
>
> Stuff a sock in it Bill. If you're talking about C, then your average developer
> is a mistake just waiting to happen, so what does it matter how fast the mistake
> occurs?

Fortunately (or not), there are so many of those developers out there that
natural selection does an adequate (as defined by the market - your opinion
may differ) job of narrowing down the field to a still-large number of C
applications that work well enough (again, as defined by the market) to
define the performance standard by which VMS is unfortunately often judged.

The world really doesn't care about a group of self-styled (or even
occasionally real) cognoscenti who look down their noses at C and Unix,
because their numbers are too negligible to be noticed, let alone listened
to. But the world does notice how fast their applications (the ones that
work, hence become popular) run.

>
> > 2) Making the desired behavior transparent rather than requiring that it be
> > specified (though making it easy to obtain, albeit explicitly, is certainly
> > better than keeping it truly obscure).
>
> When I transistioned from RSTS/E to VMS in the late 70s, RSTS was perceived by
> it's users to be extremely user friendly, and VMS was just soooooo complicated.
> I soon discovered why RSTS was so user friendly. There were few options, just
> one way to do things. Fortunately, in many cases the developers made good
> design decisions and the 'one way' was a rather good way. I soon found that if
> I kept an open mind (I know, you'll doubt that) that VMS was a much better
> environment because I, and every other user/programmer/designer could choose
> from many options and if they choose well, the result would be a better
> application.

Since I came from an RSX environment, I never had to make that step, but
appreciate that it was a hard one for many RSTS people to take (I liked
RSTS, just thought of it as limited - but it had a wonderfully gung-ho and
experienced set of developers). But the next step (one that took me a long
time to take, and that you don't yet seem to have taken) is that of
appreciating that packaging up a full-function environment such that its
*default* choice of behaviors is as well-thought-out as RSTS's was (and
perhaps exposed in a simplified interface layer above the nitty-gritty
full-function layer) is just as important in making a full-function system
acceptable to the masses (who appear largely still to be in the place you
were before your VMS conversion occurred, and don't seem to have any
particular reason to make the kind of jump into complexity that you were
somewhat forced to make).

VMS's default choice of file system behavior wasn't at all bad in the
limited-memory, etc., environment of the late '70s, though it lacked any
sub-set interface level as approachable as RSTS's (or, some would say, the
10/20 environment). It still lacks such a layer (the POSIX environment
might have qualified, but my understanding is that it was limited both in
scope and in performance), and the file system/RMS defaults have become much
poorer choices over time (yes, there are compatibility issues - one way to
avoid them neatly would be to create a new, simpler interface layer and
limit the new defaults to that environment).

'Way back when, I thought of the choice between RSX and RSTS as an either/or
decision, and to some degree that was reasonable, 'cause in the 11
environment the overhead of creating multiple, layered personas for a system
was likely excessive. But that's not been true in the VMS environment for a
long time: you don't have to give up a single bit of the VMS you know and
love to make it more approachable (and more effective) for a much wider
audience than it currently enjoys.

>
> Not everyone is running Unix design applications that benefit from file caching
> on a single user workstation with lots of extra memory. 'Desired behavior'
> isn't always easily defined.

No, but Unix does a pretty good job of making sure that the most
commonly-desired behavior (good performance for the majority of applications
that just aren't concerned about *exactly* when writes occur) is the default
behavior, plus makes it easy to obtain intermediate but still good
performance for applications that only care about what's on disk at
particular points in their processing (by 'synching' - flushing - dirty data
to disk on a per-file basis), plus offers write-through operation when it's
needed.

The utility of this isn't limited to single-user workstations with lots of
memory (that's just one environment that David Mathog happens to be using as
an example): Unix caches (NT's too, I think) grow and shrink dynamically
according to overall system memory use - just like VMS's can (at least
that's planned for the EFC work, and may already exist in other VMS cache
areas) - and are one of the tools used by the system to get the best overall
performance out of whatever physical memory is available - just as VMS does.

The bottom line is that RMS exhibits default write-back behavior in its
internal buffers (so much for the 'better reliability' line of argument),
but just doesn't do so very effectively (default sizes are too small, and
default behavior may well be single-buffered, though I'm not certain of that
last). And RMS exhibits default bulk-read behavior also in its buffering,
but - again - not effectively compared to a system that uses larger buffers
and allows their contents to be shared automatically among multiple
processes (along with some access-pattern recognition for
read-ahead/write-back operation). We're not talking major semantic changes
here: Unix just does a better job (today - it might well not have been a
better choice 20+ years ago) of doing the things RMS is at least to some
extent already doing (plus some Unix implementations have file system
meta-data update optimizations that the VMS file system lacks).

>
> Enjoy reading your posts.

I hope someone does - wouldn't want this to be a complete waste of my time.

As I've intimated before, I take a hard line because my experience is that
if you give a VMS bigot an inch (an inch that they don't deserve, anyway: I
try to be fair) they'll grab it and use it to assert that your entire
argument has no basis and that VMS couldn't possibly be at a comparative
disadvantage in any area. And as I've said, that kind of bunker mentality
is somewhat understandable, but I really believe it's counter-productive:
in what medium if not this one should people explore how to improve areas
that hinder VMS's wider acceptance in the marketplace? And how can that
exploration (and one hopes resulting improvements) be anything but
beneficial both to VMS and to the people who depend on its continued
viability?

- bill

>
> Dave

Dave Weatherall

unread,
Jun 4, 2000, 3:00:00 AM6/4/00
to
On 1 Jun 2000 13:06:09, "Bill Todd" <bill...@foo.mv.com> wrote:

>
> and still provide me with the same
> > level of C2 security.
>
> C2 security is C2 security, it doesn't come in levels. Some Unixes provide
> it (don't happen to know which). Seems unlikely caching has much to do with
> it. Larry might know.

Sorry, I appear to have expressed myself badly. That should read 'same
level of security (we require C2).' I certainly wasn't trying to imply
it was a caching issue. Only that there is an overhead in some i/o
operations brought about by the security features.

Similarly, I suspect that some of the perceived slowness, especially
with David's small-file scenario, is actually not the record access at
all; instead I think the main cause is the file
creation/opening/closing overhead. We 'know' this is higher than on most
of the other OSes. Partly because of security, but mainly because it
is the cost of things we appreciate about VMS, even if it's only
backward compatibility.

After a two-day gap I appear to have lost my thread...

Cheers - Dave.

d.w...@mdx.ac.uk

unread,
Jun 4, 2000, 3:00:00 AM6/4/00
to
In article <8hbrd8$dr$1...@pyrite.mv.net>,


In the last few years the only systems developers have met in schools
are Microsoft systems.

It used to be that university courses would do a lot of programming on
our VMS and Unix systems. That has changed;
most programming is now done in a Windows environment.

Our VMS cluster is on Deccampus so some programming is still done with
the compilers provided by this. However the only compiler on our SUN
systems is gcc - it's free. Its main use: to build Apache.

Linux may change this but looking around at the moment I don't see
much sign of this.

So if developers only want to use the system they used at
school/university, then forget the DOJ: Microsoft has already won.

David Webb
VMS and Unix team leader
CCSS
Middlesex University


Sent via Deja.com http://www.deja.com/
Before you buy.

David A Froble

unread,
Jun 4, 2000, 3:00:00 AM6/4/00
to
Bill Todd wrote:
>
> cognoscenti

????? I'm laughing so hard I don't know whether or not to feel insulted. :-)

> But the next step (one that took me a long
> time to take, and that you don't yet seem to have taken) is that of
> appreciating that packaging up a full-function environment such that its
> *default* choice of behaviors is as well-thought-out as RSTS's was (and
> perhaps exposed in a simplified interface layer above the nitty-gritty
> full-function layer) is just as important in making a full-function system
> acceptable to the masses (who appear largely still to be in the place you
> were before your VMS conversion occurred, and don't seem to have any
> particular reason to make the kind of jump into complexity that you were
> somewhat forced to make).

Oh, I do appreciate such a system. It's what I and other developers apply on
top of VMS to make the final system user friendly. And with VMS, each system
can be 'tuned' to best support the particular application. Does it require some
rather extensive abilities to do this? You bet! That is the difference between
a user attempting to set up his own system, and the user paying a 'professional'
to set up a system that best fits the user's needs.

This type of 'service' is not unique. How many (and there are some) users start
with a pentium chip? With a compiler? With any of the stages of a computer
system as it's being put together. So, what we're then discussing is the
step/stage where the 'default' behavior of the system is set up.

I feel that VMS allows this 'default' behavior to be specified at a later step,
and thus allows more flexibility. The fact that some users bypass people like
myself and then cry because the default behavior isn't to their liking in no way
is the fault of the various operating systems. For each default behavior that
would benefit a particular user, there is an opposite default behavior that
benefits another user. Now before you start specifying the percentages of users
that one default would suit vs other defaults, let me state that that is not as
important as the capability of specifying specific behaviors, and while I'm not
all-knowing (gee, I didn't know THAT), it's been my experience that VMS is quite
good at allowing many differing behaviors. I haven't seen too much about not
allowing file caching on Unix, as one example, and no, I don't know Unix and
don't know if such is possible or not, just commenting on prior posts. Damn,
your tendency for long run-on sentences is contagious. :-)

> > Not everyone is running Unix design applications that benefit from file
> caching
> > on a single user workstation with lots of extra memory. 'Desired
> behavior'
> > isn't always easily defined.
>
> No, but Unix does a pretty good job of making sure that the most
> commonly-desired behavior (good performance for the majority of applications
> that just aren't concerned about *exactly* when writes occur) is the default
> behavior, plus makes it easy to obtain intermediate but still good
> performance for applications that only care about what's on disk at
> particular points in their processing (by 'synching' - flushing - dirty data
> to disk on a per-file basis), plus offers write-through operation when it's
> needed.

So, are you saying that 'the majority of applications' just aren't concerned
about *exactly* when writes occur, or that Unix provides good behavior for the
majority of applications that just aren't concerned about *exactly* when writes
occur? I'll ask for confirming statistics for the former, and for the latter,
wonder why it's not good for all such applications.

I notice your mention of offering write-through operations. Is this easily
configured?

> The utility of this isn't limited to single-user workstations with lots of
> memory (that's just one environment that David Mathog happens to be using as
> an example): Unix caches (NT's too, I think) grow and shrink dynamically
> according to overall system memory use - just like VMS's can (at least
> that's planned for the EFC work, and may already exist in other VMS cache
> areas) - and are one of the tools used by the system to get the best overall
> performance out of whatever physical memory is available - just as VMS does.

Hey, if you check prior posts, you'll not find me one of those "if VMS
doesn't have it, it's not needed" types. If there are good ideas out there,
then VMS should implement them such that the VMS implementation matches, if not
beats, the best examples. My feeling is that VMS is just the beginning, not the
ending.

> > Enjoy reading your posts.
>
> I hope someone does - wouldn't want this to be a complete waste of my time.
>
> As I've intimated before, I take a hard line because my experience is that
> if you give a VMS bigot an inch (an inch that they don't deserve, anyway: I
> try to be fair) they'll grab it and use it to assert that your entire
> argument has no basis and that VMS couldn't possibly be at a comparative
> disadvantage in any area. And as I've said, that kind of bunker mentality
> is somewhat understandable, but I really believe it's counter-productive:
> in what medium if not this one should people explore how to improve areas
> that hinder VMS's wider acceptance in the marketplace? And how can that
> exploration (and one hopes resulting improvements) be anything but
> beneficial both to VMS and to the people who depend on its continued
> viability?

I've always felt that constructive criticism is good. However, as a VMS bigot, I
feel that I deserve any inches that I want to grab. Nor do I have a bunker
mentality, but rather subscribe to the philosophy of counter-attack and take no
prisoners. So, while your 'hard line' is fine, be careful with assertions
without substantiating evidence unless the assertion is obviously true, and I
reserve the role of sole judge of such. :-)

Have a nice day.

Keith Brown

unread,
Jun 4, 2000, 3:00:00 AM6/4/00
to
Dan Sugalski wrote:
>
> On Sat, 3 Jun 2000, David A Froble wrote:
>
> > Larry Kilgallen wrote:
> > >
> > Sorry Larry, I wasn't explicit enough. What I meant was that the implementation
> > of Unix and C was an error. :-)
>
> C'mon, that's not fair. Both Unix and C have quite a few very nice
> features. That they're so badly applied (and have a number of rather
> glaring design flaws) doesn't detract from those areas that they are good
> at. Pity nobody ever redid either from scratch and got them right, but
> it's not like VMS doesn't have its share of quirks. (Granted they're not
> of the "let's open up my system because I have lousy security granularity
> and a system that has no string handling capabilities" type, but they are
> there...)
>
> Dan


Sorry Dan, I didn't mean to imply that Unix and C are bad and
do not have their place. There are obviously many things they do
very well. I was responding to Bill Todd's assertion that
because they are more popular they are always the right tool to
use and we should not consider a better way when there is one.

--
Keith Brown
kbro...@usfamily.net

Bill Todd

unread,
Jun 4, 2000, 3:00:00 AM6/4/00
to

Keith Brown <kbro...@usfamily.net> wrote in message
news:393A9D06...@usfamily.net...

...

> Sorry Dan, I didn't mean to imply that Unix and C are bad and
> do not have their place. There are obviously many things they do
> very well. I was responding to Bill Todd's assertion that
> because they are more popular they are always the right tool to
> use and we should not consider a better way when there is one.

I think you need to practice up on your parsing skills: at no point did I
say, or imply, anything of the kind. Rather, I said that since C/C++ *is*
in fact popular, saying that VMS needn't worry about it because you happen
to feel some other language is more appropriate is <pick your own pejorative
adjective>.

Bill Todd

unread,
Jun 4, 2000, 3:00:00 AM6/4/00
to

David A Froble <da...@tsoft-inc.com> wrote in message
news:393A934A...@tsoft-inc.com...

I was tempted to snip the above, but then figured that if I can run on at
the length I do, then I shouldn't be too quick to assume that when someone
else does it, it's not as relevant in its entirety.

The question never was that VMS didn't provide almost as extensive
facilities as anyone could need, the question was whether it provided the
particular facilities in question (involving efficient file access) as
accessibly as Unix does. (There's also the issue that it can't quite match
Unix central cache efficiency in some respects no matter how hard a third
party massages said facilities, but since you can get fairly close I won't
press this point.)

It's a bit disingenuous to say that third-party developers should be
responsible for providing suitable default file system behavior on VMS:
central system facilities exist because they either provide efficiencies
unobtainable on a per-application basis or provide inter-application
standardization (also avoiding unnecessary replication of code and coding
effort), and making their default behavior useful is the system's
responsibility, not the developer's. Glenn has pointed out (it may have
been privately) that third-party VMS caching products exist (demonstrating
the need, if the EFC effort under way didn't already), and that they suffer
from a lack of integration with the system.

Your assertion that VMS needn't bother with approachability (as long as it
provides full facilities, no matter how obscurely) because developers like
you will take care of it may be good from your point of view (because it
makes you pretty indispensable) but will not help VMS be competitive with
systems that are not so dependent on the dedication and competence of others
to be usable (especially for development) by mere mortals. And when there's
a dominantly popular example of such an approachable environment out there
that VMS could provide as a layer with supporting internal mechanisms,
arguing that third parties should create it on their own - possibly for easy
use by *fourth* parties developing simple end-user applications - is silly.

The former: that the majority of applications just aren't concerned about
*exactly* when writes - at least *most* writes - occur.

Since I'm a systems kind of guy who sees applications from their underbelly,
I'm not in a great position to provide examples. But you I suspect can.
Just how many of the write operations your applications perform have to make
it to disk before you move on to something else? Don't forget that
sequential file writes are often buffered up by RMS, just not all that
efficiently (by default). And don't include those writes that have to get
to disk by *some* known point: that's what Unix-style 'synch' operations
are for (though the system has often already performed the write by the time
the 'synch' is requested). (A lot of indexed file writes could be deferred
without any impact on persistence, let alone internal integrity, if RMS
employed logging mechanisms, but that's another discussion.)

Now you may say that the writes you are unnecessarily forcing to disk don't
slow you down enough to matter. But that's on a stand-alone system with one
user: put that user into a workgroup environment with a central server, and
those unnecessary writes start limiting that server's user capacity - as do
the unnecessary writes being performed by all the other applications in use
by the workgroup. Write-back caching isn't just a latency issue: it's a
throughput issue too.

>
> I notice your mention of offering write-through operations. Is this
easily
> configured?

While I'd prefer to see it available on a per-write-request basis (Veritas'
'discovered direct I/O' decides on a per-request basis, based on a size
threshold, whether to change a normally cached write into a 'direct' one),
it may well not be part of the standard C or even underlying system write
function (e.g., it's not part of Win32 WriteFile). But in practice, one can
just 'synch' the file immediately after any write that needs to be forced to
disk. It's likely also possible to specify on file open (or with an fcntl
operation) that all writes to a particular file should be written through,
but I'd have to scrounge around to find out.

>
> > The utility of this isn't limited to single-user workstations with lots of
> > memory (that's just one environment that David Mathog happens to be using as
> > an example): Unix caches (NT's too, I think) grow and shrink dynamically
> > according to overall system memory use - just like VMS's can (at least
> > that's planned for the EFC work, and may already exist in other VMS cache
> > areas) - and are one of the tools used by the system to get the best overall
> > performance out of whatever physical memory is available - just as VMS does.
>
> Hey, if you check prior posts, you'll not find me one of those who feel that "if
> VMS doesn't have it, it's not needed" types.

Yes, I did notice. You do seem to keep your ears open even while beating me
up.

If there are good ideas out there,
> then VMS should implement them such that the VMS implementation matchs if not
> beats the best examples.

My point exactly - except that in at least the most significant cases I
think VMS should try to match (if not beat) the best examples in
approachability as well as in performance.

- bill

Dave Weatherall

unread,
Jun 5, 2000, 3:00:00 AM6/5/00
to
Thanks for the explanation Ian. Much appreciated.

Cheers - Dave.

steven...@quintiles.com

unread,
Jun 5, 2000, 3:00:00 AM6/5/00
to

David Webb wrote:
>>>In the last few years the only systems developers have met in schools
are microsoft systems.

It used to be that University courses would do a lot of programming on
our VMS and unix systems. That changed.
Most programming is now done in a Windows Environment.

<trim>


So if developers only want to use the system they used at
school/university then forget the DOJ Microsoft has already won.<<<

Sad, but absolutely true I fear.

Besides, isn't this the reason that Wes Melling had a team of developers in
Scotland that worked on Windows to do their development work for VMS layered
products which the guys in the US then took and translated into VMS? This was
certainly how he described it back in 1998 at the UK DECUS conference. Wes
couldn't get enough VMS people to keep Digital going....

Steve.


Jan Vorbrueggen

unread,
Jun 5, 2000, 3:00:00 AM6/5/00
to
"IanPercival" <IanPe...@email.msn.com> writes:

> XFC V1.0 is about to enter the field test cycle. By the time it is
> released, it should have some other major performance features added -
> making it perform even better for users of medium to large machines.

We were also/mainly talking about small systems, e.g., workstations.
What about their improvements - you alluded to booting faster...

Jan

Alan Greig

unread,
Jun 5, 2000, 3:00:00 AM6/5/00
to
In article <802568F5.0...@qedilc01.qedi.quintiles.com>,
steven...@quintiles.com wrote:
>
> Besides, isn't this the reason that Wes Melling had a team of developers in
> Scotland that worked on Windows to do their development work for VMS layered
> products which the guys in the US then took and translated into VMS? This was
> certainly how he described it back in 1998 at the UK DECUS conference. Wes
> couldn't get enough VMS people to keep Digital going....

While there might be some truth in this I met several of the
VMS engineering staff from DEC Livingston (Scotland) during the
Spiralog development and they were extremely VMS positive - I
have all the t shirts to prove it. One day I phoned up and discovered
they'd all gone. Which is a pity because I could do with some
new VMS t-shirts, polo shirts, cotton bags etc...

I just wonder if the Livingston contingent were used as fall guys for
several hard decisions. It seems a strange way to sort out a problem of
lack of VMS talent by shutting down a group whose existence boosted
OpenVMS locally.

I recently met someone who still works for Compaq who was one
of the Scottish designers of the MicroVAX 3100 and a VMS bigot.
He has very little to do with VMS any more and I don't think this
is through choice.
--
Alan Greig

Wayne Sewell

unread,
Jun 5, 2000, 3:00:00 AM6/5/00
to
In article <3939B685...@tsoft-inc.com>, David A Froble <da...@tsoft-inc.com> writes:
> Larry Kilgallen wrote:
>>
>> In article <393977DD...@tsoft-inc.com>, David A Froble <da...@tsoft-inc.com> writes:
>> > Bill Todd wrote:
>> >>
>> >> just an implementation error.
>> >>
>> >> - bill
>> >
>> > Oh, I see. You're talking about Unix and C. :-)
>>
>> No. Case-sensitive filenames are not an implementation error,
>> they are an error of design (or lack thereof).
>
> Sorry Larry, I wasn't explicit enough. What I meant was that the implementation
> of Unix and C was an error. :-)

Boy, I hear that. I use C now, but I would never have learned it if I had not
been forced to by customers. I always wrote in Pascal if given a choice.
Admittedly, the original standard Pascal was pretty much useless, but VAX
Pascal (which has been renamed to DEC Pascal and then Compaq Pascal for obvious
reasons, i.e. Alpha) was *always* an industrial strength compiler. Back in my
E-Systems days, we wrote millions of lines of production code in it. People
complain about the pickiness of Pascal and the rigorous syntax, but if you can
get a program to compile, it will probably run. It catches many errors at
compile-time that manifest themselves at run-time when using C.

ANSI C is much better about that than the original Kernighan and Ritchie crap,
and DEC C better yet, but it still lets you shoot yourself in the foot on
occasion. Every now and then I still get bit by an error that would have been
impossible with Pascal.

Extended Pascal has lots of neat things that go far beyond the original
Standard Pascal and eliminate many of the things people complained about, such
as the rigidly defined character strings and such. I was pretty familiar with
it, since I was the DECUS representative for the ANSI/IEEE Joint Pascal Standards
Committee in the late eighties. Since John Reagan, the DEC Pascal guy, was
also on the committee (may still be, for all I know), many if not all of the
Extended Pascal features appeared in DEC Pascal fairly rapidly. Schema types,
for instance.

--
===============================================================================
Wayne Sewell, Tachyon Software Consulting (281)812-0738 wa...@tachysoft.xxx
http://www.tachysoft.xxx/www/tachyon.html and wayne.html
change .xxx to .com in addresses above, assuming you are not a spambot :-)
===============================================================================
Jake Blues: "Sell me your children! How much for the little girl?"

David A Froble

Jun 5, 2000, 3:00:00 AM
Wayne Sewell wrote:
>
> In article <3939B685...@tsoft-inc.com>, David A Froble <da...@tsoft-inc.com> writes:
> > Larry Kilgallen wrote:
> >>
> >> In article <393977DD...@tsoft-inc.com>, David A Froble <da...@tsoft-inc.com> writes:
> >> > Bill Todd wrote:
> >> >>
> >> >> just an implementation error.
> >> >>
> >> >> - bill
> >> >
> >> > Oh, I see. You're talking about Unix and C. :-)
> >>
> >> No. Case-sensitive filenames are not an implementation error,
> >> they are an error of design (or lack thereof).
> >
> > Sorry Larry, I wasn't explicit enough. What I meant was that the implementation
> > of Unix and C was an error. :-)
>
> Boy, I hear that. I use C now, but I would never have learned it if I had not
> been forced to by customers. I always wrote in Pascal if given a choice.
> Admittedly, the original standard Pascal was pretty much useless, but VAX
> Pascal (which has been renamed to DEC Pascal and then Compaq Pascal for obvious
> reasons, i.e. Alpha) was *always* an industrial strength compiler. Back in my
> E-Systems days, we wrote millions of lines of production code in it. People
> complain about the pickiness of Pascal and the rigorous syntax, but if you can
> get a program to compile, it will probably run. It catches many errors at
> compile-time that manifest themselves at run-time when using C.

Exactly! This is what makes C inappropriate for applications development. It
will let a coding mistake get by that most other compilers will catch. You
quickly see the mistake, correct it, and get on with the job. In C the error is
allowed into the executable, it causes a problem and many hours are spent
finding and fixing a small typo, or it doesn't cause a problem at that time, and
gets released into production, and can do great harm before it's known to exist.

The syntax checking in a compiler exists for one good reason. A computer will
always be better at finding such errors than a programmer reading code. Why do
we use computers if not to perform such tasks better and faster?

Dan Sugalski

Jun 5, 2000, 3:00:00 AM
At 11:59 AM 6/5/00 -0400, David A Froble wrote:
>Exactly! This is what makes C inappropriate for applications development.

C was never designed as an app development language, and it's a pretty
pathetic one. It lacks the features you'd usually want in an application
development language, and its standard library's badly suited for it as
well. C's a low-level systems language, meant for writing compilers, device
drivers, OS kernel bits, and suchlike things, and it's not bad at that. It
leans a bit too heavily on the speed side of the speed/safety tradeoff for
my tastes, but it was developed on hardware that is, at this point,
terribly slow.

Don't knock C for being a bad app language, since it isn't one. Knock the
twits that insist on using it as an app language--that's where the blame
rightly lies.

(I'm still not particularly fond of the language, mind, but its biggest
problem is its mis-application, not any fundamental design flaw)

Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski even samurai
d...@sidhe.org have teddy bears and even
teddy bears get drunk

Rob Young

Jun 5, 2000, 3:00:00 AM

Jan,

From Ian's post ... it may be that he won't be back. But to
speculate on what he meant by "medium to large" may have to
do with memory sizes. After all, a typical workstation today
may have 256 MByte of memory and there isn't a whole lot you
can do with caching there if you have 60 MBytes free after running
up a serious application or two.

Booting faster? Since he also mentioned (or the URL does) read-ahead
caching on sequential files, perhaps that is the boost. PFCDEFAULT
helps grab bigger chunks .. but in a workstation case one could
speculate that read-aheads on DecWindows images (also reading in
larger images for install) would help a bit there.

Rob


David A Froble

Jun 5, 2000, 3:00:00 AM
Dan Sugalski wrote:
>
> At 11:59 AM 6/5/00 -0400, David A Froble wrote:
> >Exactly! This is what makes C inappropriate for applications development.
>
> C was never designed as an app development language, and it's a pretty
> pathetic one. It lacks the features you'd usually want in an application
> development language, and its standard library's badly suited for it as
> well. C's a low-level systems language, meant for writing compilers, device
> drivers, OS kernel bits, and suchlike things, and it's not bad at that. It
> leans a bit too heavily on the speed side of the speed/safety tradeoff for
> my tastes, but it was developed on hardware that is, at this point,
> terribly slow.
>
> Don't knock C for being a bad app language, since it isn't one. Knock the
> twits that insist on using it as an app language--that's where the blame
> rightly lies.

Well, ya got me there, cause that's exactly the real problem. Unfortunately,
you cannot mandate intelligence. I guess you go with the momentum and maybe
enhance the language until it does a better job of taking care of the
applications programmers that insist on using it. Then the systems programmers
will bitch cause they lose flexibility.

> (I'm still not particularly fond of the language, mind, but its biggest
> problem is its mis-application, not in any fundamental design flaws)

Dave

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. Fax: 724-529-0596

DFE Ultralights, Inc. E-Mail: da...@tsoft-inc.com
T-Soft, Inc. 170 Grimplin Road Vanderbilt, PA 15486


Wayne Sewell

Jun 6, 2000, 3:00:00 AM
In article <4.3.2.7.0.200006...@24.8.96.48>, Dan Sugalski <d...@sidhe.org> writes:
> At 11:59 AM 6/5/00 -0400, David A Froble wrote:
>>Exactly! This is what makes C inappropriate for applications development.
>
> C was never designed as an app development language,

Actually, it was never designed at all, as far as I can tell.

>and it's a pretty
> pathetic one. It lacks the features you'd usually want in an application
> development language, and its standard library's badly suited for it as
> well. C's a low-level systems language, meant for writing compilers, device
> drivers, OS kernel bits, and suchlike things, and it's not bad at that.


I would amend that to say it's meant for writing such stuff *in a hacker
fashion*. (I am of course using the term "hacker" in the original sense;
nothing to do with malicious intent.) While C does have low level access for
stuff, this is not to say that strongly-typed languages do not. I have written
compilers in Pascal and also kernel mode code. You have to be careful with
what you do and make sure that you don't reference any RTL routines, but this
would apply to C as well. I would say that Bliss can do *anything* that C can,
and probably more, but will tend to prevent stupid typos from crashing the
system.

I would say that using a real language is *more* important for kernel mode
stuff than for applications, because the cost of failure is so much greater,
i.e. crashing the entire system instead of just the application.


Speaking of compilers, they are no different from applications from the
viewpoint of the system and the machine code. They execute in user mode, read
an input file, and create an output file. You can write compilers in a
strongly typed language as easily as in C.

C was intended for people who don't want the compiler to "get in the way" and
prefer that it let them do anything, whether correct code is generated or not.
Strangely, they prefer to defer their errors to run time, where they are much
harder to diagnose. Sure the compile is faster. Good thing, because you have
to compile so many more times to correct errors that would have been trapped in
the first compile by Pascal or the Countess (see below).

Note that we are talking about differences in *compilation* speed, not
*execution* speed of the generated code. Depending on the compiler, a
strongly-typed language can generate code as good as or better than a sloppy
one such as C. Most of it depends on the back end. Admittedly, it may be
possible to generate slightly tighter code with a simpler (more primitive)
language such as C, but an industrial strength back end such as GEM makes up
for this to a large extent. Both Compaq C and Compaq Pascal use GEM on alpha.
I would be surprised if there is that much difference in the generated machine
code for a given algorithm.

>It
> leans a bit too heavily on the speed side of the speed/safety tradeoff for
> my tastes, but it was developed on hardware that is, at this point,
> terribly slow.

That's an excuse for the original sad state of C, but not for C today. The C
standards committee had a chance to fix this stuff. Admittedly, they did
improve the language somewhat, since ANSI C will at least do *some* checking,
unlike the K & R shit, but C still lets a lot of stuff slip through.


Admittedly, there *is* a run time performance hit for stuff such as array
bounds checking. This would fall into the speed/safety tradeoff mentioned
above. Seems like 95 percent of the C run time errors I have found are
related to exceeding a local stack-based string variable and wildwriting into
the stack. At least in Pascal you have the option of checking array bounds
during development, then you can turn it off when the program is released for
production.


>
> Don't knock C for being a bad app language, since it isn't one.

I consider it a bad language for any purpose. But I'm stuck with it.
Unfortunately, it's been so long since I've used Pascal that it would take time
to come back up to speed on it.

>Knock the
> twits that insist on using it as an app language--that's where the blame
> rightly lies.

Agreed.

> (I'm still not particularly fond of the language, mind, but its biggest
> problem is its mis-application, not in any fundamental design flaws)

Disagreed. It was fine for the hacker days when people were just playing with
computers, but not for production systems in *any* role. Admittedly, eunuchs
systems don't have anything to replace it for kernel code, but vms does.


The Countess referred to above is Augusta Ada Byron, Countess of Lovelace,
daughter of Lord Byron. Also known as just Ada. :-)

Arne Vajhøj

Jun 6, 2000, 3:00:00 AM
to David A Froble
David A Froble wrote:

> Wayne Sewell wrote:
> > Boy, I hear that. I use C now, but I would never have learned it if I had not
> > been forced to by customers. I always wrote in Pascal if given a choice.
> > Admittedly, the original standard Pascal was pretty much useless, but VAX
> > Pascal (which has been renamed to DEC Pascal and then Compaq Pascal for obvious
> > reasons, i.e. Alpha) was *always* an industrial strength compiler. Back in my
> > E-Systems days, we wrote millions of lines of production code in it. People
> > complain about the pickiness of Pascal and the rigorous syntax, but if you can
> > get a program to compile, it will probably run. It catches many errors at
> > compile-time that manifest themselves at run-time when using C.
>
> Exactly! This is what makes C inappropriate for applications development. It
> will let a coding mistake get by that most other compilers will catch. You
> quickly see the mistake, correct it, and get on with the job. In C the error is
> allowed into the executable, it causes a problem and many hours are spent
> finding and fixing a small typo, or it doesn't cause a problem at that time, and
> gets released into production, and can do great harm before it's known to exist.
>
> The syntax checking in a compiler exists for one good reason. A computer will
> always be better at finding such errors than a programmer reading code. Why do
> we use computers if not to perform such tasks better and faster?

C is one of the most widely used languages today. It certainly has some
strong points, but it also has some weak points. It is not good at
finding coding bugs at compile time.

But we need to distinguish between:

1) the defects in the C language as defined in the ANSI C standard

2) the bad practices common among C programmers

It is my experience that #2 is more important than #1 in practice.

* there is a tradition among C programmers for using very short
names and it is generally considered good to write code in the
most compact style

* the compiler switches are used to make it easier for the programmers
to get a clean build instead of as a tool to improve code quality
(CC/STANDARD=ANSI/WARNING=ENABLE=ALL are a lot different from
CC/STANDARD=VAXC/WARNING=DISABLE=ALL !)

* many programmers today have learned C as their first language, which
  often results in very poor programs - programmers that have learned
  Pascal or Ada usually write better C programs, because they have
  been educated by a strict compiler

Arne

Dan Sugalski

Jun 6, 2000, 3:00:00 AM
At 10:39 AM 6/6/00 -0500, Wayne Sewell wrote:
>In article <4.3.2.7.0.200006...@24.8.96.48>, Dan Sugalski
><d...@sidhe.org> writes:
> > At 11:59 AM 6/5/00 -0400, David A Froble wrote:
> >>Exactly! This is what makes C inappropriate for applications development.
> >
> > C was never designed as an app development language,
>
>Actually, it was never designed at all, as far as I can tell.

I'd disagree. You (and I) may not agree with the design decisions, but it
wasn't thrown together.

> >and it's a pretty
> > pathetic one. It lacks the features you'd usually want in an application
> > development language, and its standard library's badly suited for it as
> > well. C's a low-level systems language, meant for writing compilers,
> device
> > drivers, OS kernel bits, and suchlike things, and it's not bad at that.
>
>
>I would amend that to say it's meant for writing such stuff *in a hacker
>fashion*. (I am of course using the term "hacker" in the original sense;
>nothing to do with malicious intent.) While C does have low level access
>for stuff, this is not to say that strongly-typed languages do not. I
>have written compilers in Pascal and also kernel mode code. You have to
>be careful with what you do and make sure that you don't reference any RTL
>routines, but this would apply to C as well. I would say that Bliss can
>do *anything* that C can, and probably more, but will tend to prevent
>stupid typos from crashing the system.

Pascal (at least a useful implementation of it) is newer than C. C is an
old language, relatively speaking, and reflects the systems available at
the time. It was meant to be a simple language that you could write tight
code in, and one whose compiler was trivial (relatively speaking) to implement.

>I would say that using a real language is *more* important for kernel mode
>stuff than for applications, because the cost of failure is so much greater,
>i.e. crashing the entire system instead of just the application.

Now, sure. But now we've got cycles and megabytes up the wazoo. Not the
case when C was written, certainly not on the systems that K&R had handy
when they wrote it.

>Speaking of compilers, they are no different from applications from the
>viewpoint of the system and the machine code. They execute in user mode,
>read an input file, and create an output file. You can write compilers in a
>strongly typed language as easily as in C.

Sure, as far as the system is concerned it's all the same, and C's not
particularly great as a compiler language. (Oddly enough, except for the
speed issue I'd rather write a compiler in perl.)

>C was intended for people who don't want the compiler to "get in the way" and
>prefer that it let them do anything, whether correct code is generated or
>not. Strangely, they prefer to defer their errors to run time, where they
>are much harder to diagnose. Sure the compile is faster. Good thing,
>because you have to compile so many more times to correct errors that
>would have been trapped in the first compile by Pascal or the Countess
>(see below).

Now, yes. At the time C was created, no, since Pascal didn't exist. C's a
step up from assembly language, and a half-step down from most of the ALGOL
derivatives. For what it did, it beat ALGOL, COBOL, or Fortran. (And B, I
assume, though I've never used it)

>Note that we are talking about differences in *compilation* speed, not
>*execution* speed of the generated code.

*Now*, yes. When C was created there *were* no optimizing compilers to
speak of. They didn't come along for quite some time, like a decade or more.

>Depending on the compiler, a
>strongly-typed language can generate code as good as or better than a sloppy
>one such as C. Most of it depends on the back end. Admittedly, it may be
>possible to generate slightly tighter code with a simpler (more primitive)
>language such as C, but an industrial strength back end such as GEM makes up
>for this to a large extent.

Unoptimized C's tighter, but optimized is likely less tight.

>Both Compaq C and Compaq Pascal use GEM on alpha.
>I would be surprised if there is that much difference in the generated
>machine code for a given algorithm.

I wouldn't. I'd expect the Pascal code to be faster. Courtesy of pointers,
optimising C's a pain. (The Fortran compiler folks have commented on the
ease of optimizing Fortran vs C before too)

> >It
> > leans a bit too heavily on the speed side of the speed/safety tradeoff for
> > my tastes, but it was developed on hardware that is, at this point,
> > terribly slow.
>
>That's an excuse for the original sad state of C, but not for C today.

But the C of today is based on the C from the early '70s. It has to be,
otherwise it's a different language. (Probably a better one, but that's
neither here nor there)

>The C
>standards committee had a chance to fix this stuff.

No, they didn't, unfortunately. Adding in some of the fixes (like a real
string type) would've busted backwards compatibility. Politics played a
role too, of course, but it always does when people are involved.

>Admittedly, they did
>improve the language somewhat, since ANSI C will at least do *some* checking,
>unlike the K & R shit, but C still lets a lot of stuff slip through.

Yep, it does, and that's on purpose. FWIW, Dec C does check for a lot of
stuff (like array bounds) if you ask it to, though there's a limit to what
it can do when pointer casts are an integral part of the language.

>Admittedly, there *is* a run time performance hit for stuff such as array
>bounds checking. This would fall into the speed/safety tradeoff mentioned
>above. Seems like 95% percent of the C run time errors I have found are
>related to exceeding a local stack-based string variable and wildwriting into
>the stack. At least in Pascal you have the option of checking array bounds
>during development, then you can turn it off when the program is released for
>production.

A /CHECK=BOUNDS on Dec C 6.0 and up will do runtime array bounds checking
at the expense of some speed.

> > Don't knock C for being a bad app language, since it isn't one.
>
>I consider it a bad language for any purpose. But I'm stuck with it.
>Unfortunately, it's been so long since I've used Pascal that it would take
>time to come back up to speed on it.

The original Pascal sucked too, though.

I'll admit that at this point I've found very few computer languages worth
much. I'm fond of perl and Forth, and Python looks reasonably nice though
at this point if a language makes me cater to the machine I don't much care
for it. I've got CPU power and memory to burn, and the languages can damn
well cater to *me*, thank you very much. :)

> >Knock the
> > twits that insist on using it as an app language--that's where the blame
> > rightly lies.
>
>Agreed.
>
> > (I'm still not particularly fond of the language, mind, but its biggest
> > problem is its mis-application, not in any fundamental design flaws)
>
>Disagreed. It was fine for the hacker days when people were just playing
>with computers, but not for production systems in *any* role.

Unix systems never were production machines, really. More CS academic toys
that got pumped up with steroids and put into production use.

>Admittedly, eunuchs systems don't have anything to replace it for kernel
>code, but vms does.

Sure. Now all we need are programmers who can use a language other than C
with any facility. :(

I'd also like to see something developed other than BLISS. I'm sure we can
do better these days.

Christopher Smith

Jun 6, 2000, 3:00:00 AM
On Tue, 6 Jun 2000, Dan Sugalski wrote:

> The original Pascal sucked too, though.

The bytecode was an interesting idea, though.

> >Admittedly, eunuchs systems don't have anything to replace it for kernel
> >code, but vms does.

> Sure. Now all we need are programmers who can use a language other than C
> with any facility. :(

I'd be perfectly happy to develop a large chunk of my code in pascal (or
modula-N, where N is some positive integer...), but unfortunately I
haven't been able to talk my employer into springing for a compiler -- if
I can even find a decent compiler to run on this billybox trash that I'm
writing apps for.. ;)

> I'd also like to see something developed other than BLISS. I'm sure we can
> do better these days.

I'd like to see some of the better things that we've got get more use...
once everyone's using something sane, in a sane manner, then we can worry
about making new, better languages.

It's a real shortcoming of most "programmers" that they have no idea how
to pick the best language for a given task, and a shortcoming of most
users that they can't pick the right computer for a given task...

That's life.

Regards,

Chris

===============================================================================
"My two cents" (http://rootworks.com/twocentsworth.cgi?128562)
Christopher Smith(chr...@pubserv.com) Prgramer^W Programmer
Prime Synergy of Champaign, IL.
-------------------------------------
"Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and
weighs 30 tons, computers in the future may have only 1,000 vacuum tubes
and weigh only 1.5 tons." -- Popular Mechanics, March 1949
-------------------------------------------------------------------------------

JF Mezei

Jun 6, 2000, 3:00:00 AM
Wayne Sewell wrote:
> Actually, it was never designed at all, as far as I can tell.

C perhaps not. But ANSI-C yes. They put all sorts of pedantic restrictions on
it, such as the inability to move an unsigned char to a signed char and vice
versa without a complaint from the compiler (or the use of the /unsigned
compiler switch).

> C was intended for people who don't want the compiler to "get in the way" and
> prefer that it let them do anything, whether correct code is generated or not.

But ANSI-C changed that by being much more pedantic at compile time. If you
want to do your own thing, you really have to try hard by type casting stuff
and using memcpy instead of the = sign.

Dann Corbit

Jun 6, 2000, 3:00:00 AM
OpenVMS is written mostly in C. I think that qualifies as a piece of robust
code.

Crappy code comes in all flavors. Fortran, Pascal, Ada, you name it.

C programmers care just as much about correctness as any other sort of
programmer, if not more [because they are forced to].

Bashing a language is like bashing a race of people or a religion. "It's
different than what I am or what I like and therefore inferior."

Sheesh.
--
C-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
"The C-FAQ Book" ISBN 0-201-84519-9
C.A.P. Newsgroup http://www.dejanews.com/~c_a_p
C.A.P. FAQ: ftp://38.168.214.175/pub/Chess%20Analysis%20Project%20FAQ.htm


Hoff Hoffman

Jun 6, 2000, 3:00:00 AM

In article <vLf%4.390$tV4.42@client>, "Dann Corbit" <dco...@solutionsiq.com> writes:
:OpenVMS is written mostly in C.

Hmmm. OpenVMS is a mix of a variety of languages, and I don't think I'd
go as far as the use of "most" in the context of that sentence.

While a fair percentage of the new OpenVMS code is written in C, a very
sizeable chunk of OpenVMS is in Bliss, Macro32, and other languages.

I have a recollection that the last time I ran a module language count on
the master packs, the big three (C, Macro, and Bliss) each comprised roughly
a third of the total number of modules. (This count did not include the
"barrage" of other languages used within OpenVMS. eg: PL/I, M64, SDL, DCL,
Fortran, C++, Ada, etc.)

That said, one of the salient differences around the use of C within
OpenVMS itself is the heavy use of descriptors (ASCID). While ASCIZ
null-terminated strings are used, most of the system APIs use ASCID,
with a few ASCIC and type-length-vector (itemlist) constructs around.
While ASCID and ASCIC are of course not a certain cure for the usual
sorts of buffer overruns seen, I've found more buffer-related problems
with ASCIZ constructs than with either ASCID or ASCIC. And obviously,
most other C platforms do not use the ASCID construct.

There are also organizational software development norms, and these
can make it easier or harder for bugs to exist -- bad code can be
written in any language.

--------------------------- pure personal opinion ---------------------------
Hoff (Stephen) Hoffman OpenVMS Engineering hoffman#xdelta.zko.dec.com


Dann Corbit

Jun 6, 2000, 3:00:00 AM
"Peter LANGSTOEGER" <ep...@kapsch.net> wrote in message
news:393d...@news.kapsch.co.at...

> In article <vLf%4.390$tV4.42@client>, "Dann Corbit"
<dco...@solutionsiq.com> writes:
> >OpenVMS is written mostly in C.
>
> Did you check it with the VMS source listing CDs ?
> I got the impression by people who have this listings (not me, sorry),
> that this is BULLSHIT.

Really. The Alpha required a rewrite so that the old MACRO had to be
ported.

> > I think that qualifies as a piece of
robust
> >code.
>

> No. Only newer pieces are written in C and they appear to be much more
> flaky than earlier VMS pieces (written in BLISS, ...)
>
> Look at PATHWORKS. Over 200 system crashes the last 2 years here...

Not my fault. I didn't write it.
;-)

> >Crappy code comes in all flavors. Fortran, Pascal, Ada, you name it.
> >
> >C programmers care just as much about correctness as any other sort of
> >programmer, if not more [because they are forced to].
> >
> >Bashing a language is like bashing a race of people or a religion. "It's
> >different than what I am or what I like and therefore inferior."
>

> Maybe. But the percentage (not count) of crappy C programs is WAY higher
> than with any other programming language.

Produce some statistics or admit that you are completely full of crap and
just spouting without a grain of knowledge. The Y2K problems (for instance)
were mostly COBOL. The greatest amount of defects is COBOL, but that is
largely because it is also the greatest volume of code. I have a feeling
you are a blowhard. Legacy programmers fear C programmers because they are
afraid of Unix (Tru64 included).

Defects per KLOC is pretty much a universal constant unless incredible
effort is made to ensure quality (E.g. space shuttle efforts, and things of
that ilk.)

> And so, I see it this way:
>
> The worst things win (like TCPIP vs OSI, VHS vs BETAMAX/Video2000, SGML vs
> ODA, M$ vs the rest, and so on)

Worst things win if they are more cost effective. People vote with their
wallets. Better sometimes costs more, for whatever reason. That does not
mean that the people who chose something inferior were stupid. In fact,
those who bought BetaMax instead of VHS, those who installed OSI instead of
TCP/IP, and those who invested in ODA are the ones with the cat in the bag.

Dan Sugalski

Jun 6, 2000, 3:00:00 AM
On Tue, 6 Jun 2000, Dann Corbit wrote:

> OpenVMS is written mostly in C. I think that qualifies as a piece of robust
> code.

Actually no, it's not. It's mostly in BLISS and Macro-32. Some of the
newer (and, alas, buggier) stuff is in C.

C, unfortunately, doesn't mesh well with VMS at a system level any better
than it does at a user-code level, and C has a number of unfortunate
features that make it less safe as a system language than BLISS, amongst
other things. It's got the safety of Macro coupled with the feeling of
safety of Pascal. A dangerous combo.



> C programmers care just as much about correctness as any other sort of
> programmer, if not more [because they are forced to].

Right, but being forced to care means that if you miss, you lose. Computer
languages are supposed to help the programmer avoid making mistakes. C
requires the use of some constructs that are terribly error-prone.



> Bashing a language is like bashing a race of people or a religion. "It's
> different than what I am or what I like and therefore inferior."

Believe me, I've plenty of experience in C. It's an OK language, but it's
always second best (at best). For any application you can find a better
language.

Dan

JF Mezei

Jun 6, 2000, 3:00:00 AM
"Brian Schenkenberger, VAXman-" wrote:
> I fear C programmer because of the shit that they typically produce. I
> code with C -- when I have too -- and take an extra effort to be certain
> that it is:


Code is only as good as the programmer (or the tool that produces it).

Is COBOL more readable than C? Depends who wrote it. I once used a tool that
was supposed to simplify everything and generate perfect COBOL code. I ended up
having to wait for the floor to be empty, stretch the listing to span the
whole floor, and literally had to run through the code to find out what it was
doing, due to all the weird paragraph names etc etc. Of course, there was CICS
code in there too (which goes through a pre-processor to be converted into
weirder function calls).

Any code in any language can be made unreadable. Some programmers think that
fancy/complex C constructs result in faster programs, or impress their bosses
with fancy routines. But once compiled with a good compiler, I am not sure it
makes much of a difference.

JF Mezei

Jun 6, 2000, 3:00:00 AM
Dan Sugalski wrote:
> Right, but being forced to care means that if you miss, you lose. Computer
> languages are supposed to help the programmer avoid making mistakes. C
> requires the use of some constructs that are terribly error-prone.

requires ? ? ? ? ? ?

Perhaps you mean "C code that can be ported to Unix" or "C code that was
written on Windows". But I disagree that "C" by itself "requires" dangerous constructs.

Dan Sugalski

Jun 6, 2000, 3:00:00 AM
On Tue, 6 Jun 2000, JF Mezei wrote:

> Dan Sugalski wrote:
> > Right, but being forced to care means that if you miss, you lose. Computer
> > languages are supposed to help the programmer avoid making mistakes. C
> > requires the use of some constructs that are terribly error-prone.
>
> requires ? ? ? ? ? ?

Close enough as to make no difference. The two big ones are pointers and
null-terminated character arrays masquerading as strings.

> Perhaps you mean "C code that can be ported to Unix" or "C code that
> was written on Windows". But I disagree that "C" by itself "requires"
> dangerous constructs.

No, base C. It's certainly possible to work around it (witness both Perl
and Python, both written in C and avoiding the pitfalls) but it requires a
lot of work and discipline. Try writing code of any complexity without
using pointers or null-terminated character arrays. It pretty much makes
the language useless.

Dan

Wayne Sewell

Jun 6, 2000, 3:00:00 AM

I agree that the ANSI version is better. I just think it should have gone
farther.

John Reagan

unread,
Jun 6, 2000, 3:00:00 AM6/6/00
to
Dan Sugalski wrote:

> >Both Compaq C and Compaq Pascal use GEM on alpha.
> >I would be surprised if there is that much difference in the generated
> >machine code for a given algorithm.
>
> I wouldn't. I'd expect the Pascal code to be faster. Courtesy of pointers,
> optimising C's a pain. (The Fortran compiler folks have commented on the
> ease of optimizing Fortran vs C before too)
>

Until recently, I'd agree. The generated Pascal code often was a little
tighter, since C's pointers hinder optimization, as you mentioned. Pascal
has its own "problems" for optimizing, namely VAR parameters and uplevel
referencing, but they are not as widespread as pointers in C and not quite
as hard to analyze...

However, the folks on the C compiler have spent lots of time over the
past couple of years improving the pointer alias analysis. Given the
current version of Compaq C vs Compaq Pascal, it might be a dead heat,
or even the C compiler winning over the Pascal compiler. I haven't had the
"pressure" of getting good benchmark numbers like the C and Fortran
folks get.

--
John Reagan
Compaq Pascal Project Leader

P.S. Hi Wayne!

Larry Kilgallen

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to

> I'd be perfectly happy to develop a large chunk of my code in pascal (or
> modula-N, where N is some positive integer...), but unfortunately I
> haven't been able to talk my employer into springing for a compiler -- if
> I can even find a decent compiler to run on this billybox trash that I'm
> writing apps for.. ;)

Try Delphi. Inside it is based on Pascal.

Peter LANGSTOEGER

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to
In article <vLf%4.390$tV4.42@client>, "Dann Corbit" <dco...@solutionsiq.com> writes:
>OpenVMS is written mostly in C.

Did you check it with the VMS source listing CDs ?


I got the impression from people who have these listings (not me, sorry)
that this is BULLSHIT.

> I think that qualifies as a piece of robust
>code.

No. Only newer pieces are written in C and they appear to be much more
flaky than earlier VMS pieces (written in BLISS, ...)

Look at PATHWORKS. Over 200 system crashes the last 2 years here...

>Crappy code comes in all flavors. Fortran, Pascal, Ada, you name it.
>


>C programmers care just as much about correctness as any other sort of
>programmer, if not more [because they are forced to].
>

>Bashing a language is like bashing a race of people or a religion. "It's
>different than what I am or what I like and therefore inferior."

Maybe. But the percentage (not count) of crappy C programs is WAY higher
than with any other programming language. And so, I see it this way:

The worst things win (like TCPIP vs OSI, VHS vs BETAMAX/Video2000, SGML vs
ODA, M$ vs the rest, and so on)

--
Peter "EPLAN" LANGSTOEGER Tel. +43 1 81111-2651
Network and OpenVMS system manager Fax. +43 1 81111-888
FBFV/Information Services E-mail ep...@kapsch.net
<<< KAPSCH AG Wagenseilgasse 1 PSImail PSI%(0232)281001141::EPLAN
A-1121 VIENNA AUSTRIA "I'm not a pessimist, I'm a realist"
"VMS is today what Microsoft wants Windows NT V8.0 to be!" Compaq, 22-Sep-1998

Hoff Hoffman

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to

In article <6dg%4.393$tV4.185@client>, "Dann Corbit" <dco...@solutionsiq.com> writes:
:"Peter LANGSTOEGER" <ep...@kapsch.net> wrote in message
:news:393d...@news.kapsch.co.at...
:> In article <vLf%4.390$tV4.42@client>, "Dann Corbit"

:<dco...@solutionsiq.com> writes:
:> >OpenVMS is written mostly in C.
:>
:> Did you check it with the VMS source listing CDs ?
:> I got the impression by people who have this listings (not me, sorry),
:> that this is BULLSHIT.
:
:Really. The Alpha required a rewrite so that the old MACRO had to be
:ported.

Um, the Macro32 compiler was used. OpenVMS Alpha contains an extensive
amount of VAX Macro32 code -- what was once processed by the VAX assembler
is now processed by an Alpha compiler. The existing Macro32 code base was
NOT rewritten "wholesale".

Brian Schenkenberger, VAXman-

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to
In article <6dg%4.393$tV4.185@client>, "Dann Corbit" <dco...@solutionsiq.com> writes:
>"Peter LANGSTOEGER" <ep...@kapsch.net> wrote in message
>news:393d...@news.kapsch.co.at...
>> In article <vLf%4.390$tV4.42@client>, "Dann Corbit"
><dco...@solutionsiq.com> writes:
>> >OpenVMS is written mostly in C.
>>
>> Did you check it with the VMS source listing CDs ?
>> I got the impression by people who have this listings (not me, sorry),
>> that this is BULLSHIT.
>
>Really. The Alpha required a rewrite so that the old MACRO had to be
>ported.

Sorry Dann but the truth is that the "old MACRO" that had to be ported
was done by making a MACRO compiler for the Alpha. A few minor changes
to the sources and the MACRO32 assembler code runs as native Alpha code!

It seems to me that your 'solutionsiq' isn't very high!



>Produce some statistics or admit that you are completely full of crap and
>just spouting without a grain of knowledge. The Y2K problems (for instance)
>were mostly COBOL. The greatest amount of defects is COBOL, but that is
>largely because it is also the greatest volume of code. I have a feeling
>you are a blowhard. Legacy programmers fear C programmers because they are
>afraid of Unix (Tru64 included).

I fear C programmers because of the shit that they typically produce. I
code with C -- when I have to -- and take an extra effort to be certain
that it is:

1. readable
2. not cluttered with function(function(function(function()))));
3. not littered with the kludged/buggy C library calls

>Defects per KLOC is pretty much a universal constant unless incredible
>effort is made to ensure quality (E.g. space shuttle efforts, and things of
>that ilk.)

Clean your KLOK. It really takes very little effort to create quality
software. It needs only a process and a desire to write quality s/w.

--
VAXman- OpenVMS APE certification number: AAA-0001 VAX...@TMESIS.COM

GNU Freeware -- What does the GNU *really* stand for? Garbage! Not Usable!

Obakesan

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to
HiYa

just lurking on this (interesting) thread until ...

In article <009EB385...@SendSpamHere.ORG>, sys...@SendSpamHere.ORG
wrote:

>I fear C programmers because of the shit that they typically produce. I
>code with C -- when I have to -- and take an extra effort to be certain
>that it is:
>
> 1. readable
> 2. not cluttered with function(function(function(function()))));
> 3. not littered with the kludged/buggy C library calls
>

I think you have a bias that shows in what you've written so far...

>Clean your KLOK. It really takes very little effort to create quality
>software. It needs only a process and a desire to write quality s/w.

precisely, quality standards & methodologies. Stick to them. Then it
really doesn't matter if you use BLISS or C, does it ... unless you're
about to argue that the language of choice somehow obviates the need
for quality procedures?

--

See Ya
(when the bandwidth gets better ;-)
Chris Eastwood Please remove undies for reply
Photographer, Stunt Programmer
Motorcyclist and dingbat

Arne Vajhøj

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to Christopher Smith
Christopher Smith wrote:
> On Tue, 6 Jun 2000, Dan Sugalski wrote:
> > The original Pascal sucked too, though.
>
> The bytecode was an interesting idea, though.

????

The original Pascal was developed by Wirth on a CDC Cyber
and did generate real executable code.

You are thinking of the UCSD Pascal and P-code concept.
That in many ways was based on the same ideas as Java
(but the hardware was just not fast enough back then).

Arne

Arne Vajhøj

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to Dann Corbit
Dann Corbit wrote:
> OpenVMS is written mostly in C. I think that qualifies as a piece of robust
> code.

????

AFAIK very little of the VMS core is written in C (Macro-32 and Bliss
are still dominant).

There must be tons of C code in DECWindows, UCX and PathWorks, though.
But I am not sure that I would characterize those as the most bug-free
and reliable components!

Arne

Brian Schenkenberger, VAXman-

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to
In article <393DA8C9...@videotron.ca>, JF Mezei <jfmezei...@videotron.ca> writes:

>"Brian Schenkenberger, VAXman-" wrote:
>> I fear C programmer because of the shit that they typically produce. I
>> code with C -- when I have too -- and take an extra effort to be certain
>> that it is:
>
>
>code is as good as the programmer (or the tool that produces such code).

I'd have to agree.

>Cobol more readable than C ? Depends who wrote it. Once used a tool that was
>supposed to simplify everything and generate perfect COBOL code. I ended up
>having to wait for the floor to be empty and stretch the listing to span the
>whole floor and literally had to run through the code to find out what it was
>doing due to all the weird paragraph names etc etc. Of course, there was CICS
>code in there too (which goes through a pre-processor to be converted into
>weirder function calls).

I've never found a need for COBOL. I've looked at code and I have provided
some simple system routines which COBOL programmers could call but that is
as close to the language as I dared venture.

>Any code in any language can be made unreadable. Some programmers think that
>fancy/complex C constructs result in faster programs, or impress their bosses
>with fancy routines. But once compiled with a good compiler, I am not sure it
>makes much of a difference.

That's it! I'm not saying all C programmers, but I've come across enough C
code to find stuff that I truly disdain: pointers represented by a single
character, embedded in a statement with multiple nested function references
and many pre/post decrement/increment operations. It makes debugging
difficult and understanding the construct even more difficult. In addition,
you're very likely NOT going to find a comment explaining the 'expected'
function of such code. Many a C programmer seems to be from the school that
holds C is a self-documenting language, obviating the need for comments.

You can do tricky manipulations in many programming languages. I've done
my share of assembler 'hacks' but I've taken the time to document what I
have done in great detail. Some of my code has more comment than code by
a significant ratio.

Phillip Helbig

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to
In article <393d...@news.kapsch.co.at>, ep...@kapsch.net (Peter
LANGSTOEGER) writes:

>In article <vLf%4.390$tV4.42@client>, "Dann Corbit" <dco...@solutionsiq.com> writes:

>>OpenVMS is written mostly in C.
>

>Did you check it with the VMS source listing CDs ?
>I got the impression by people who have this listings (not me, sorry),
>that this is BULLSHIT.

>No. Only newer pieces are written in C and they appear to be much more
>flaky than earlier VMS pieces (written in BLISS, ...)

I believe VMS on VAX was written in MACRO, BLISS, PL/I (at least
MONITOR), perhaps some Pascal, Fortran, C,.... For ALPHA, much was
rewritten in C. Perhaps this is related to MAIL being broken to the
extent that /OLD was added as a qualifier to get the "correct" behaviour
(without some new features). When will this be fixed, by the way?

Wayne Sewell

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to
In article <6dg%4.393$tV4.185@client>, "Dann Corbit" <dco...@solutionsiq.com> writes:
> "Peter LANGSTOEGER" <ep...@kapsch.net> wrote in message
> news:393d...@news.kapsch.co.at...
>> In article <vLf%4.390$tV4.42@client>, "Dann Corbit"
> <dco...@solutionsiq.com> writes:
>> >OpenVMS is written mostly in C.
>>
>> Did you check it with the VMS source listing CDs ?
>> I got the impression by people who have this listings (not me, sorry),
>> that this is BULLSHIT.
>
> Really. The Alpha required a rewrite so that the old MACRO had to be
> ported.
>

It had to be *ported*; it did not necessarily have to be *converted*. Very
little of the vms source code written in macro had to be converted to a
different language to support alpha, if any. Some of it had to be modified
because of vaxisms in the code, but it did *not* have to be converted.
Digital basically created a macro *compiler*, treating vax macro like any other
computer language, including even optimization. In other words, a single vax
instruction produces many alpha machine instructions. Contrast this to an
assembler, in which there is a one-to-one correspondence between source
instruction and machine instruction. From the viewpoint of a risc processor,
a cisc instruction set *is* a high level language. In any case, much of the
vms source code is *still* macro, built with the vax *assembler* and the alpha
*compiler* from basically the same source.


You accused someone else in this thread of pulling information out of their
ass. It appears you are doing the same. You simply assumed all of the macro
code had to be converted to another language, and then you assumed the
conversion was to C. If conversion had actually been required, it could have
just as easily been to Bliss.


Wayne

Bob Koehler

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to
In article <6dg%4.393$tV4.185@client>, "Dann Corbit" <dco...@solutionsiq.com> writes:
>
> Produce some statistics or admit that you are completely full of crap and
> just spouting without a grain of knowledge. The Y2K problems (for instance)
> were mostly COBOL. The greatest amount of defects is COBOL, but that is
> largely because it is also the greatest volume of code. I have a feeling
> you are a blowhard. Legacy programmers fear C programmers because they are
> afraid of Unix (Tru64 included).

See the original Java white paper for an intelligent discussion on the
relative merits and troubles of various languages. Or do you think
Sun and the inventors of Java are legacy programmers afraid of UNIX?

> Defects per KLOC is pretty much a universal constant unless incredible
> effort is made to ensure quality (E.g. space shuttle efforts, and things of
> that ilk.)

No. Things like having pointers in a language are directly mappable to
increased defects per LOC. That white paper has an excellent
bibliography pointing to the research that shows this. Java's inventors
can be credited for addressing real world issues.

----------------------------------------------------------------------
Bob Koehler | Computer Sciences Corporation
Hubble Space Telescope Payload | Federal Sector, Civil Group
Flight Software Team | please remove ".aspm" when replying

Michael D. Ober

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to

"JF Mezei" <jfmezei...@videotron.ca> wrote in message
news:393DA8C9...@videotron.ca...

> "Brian Schenkenberger, VAXman-" wrote:
> > I fear C programmer because of the shit that they typically produce. I
> > code with C -- when I have too -- and take an extra effort to be certain
> > that it is:
>
>
> code is as good as the programmer (or the tool that produces such code).
>
Agreed. I tried to go through some Ada code. After a few minutes, I
started wishing for a program that would replace all comments with spaces,
leaving end of line markers so I could put the source and the documented
source next to each other. The comments were so heavy that I literally
couldn't find the actual program code.

C's biggest problem is that it is very, very easy to create write-only code.
By that, I mean code that can't be easily read, and worse, is impossible to
modify. I know this group hates MS, but the MS C compilers have a switch
that will allow the compiler to warn you about many of the denser things C
can do, such as assignment and conditionals in one statement. I assume that
any decent C compiler today would have such a capability, which helps, but
doesn't replace good coding practices.

As to why C programs have more flaws than BLISS - simple - there's more code
in C than in BLISS. It would be interesting to know how much of the new
code in VMS is actually ported from Unix.

Mike Ober.

Larry Kilgallen

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to
In article <8hlb5p$qrq$1...@info.service.rug.nl>, hel...@astro.rug.nl (Phillip Helbig) writes:

> I believe VMS on VAX was written in MACRO, BLISS, PL/I (at least
> MONITOR), perhaps some Pascal, Fortran, C,.... For ALPHA, much was
> rewritten in C. Perhaps this is related to MAIL being broken to the
> extent that /OLD was added as a qualifier to get the "correct" behaviour
> (without some new features).

I would be the last one to defend C, but I believe most of the Mail
rewrite problems seen by those outside DEC were due to an inadequate
understanding of all the features built into Mail. That could happen
regardless of the language chosen for the new implementation. Consider
whether programs _you_ have written have enough design documentation
that someone who has never met you can implement the "same thing" in
another language.

That said, given that they had decided to rewrite Mail, they should
certainly have chosen a better language than C.

Larry Kilgallen

unread,
Jun 7, 2000, 3:00:00 AM6/7/00
to
In article <Zgs%4.3227$2X2.1...@newsread2.prod.itd.earthlink.net>, "Michael D. Ober" <mdo.@.wakeassoc.com.nospam> writes:

> Agreed. I tried to go through some Ada code. After a few minutes, I
> started wishing for a program that would replace all comments with spaces,
> leaving end of line markers so I could put the source and the documented
> source next to each other. The comments were so heavy that I literally
> couldn't find the actual program code.

The professional approach to this in recent years has been to use a
syntax-coloring editor. Perhaps the person who wrote the Ada you
were reading had used one and presumed that readers would use one too.

One that I know of is called "Alpha" (strangely enough) and is shareware
or something close. Ask in comp.lang.ada for details.
