By David Einstein
SILICON VALLEY, 4:20 PM EDT - The race to map the human genome is over,
and the winner is... Compaq. At least from a computing standpoint.
It took humongous computational power for scientists at Celera Genomics and
the Human Genome Project to slice and dice the 3.1 billion pairs of base
chemicals in the genetic code. And much of the work was done using Compaq
(nyse: CPQ) servers linked together to form supercomputers.
Celera's (nyse: CRA) supercomputing facility in Rockville, Md., contains
some $50 million worth of equipment from Compaq, its chief supplier. That
includes 150 four-way servers, as well as a server boasting 16 processors
and 64 gigabytes of memory that assembled all the genetic information.
Celera officials say the calculation to perform the assembly involved 500
million trillion base-to-base comparisons--a job that required more than
20,000 supercomputing hours.
To hold all the data, Celera has about 80 terabytes--that's 80 trillion
bytes--of Compaq-brand storage. And the database is growing by nearly 20
gigabytes per day, which is enough space to hold several full-length
Hollywood films.
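A quick back-of-envelope check of those figures (the numbers are taken
straight from the article; the snippet is purely illustrative arithmetic):

```python
# Sanity-check the article's figures (illustrative only; numbers from the text).

comparisons = 500e6 * 1e12          # "500 million trillion" base-to-base comparisons
cpu_seconds = 20_000 * 3600         # "more than 20,000 supercomputing hours"
rate = comparisons / cpu_seconds    # implied sustained comparison rate
print(f"~{rate:.1e} comparisons per supercomputing second")

total_gb = 80 * 1000                # ~80 terabytes of Compaq storage
growth_per_day_gb = 20              # database growth of nearly 20 GB/day
print(f"~{total_gb / growth_per_day_gb:.0f} days of growth to match the current 80 TB")
```

That works out to roughly 7 trillion comparisons per second sustained, and
about 4,000 days of growth at the quoted rate to re-fill the existing store,
which gives a sense of the scale of the assembly job.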
Compaq also provided the supercomputers that the Sanger Centre in Cambridge,
England, and the Whitehead Institute at the Massachusetts Institute of
Technology used in the Human Genome Project.
The high-profile race to unravel the genome has established Houston-based
Compaq as a major player in supercomputing, putting it on the stage with
entrenched competitors IBM (nyse: IBM), Sun Microsystems (nasdaq: SUNW) and
Cray (nasdaq: CRAY).
It has also helped drive supercomputers into the mainstream of American
consciousness. Until now, the super-fast machines--which can cost millions
of dollars--have been used mostly for esoteric jobs with little practical
value. They monitor the nuclear stockpile, try to predict the weather and
beat the pants off the world chess champion. Last week, a Cray supercomputer
at the Lawrence Berkeley National Laboratory decided the universe is flat
and will expand forever. Nice, but not very useful.
Now, however, it's becoming clear that supercomputers hold the key to
creating new drugs, curing diseases and eventually letting us modify our own
DNA.
Compaq saw the promise early. In 1998 it inked a deal with Celera, which was
looking for technology capable of handling one of the biggest computing
projects of all time. "People asked me why we chose Compaq," says Marshall
Peterson, Celera's vice president of infrastructure technology. "The answer
is simple. We took a benchmark and gave it to all the vendors. Only two
could run it. One ran it in 87 hours. Compaq ran it in seven." Peterson
didn't disclose the name of the other vendor.
The Celera experience could be a major boost for the Alpha processors that
power Compaq's supercomputers. Compaq picked up the Alpha technology as part
of its 1998 acquisition of Digital Equipment, but the chips have been hard
to sell in a market where Compaq has had to compete against Sun, IBM,
Hewlett-Packard (nyse: HWP) and even Intel (nasdaq: INTC).
Compaq isn't the only computer maker panning for gold in the gene pool. Last
month, Sun announced it is providing the computers for an Internet database
of genomic information being compiled by Oakland, Calif., startup
DoubleTwist. And IBM is working on a supercomputer that will boast 1 million
processors and will be 500 times as powerful as today's fastest systems.
Code-named Blue Gene, the gentle giant will be used initially to determine
how proteins fold--a complex problem in the field of proteomics that
requires measuring the forces that hold molecules together.
Andrea Califano, who is program director at IBM's Computational Biology
Center, says that although Blue Gene is still five years from fruition, it
will be state-of-the-art when they finally fire it up. "Blue Gene is going
to be leapfrog technology," he says. "It will be five to seven years ahead
of everything else."
Naturally, Compaq disagrees. "Alpha machines will be as strong in proteomics
as they have been in genomics," says Jesse Lipcon, Compaq's vice president
of Alpha technology. "By the time Blue Gene is introduced, we will have won
all the proteomics deals."
We'll just have to wait and see, as one race ends and another begins.
-----Original Message-----
From: Boyle, Darren [mailto:boy...@bankofbermuda.com]
Sent: Wednesday, June 28, 2000 8:59 AM
To: Info...@Mvb.Saic.Com
Subject: RE: Compaq advertizes
Hi Kerry,
I don't have desktop web access; when the article comes out tomorrow,
can you post it please?
Thanks,
Darren
> ----------
> From: Main, Kerry[SMTP:Kerry...@compaq.com]
> Sent: Tuesday, June 27, 2000 9:23 PM
> To: Info...@Mvb.Saic.Com
> Subject: RE: Compaq advertizes
>
> Also, big ad ran in Boston Globe..
>
> Note - Forbes article:
> http://www.forbes.com/tool/html/00/Jun/0626/mu7.htm
>
> <<< Will they ever market VMS? >>>
>
> Wait until tomorrow's (June 28) announcements.
>
> Folks on this list should like what they hear .. well, ok maybe one or two
> folks might not ..
>
> :-)
>
> Regards,
>
> Kerry Main
> Senior Consultant,
> Compaq Canada
> Professional Services
> Voice : 613-592-4660
> FAX : 819-772-7036
> Email : kerry...@compaq.com
>
>
>
> -----Original Message-----
> From: koe...@eisner.decus.org [mailto:koe...@eisner.decus.org]
> Sent: Tuesday, June 27, 2000 4:12 PM
> To: Info...@Mvb.Saic.Com
> Subject: Compaq advertizes
>
>
>
> So who was it at Compaq that actually understood that they should
> make something of the Alphas used in the human genome project?
>
> Full-page ad today (Washington Post).
>
> Does this spell the end of stealth marketing (is DEC really gone)?
>
> Will they ever market VMS?
>
>
**********************************************************************
This message and any files transmitted with it are confidential and
may be privileged and/or subject to the provisions of privacy legislation.
They are intended solely for the use of the individual or entity to whom they
are addressed. If the reader of this message is not the intended recipient,
please notify the sender immediately and then delete this message.
You are notified that reliance on, disclosure of, distribution or copying
of this message is prohibited.
Bank of Bermuda
**********************************************************************
>> From: Main, Kerry[SMTP:Kerry...@compaq.com]
>> Sent: Tuesday, June 27, 2000 9:23 PM
>> To: Info...@Mvb.Saic.Com
>> Subject: RE: Compaq advertizes
>>
>> Also, big ad ran in Boston Globe..
>>
>> Note - Forbes article:
>> http://www.forbes.com/tool/html/00/Jun/0626/mu7.htm
>>
>> <<< Will they ever market VMS? >>>
>>
>> Wait until tomorrow's (June 28) announcements.
>>
>> Folks on this list should like what they hear .. well, ok maybe one or two
>> folks might not ..
>>
Let me guess... one of them might call soccer, football? And perchance
have eaten kippers on occasion? Surely eaten fish and chips.
Spot on!
Rob
As of 12:50 CDT (US, 17:50 Zulu), a search of news on Forbes.com
returned only a single link:
http://www.forbes.com/tool/html/98/jul/0716/side1.htm
...and THAT is about the Sega Dreamcast, not a commercial operating system!
David J. Dachtera
And OpenVMS is the big loser - because none of these genomics folks chose
OpenVMS to run on a single box, going instead with Tru64 or Linux.
OpenVMS's poor performance for general computing (anything except massive
transaction processing), arbitrary and poorly handled 32k and 64k limits on
various operations, and rampant (and now completely pointless)
incompatibility with source code developed on Unix systems (which is the de
facto standard server OS now), made OpenVMS a noncontender in this
lucrative new market. Just as it will in all other new markets. But
that's just losing. What makes it the _BIG_ loser is that the vast
majority of the money that went into the development of the Alpha, Tru64,
and even parts of Linux/Alpha was derived from OpenVMS related sales.
So Compaq as a whole wins, but OpenVMS gets screwed.
Moreover, it isn't just what OpenVMS is now that caused it to lose this
competition. This is also the result of neglecting markets because they
weren't viewed as important or sufficiently lucrative. Over the last 10
years the types of software Celera and the others run have been under
constant development on _small_ machines almost entirely by people working
in academia. Too bad for OpenVMS that it has been a no-show in this
market, and consequently, when the time came to run this code on larger
machines, it was all Unix based. I doubt any of these companies even
seriously considered OpenVMS, since the guys doing the deciding have run
nothing but Unix for the last decade. And even if they had been open
minded enough to look at OpenVMS, it really would have been illogical for
them to choose it.
How did Tru64 get the job over another Unix, for instance, Solaris? It's
really quite simple. The code was all developed on Unix systems, so:
1. Tru64 is a Unix, so it can run that code without major modification.
2. Tru64 runs on Alpha, and that chip is faster than a Sparc.
3. Tru64 can make full use of the Alpha's speed.
Linux made quite an inroad as well, because, thanks now to the native
Compaq compilers, it passes all 3 criteria too.
OpenVMS washes out, failing criteria 1 and 3.
And guys, these first few genomics sales are just the beginning. There are
going to be some huge sales to the pharmaceutical industry. They will be
building/using tools to manipulate and analyze the vast amounts of data
present in the genomes of various organisms, gene expression patterns,
protein structures and every other type of biological information. But this
isn't transaction processing, and it isn't even clear to me that Oracle
style relational databases are going to play a large role. So unless
OpenVMS gets head to head with Tru64 in performance, Unix compatibility,
and price, I don't see any future for it in this field. Or for that
matter, in any other new field that may come down the road.
Regards,
David Mathog
mat...@seqaxp.bio.caltech.edu
Manager, sequence analysis facility, biology division, Caltech
No one will argue that OpenVMS marketing suffered under the latter portion
of the old Digital regime or even the early Compaq days when things were
just settling in.
However -
Does the COE project discussed in Terry Shannon's articles not address some
of the issues you raise in the attached, i.e. combining the good features of
OpenVMS with some of the core functions associated with UNIX OSes?
Would this not seem like the best of both worlds? The RASS (reliability,
availability, scalability and security) and unique features (like Galaxy) of
OpenVMS combined with the Linux/UNIX applications available.
In terms of marketing and applications, are not today's announcements of new
Oracle IAS (Internet Application Server) middleware being made available on
OpenVMS a good start to making up for lost time?
Are not the recent endorsements from major Customers like E*Trade on the new
Alpha GS Series a good sign that Customers (and ISV's) are re-examining
their strategies around OpenVMS?
Reference:
<http://www.compaq.com/alphaserver/gs/quotes/etrade.html>
<http://www.openvms.digital.com/gsseries/quotes.html>
<http://www.openvms.compaq.com/openvms/brochures/>
I agree there is still much work to be done.
However, while perhaps not fast enough for readers of this list, surely the
events of the last few months can be seen as steps in the right direction?
[now donning my suit of armour ..]
:-)
Regards,
Kerry Main
Senior Consultant,
Compaq Canada
Professional Services
Voice : 613-592-4660
FAX : 819-772-7036
Email : kerry...@compaq.com
David Mathog wrote:
> In article <001301bfe10e$74e4b2c0$14b324a6@CJ4733A>, arturo saavedra <arturo....@wcom.com> writes:
> >Compaq A Winner In Gene Race
>
> And OpenVMS is the big loser - because none of these genomics folks chose
> OpenVMS to run on a single box, going instead with Tru64 or Linux.
>
>
> And guys, these first few genomics sales are just the beginning. There are
> going to be some huge sales to the pharmaceutical industry. They will be
> building/using tools to manipulate and analyze the vast amounts of data
> present in the genomes of various organisms, gene expression patterns,
> protein structures and every other type of biological information. But this
> isn't transaction processing, and it isn't even clear to me that Oracle
> style relational databases are going to play a large role. So unless
> OpenVMS gets head to head with Tru64 in performance, Unix compatibility,
> and price, I don't see any future for it in this field. Or for that
> matter, in any other new field that may come down the road.
>
David, you make some very valid points. For non-mission-critical scientific
computing, where security, stability and scalability are not such an issue and
where system admin effort is essentially free (academia), VMS lost its
footing in the late '80s.
However, surely this genome work should not be treated with kid gloves
from the security and data integrity angle? As I understand it, this
"breakthrough" is not as major as it is being played up to be at the moment. A lot
of work still has to go on at the theoretical level, and at some point real
experimentation will be necessary.
However, once you start having real people's genome data on your computer,
surely you have a data protection issue? Already I am hearing rumours that
insurance companies might want genetic samples, which is quite frightening.
It seems to me this sort of data is very personal and private and should
not be stored on a linux box set up by a grad student with only a limited grasp
of the security issues etc (the reason my old dept dumped Tru64 was because
exactly this happened, OK not with sensitive data, but the box got hacked
badly).
btw, you are always slagging off VMS performance on Unix code compared
to Unix. Have you tried actually developing any applications for VMS using
VMS's strengths recently?
--
Tim Llewellyn, OpenVMS Infrastructure, Remarcs Project
MedAS at the BBC, Whiteladies Road, Bristol, UK.
Email tim.ll...@bbc.co.uk. Home tim.ll...@cableinet.co.uk
I speak for myself only and my views in no way represent those of
MedAS or the BBC.
Yup. I have heard this directly from Capellas, Heil, and Marcello.
>
> However -
>
> Does the COE project discussed in Terry Shannon's articles not address some
> of the issues you raise in the attached ie. combining the good features of
> OpenVMS with some of the core functions associated with UNIX OS's?
(Terry here...) COE not only will equip OpenVMS with "Solaris-like" APIs, it
will guarantee that OpenVMS remains viable for a minimum of 15 years.
Probability Factor: 0.9999...
>
> Would this not seem like the best of both worlds? The RASS (reliability,
> availability, scalability and security) and unique features (like Galaxy) of
> OpenVMS combined with the Linux/UNIX applications available.
>
> In terms of marketing and applications, are not today's announcements for new
> Oracle IAS (Internet Application Server) middleware software being made
> available on OpenVMS a good start to making up for lost time?
Yep, as is the Tier One status now enjoyed by the OS!
>
> Are not the recent endorsements from major Customers like E*Trade on the new
> Alpha GS Series a good sign that Customers (and ISV's) are re-examining
> their strategies around OpenVMS?
Seems to me that $100M in INCREMENTAL NEW OPENVMS business in 1FQ00 ought to
say something...
cheers,
terry s
It turns out it wasn't just the marketing, was it? Seems the engineering
lagged rather a lot too - nobody bothered to keep the OS competitive in
terms of IO performance. That's by far the bigger failing.
>
>However -
>
>Does the COE project discussed in Terry Shannon's articles not address some
>of the issues you raise in the attached ie. combining the good features of
>OpenVMS with some of the core functions associated with UNIX OS's?
I don't know yet. The requirements for a competitive general purpose OS
right now are very simple - it must be able to build and run code developed
on Unix systems, with no more than a trivial amount of code changes (such
as that between Unixes) required. That's because all code that runs on
OpenVMS now comes from Unix, and anything that creates an incompatibility
is a bug, not a feature. OpenVMS has a chance of getting back in the game
once a customer can do this:
$ ftp/user=anonymous/password="m...@here.edu" ftp.lot_o_software.com
ftp> binary
ftp> get pub/packages/program.tar.gz []program_tar.gz
ftp> quit
$ gunzip program_tar.gz
$ tar -xf program_tar.;
$ bash
bash> cd program/tru64source
bash> make
bash> ./newprogram -blah -BLAH
bash>
$ diff result.txt example_result.txt
no differences found
and it completes with no problems. That is, it builds with nothing more
than minor modifications required to the makefile, the program source code,
and when the binary runs it does everything at least as fast as Tru64 (or
Linux), with the output produced exactly the same as on Tru64 (including
especially when a "record" sent to a stream text file exceeds 32k.) And
yes, there really must be a bash and tcsh, because Makefiles and other
supporting "glue" for many packages will only work properly within those
shells. If OpenVMS can't do that, then for small users who are not crazed
by the thought of losing a few bytes of data, Linux and Tru64 will remain
the OS of choice for Alpha.
It would also be nice if Compaq finally realized that its clustering
technology could be used to sell more OpenVMS machines, but that it is
worth little more than Unix NFS/NIS-style solutions to most customers.
Since groups of OpenVMS machines basically don't work together well
outside VMS clusters, Compaq is presenting customers with a choice:
inexpensive clustering (Unix), or OpenVMS with either no clustering or
unaffordable clustering.
>
>Would this not seem like the best of both worlds? The RASS (reliability,
>availability, scalability and security) and unique features (like Galaxy) of
>OpenVMS combined with the Linux/UNIX applications available.
I don't ever expect to own a Galaxy class machine - it is IRRELEVANT for
shops smaller than data centers. The guys who wrote all the software that
grew up to run at Celera did it almost entirely on unix workstations.
OpenVMS RASS is not substantially better than the competition (except for
security, which really is a lot better than on Unix.) For 99% of
the market Unix style NIS/NFS is adequate, and the insanely high prices for
"real" OpenVMS clustering are an unjustifiable expense. Rather than pay
for OpenVMS security they go and hide behind a firewall, which is much less
expensive. If OpenVMS's features were worth as much as you wish they were,
Compaq would be selling scads of small clusters everywhere; but it isn't,
and the only sales we ever hear about are for datacenter-style installations.
Now you may say, "Celera is running a huge, sort of datacenter, style
installation, so why shouldn't they consider OpenVMS?" Well, besides
their not wanting to rewrite their code, a lot of what they do involves the
generation and manipulation of a zillion small files, and OpenVMS is turtle
slow at that particular operation.
>In terms of marketing and applications, are not today's announcements for new
>Oracle IAS (Internet Application Server) middleware software being made
>available on OpenVMS a good start to making up for lost time?
No, because I don't use Oracle on my own machines, and there was nothing
else of interest there. I suppose it's good news for the data center guys,
and I'm happy for them, but it's irrelevant for everybody else.
>
>Are not the recent endorsements from major Customers like E*Trade on the new
>Alpha GS Series a good sign that Customers (and ISV's) are re-examining
>their strategies around OpenVMS?
Come on, guy. E*Trade is a new customer, but it's the same old market. Read
R. Marcello's letter on the OpenVMS web site. The only markets mentioned
are process control (mostly done on chip assembly lines by VAXes) and large
scale transaction processing. Now Compaq may break it down into Health
care, stock exchanges, and so forth, but the bottom line is that they are
all doing basically the same thing on their machines, and E*trade is more
of the same. Nobody is saying that OpenVMS isn't a great OS for a
datacenter doing transaction processing - the problem is with the other 99%
of computing applications that it isn't addressing.
>However, while perhaps not fast enough for readers of this list, surely the
>events of the last few months can be seen as steps in the right direction?
Umm, let's see. A few months ago I had poor performance and lousy Unix
compatibility, and today I have the same poor performance and lousy Unix
compatibility, but hear that steps are being taken which may, or may not,
adequately remedy this situation, and which will dribble out over the next
(unspecified) time interval. There has been no official commitment
anywhere in precise and clear terms to deliver the improvements which
really are needed (see above.) In the meantime, I can assume that a good
fraction of the support costs we pay (admittedly, on the CSLG, much less
than business folks pay) will not go towards OpenVMS development, but will
instead be siphoned off to pay for development elsewhere in Compaq and to
prop up the PC side of the business.
Compaq may be taking steps internally, but externally, motion is
essentially undetectable.
Tim Llewellyn wrote:
>>>However, once you start having real people's genome data on your computer,
surely you have a data protection issue? Already I am hearing rumours that
insurance companies might want genetic samples, which is quite frightening.
It seems to me this sort of data is very personal and private and should
not be stored on a linux box set up by a grad student with only a limited grasp
of the security issues etc (the reason my old dept dumped Tru64 was because
exactly this happened, OK not with sensitive data, but the box got hacked
badly).<<<
Are you sure it was hacked badly, rather than hacked well? :-))
Anyway, to be serious, I would imagine that there is more than one issue here
for data security.
Obviously the genome data must have come from somewhere, but the individuals may
not be named or coded in any way. In many research markets, though, time to
market or time to paper is a key differentiator. If company X gets their product
out six months ahead of company Y, then X will clean up and Y will not do quite
so well. Similarly, the first to publicise the atom bomb or the first to walk
on the moon gets remembered and gets the kudos; the second and third don't.
Then there's the key issue (as with any system) of maintaining data integrity.
There's no point in doing all this work on human genomes only to find that
something got corrupted in the first week and you've been working with cruddy
data...
...
> (Terry here...) COE not only will equip OpenVMS with "Solaris-like" APIs, it
> will guarantee that OpenVMS remains viable for a minimum of 15 years.
> Probability Factor: 0.9999...
'Viable' is not a synonym for 'alive'. My (oldish) dictionary defines
'viable' as 'able to live and develop under normal conditions', 'able to
take root and grow'.
'Develop' and 'grow' being the operative words, IMO. To what extent does
COE guarantee this, rather than just that VMS, while it may stagnate, won't
disappear?
- bill
In some ways I agree whole-heartedly, but I'm still inclined to say,
"But...".
The VMS-engineering-paths-not-taken are subtler than the
VMS-marketing-paths-not-even-looked-at. I've heard that a book called 'The
Innovator's Dilemma' addresses the problems of listening mostly to your
existing customer base when soliciting direction: you never hear where the
far wider group you'd *like* to have as customers wants to go.
So while I believe that VMS won't thrive without considerably more
development work than is scoped out in its road map, I suspect that it may
be difficult to get many people here to understand why that may be the case:
they're pretty satisfied with VMS as it stands, and the already-scheduled
future development is largely based on any areas they're not quite happy
with.
...
> It would also be nice if Compaq finally realized that its clustering
> technology could be used to sell more OpenVMS machines, but that it is
> worth very little more than Unix NFS/NIS style solutions to most customers,
> and since groups of OpenVMS machines basically don't work together well
> unless in VMS clusters, they are presenting the customers with the choice
> of inexpensive clusters (Unix) or with OpenVMS, either no clustering or
> unaffordable clustering.
[Insert yet another plug for a heterogeneous SAN file system that allows VMS
systems to sort-of-cluster, inexpensively, with Unix and NT systems, at
least as far as sharing data - and perhaps some simple IPC and distributed
lock management - goes.]
>
> >
> >Would this not seem like the best of both worlds? The RASS (reliability,
> >availability, scalability and security) and unique features (like Galaxy) of
> >OpenVMS combined with the Linux/UNIX applications available.
>
> I don't ever expect to own a Galaxy class machine - it is IRRELEVANT for
> shops smaller than data centers.
You might have said something similar about SMPs 15 years ago. But if you
*do* ever own a Galaxy-class machine, it may well be because it has become
as much of a commodity as SMPs are today, hence in no way a VMS
product-differentiator.
[Snipped the rest, all of which I agree with.]
- bill
David Mathog wrote:
> In article <910612C07BCAD1119AF4...@kaoexc4.kao.dec.com>, "Main, Kerry" <Kerry...@compaq.com> writes:
> >David,
> >
> >No one will argue that OpenVMS marketing suffered under the latter portion
> >of the old Digital regime or even the early Compaq days when things were
> >just settling in.
>
> It turns out it wasn't just the marketing, was it? Seems the engineering
> lagged rather a lot too - nobody bothered to keep the OS competitive in
> terms of IO performance. That's by far the bigger failing.
>
Not that stupid write-through/write-back cache issue again, David? If people
cannot appreciate the issues involved in performance versus caching strategy,
surely they deserve every lost/corrupted nibble they experience.
>
> >
> >However -
> >
> >Does the COE project discussed in Terry Shannon's articles not address some
> >of the issues you raise in the attached ie. combining the good features of
> >OpenVMS with some of the core functions associated with UNIX OS's?
>
> I don't know yet. The requirements for a competitive general purpose OS
> right now are very simple - it must be able to build and run code developed
> on Unix systems, with no more than a trivial amount of code changes (such
> as that between Unixes) required. That's because all code that runs on
> OpenVMS now comes from Unix, and anything that creates an incompatibility
> is a bug, not a feature. OpenVMS has a chance of getting back in the game
> once a customer can do this:
>
Yeah, well, it always amused me that when HEP was jumping off VMS (and VM/CMS)
onto Unix in the interests of code portability, they did not work very hard
at making their Unix code portable to VMS.
>
> $ ftp/user=anonymous/password="m...@here.edu" ftp.lot_o_software.com
> ftp> binary
> ftp> get pub/packages/program.tar.gz []program_tar.gz
> ftp> quit
> $ gunzip program_tar.gz
> $ tar -xf program_tar.;
> $ bash
> bash> cd program/tru64source
> bash> make
> bash> ./newprogram -blah -BLAH
> bash>
> $ diff result.txt example_result.txt
> no differences found
>
Didn't anyone tell you not to run code you found on the net :-) (joke).
> and it completes with no problems. That is, it builds with nothing more
> than minor modifications required to the makefile, the program source code,
> and when the binary runs it does everything at least as fast as Tru64 (or
> Linux), with the output produced exactly the same as on Tru64 (including
> especially when a "record" sent to a stream text file exceeds 32k.) And
> yes, there really must be a bash and tcsh, because Makefiles and other
> supporting "glue" for many packages will only work properly within those
> shells. If OpenVMS can't do that, then for small users who are not crazed
> by the thought of losing a few bytes of data, Linux and Tru64 will remain
> the OS of choice for Alpha.
David, and those few bytes of data could mean the difference between life
and death for someone with a terminal illness that some geneticists are trying
to help a few years down the road of the genome project, because they are
running on a Windoze box without ECC RAM or whatever.
As someone who learned to do real-life realtime science on RSX and VMS,
Unix and Windows (whatever variety) scared the shit out of me then (10 years
ago) as it does now.
Of course, if you really don't care about your data...
>
> It would also be nice if Compaq finally realized that its clustering
> technology could be used to sell more OpenVMS machines, but that it is
> worth very little more than Unix NFS/NIS style solutions to most customers,
> and since groups of OpenVMS machines basically don't work together well
> unless in VMS clusters, they are presenting the customers with the choice
> of inexpensive clusters (Unix) or with OpenVMS, either no clustering or
> unaffordable clustering.
>
Agreed; as Nigel Arnot (I think) claimed recently, to have a cluster you need
to buy at least two, probably three boxes, so why pay through the nose for the
cluster licence when you've already doubled or tripled the vendor's hardware
margin? Cluster development costs at the low end MUST have been recouped many
times by now. OK, maybe some high-end cannibalization might occur, but
ultimately VMS will be more competitive because more people are using it for a
greater variety of applications.
Billy boy and unix have made people think rebooting is a normal thing.
>
> >
> >Would this not seem like the best of both worlds? The RASS (reliability,
> >availability, scalability and security) and unique features (like Galaxy) of
> >OpenVMS combined with the Linux/UNIX applications available.
>
> I don't ever expect to own a Galaxy class machine - it is IRRELEVANT for
> shops smaller than data centers. The guys who wrote all the software that
> grew up to run at Celera did it almost entirely on unix workstations.
> OpenVMS RASS is not substantially better than the competition (except for
> security, which really is a lot better than on Unix.) For 99% of
> the market Unix style NIS/NFS is adequate, and the insanely high prices for
> "real" OpenVMS clustering are an unjustifiable expense. Rather than pay
> for OpenVMS security they go and hide behind a firewall, which is much less
> expensive. If OpenVMS's features were worth as much as you wish they were
> Compaq would be selling scads of small clusters everywhere, but it isn't,
> the only sales we ever hear about are for datacenter style installations.
> Now you may say, "Celera is running a huge, sort of datacenter, style
> installation, so why shouldn't they consider OpenVMS?" Well, besides
> their not wanting to rewrite their code, a lot of what they do involves the
> generation and manipulation of a zillion small files, and OpenVMS is turtle
> slow at that particular operation.
>
Application design issue. Don't apps developed on unix scare the shit out of you?
>
>
> Umm, let's see. A few months ago I had poor performance and lousy Unix
> compatibility, and today I have the same poor performance and lousy Unix
> compatibility, but hear that steps are being taken which may, or may not,
> adequately remedy this situation, and which will dribble out over the next
> (unspecified) time interval. There has been no official commitment
> anywhere in precise and clear terms to deliver the improvements which
> really are needed (see above.) In the meantime, I can assume that a good
> fraction of the support costs we pay (admittedly, on the CSLG, much less
> than business folks pay) will not go towards OpenVMS development, but will
> instead be siphoned off to pay for development elsewhere in Compaq and to
> prop up the PC side of the business.
>
> Compaq may be taking steps internally, but externally, motion is
> essentially undetectable.
>
All we can do is wait and see; it seems Compaq is certainly taking notice
of Alpha now (spotted a job for senior AlphaServer salesbods in the UK today),
just how much of an extra push VMS gets, we'll have to see.
Hmmm, you didn't mention the graphics hardware support issue yet, either.
It would have been viable for 15 years even without the COE - but only in
the few niches where it is currently competitive. If the COE is to let
OpenVMS escape from those niches it must deliver BOTH:
1. effortless Unix compatibility
2. speeds equivalent to Tru64 and Linux/Alpha
Funny thing in a way that they are going for "solaris-like" APIs when Tru64
compatibility would in many ways be more natural. But they didn't make up
the COE standard, and that's that.
>Seems to me that $100M in INCREMENTAL NEW OPENVMS business in 1FQ00 ought to
>say something...
>
Care to hazard a guess as to the amount of that growth which is _not_
attributable to "transaction processing in data centers"? Or how about an
estimate of the market losses outside of that lucrative niche which are
being more than covered by income from growth within it?
OpenVMS seems to be doing well in that market and terribly everywhere else.
...
> Not that stupid write thru/write back cache issue again, David? If people cannot
> appreciate the issues involved in performance versus caching strategy, surely
> they deserve every lost/corrupted nibble they experience.
Rather, if VMS bigots can't appreciate them, they deserve to see the system
they like go down the drain: integrity is not at issue here, but you appear
incapable of understanding that.
...
> David, and those few bytes of data could mean the difference between life
> and death for someone with a terminal illness that some geneticists are trying
> to help a few years down the road of the genome project, because they are
> running on a windoze box without ECC RAM or whatever.
>
> As someone who learned to do real-life realtime science on RSX and VMS,
> Unix and Windows (whatever variety) scared the shit out of me then (10 years ago)
> as it does now.
>
> Of course, if you really don't care about your data...
... you use RMS sequential files, which buffer writes just like Unix does,
just usually not for as long (so your *chances* may be a bit better, if
chance is what you're relying on rather than correct application design).
If you *do* care about your data, you use $FLUSH with RMS when appropriate,
or fsync on Unix (at pretty much the same points). The difference is that
you get rewarded with considerably better performance on Unix.
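To make the parallel concrete, here is a minimal sketch (in Python for brevity; the file name and data are illustrative) of the "if you do care, flush at the right points" discipline on the Unix side:

```python
import os
import tempfile

def write_durably(path, data):
    """Write data and push it to stable storage before returning.
    A plain write() only hands the bytes to the OS write-back cache,
    much as an RMS sequential put sits in a buffer until $FLUSH."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # the Unix analogue of $FLUSH: wait for the disk
    finally:
        os.close(fd)

path = os.path.join(tempfile.mkdtemp(), "record.dat")
write_durably(path, b"critical record\n")
```

The point is that durability comes from where you place the flush calls, not from which OS buffers the writes.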
...
Well, besides
> > their not wanting to rewrite their code, a lot of what they do involves the
> > generation and manipulation of a zillion small files, and OpenVMS is turtle
> > slow at that particular operation.
> >
>
> Application design issue.
Not on Unix, it isn't. Are you saying that it's no problem that VMS often
performs like a pig for applications not specifically designed around its
limitations? Doesn't sound like a really good market-growth strategy to
me...
- bill
There are differing interests here, depending on the application and
the customer environment. Some customers need the reliability, some
need the performance. Most want both, but that gets expensive. One
example of performance and reliability involves, for instance, adding
batteries into the StorageWorks shelves.
XFC is a recent step in this area (and XFC write-behind caching support
is under development), and I know that one of the senior OpenVMS engineers
has taken an interest in locating the current file system performance
bottlenecks. This is part of locating good performance enhancements for
OpenVMS, part of the new file system work, and part of scaling
into larger and larger storage configurations.
Given some of the relatively low I/O performance in several of the more
common tools -- one of the locals has a significantly faster version of
zip that gains its performance through larger transfers, for instance --
even basic SET RMS-level tuning can help with the I/O performance.
One of my pet peeves is the relatively poor settings for various RMS
defaults -- they were good back when memory was tight, but they're not
so hot with large-memory systems...
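The arithmetic behind "larger transfers" is simple enough to sketch (the sizes are illustrative figures, not actual RMS defaults):

```python
def io_operations(total_bytes, transfer_size):
    """I/O operations needed to move total_bytes in transfer_size chunks;
    per-operation overhead makes this count, not the byte total, dominate."""
    return -(-total_bytes // transfer_size)  # ceiling division

# A 10 MB file moved in tight-memory-era 512-byte transfers
# versus large-memory-era 64 KB transfers:
small = io_operations(10 * 1024 * 1024, 512)
large = io_operations(10 * 1024 * 1024, 64 * 1024)
```

A 128x reduction in operation count is why a tool doing larger transfers, or better SET RMS settings, can be "significantly faster" with no other change.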
--------------------------- pure personal opinion ---------------------------
Hoff (Stephen) Hoffman OpenVMS Engineering hoffman#xdelta.zko.dec.com
VMS has its niches.
Tandem has its niches too.
Linux has its niches too.
VMS niches aren't growing fast enough for you?
Let's shed a tear for the Netware bigots. Got a few around me
here. The really sad fact is they are less than half the size of
VMS in annual revenue and their revenues are tailing off badly.
There's a dying OS if you want to pick on one.
Rob
*******************************************************************************
RIP NetWare
*******************************************************************************
Both Solaris and Tru64 are COE certified. So is HP-UX. Windows NT
is grandfathered in. I have no idea what versions of the COE standard
apply to these various systems.
Gack! The new kid on the block is "grandfathered in"??? On
what basis? Oh, I know, on the basis of BGinc's grip on everyone
else's short hairs... Yee gads, what a world.
-Ken
--
Kenneth H. Fairfield | Internet: Fair...@SLC.Slac.Stanford.Edu
SLAC, 2575 Sand Hill Rd, MS 46 | Voice: 650-926-2924
Menlo Park, CA 94025 | FAX: 650-926-3515
-----------------------------------------------------------------------------
These opinions are mine, not SLAC's, Stanford's, nor the DOE's...
Have any tips on how to calculate some better RMS tuning based on either
F$GETSYI( "MEMSIZE" ) or F$GETSYI( "VIRTUALPAGECNT" )?
--
David J. Dachtera
dba DJE Systems
http://home.earthlink.net/~djesys/
Unofficial Affordable OpenVMS Home Page and Message Board:
http://home.earthlink.net/~djesys/vms/soho/
What's really sad is that we both want basically the same thing. But, talk
about contradictions. "General purpose OS must be able to build and run code
developed on Unix systems"? Come on. Sounds a lot like when the Unix people
talked about 'open' systems, and meant Unix. So everything I developed over the
last 20 plus years that's running on VMS was developed on a Unix system, and I
was never aware of that. Darn Unix systems sure fooled me. Did a nice job of
acting like a VMS system. Then again, there's the possibility that you live in
a very small world, and refuse to acknowledge anywhere else exists. Now, if you
were to say that all the software YOU run on VMS comes from Unix, then I could
take you seriously. I'd also have a lot more sympathy for your problems and
needs.
> It would also be nice if Compaq finally realized that its clustering
> technology could be used to sell more OpenVMS machines, but that it is
> worth very little more than Unix NFS/NIS style solutions to most customers,
> and since groups of OpenVMS machines basically don't work together well
> unless in VMS clusters,
Hmmm.... and you've never seen DECnet work either.
> they are presenting the customers with the choice
> of inexpensive clusters (Unix) or with OpenVMS, either no clustering or
> unaffordable clustering.
>
> >
> >Would this not seem like the best of both worlds? The RASS (reliability,
> >availability, scalability and security) and unique features (like Galaxy) of
> >OpenVMS combined with the Linux/UNIX applications available.
>
> I don't ever expect to own a Galaxy class machine - it is IRRELEVANT for
> shops smaller than data centers. The guys who wrote all the software that
> grew up to run at Celera did it almost entirely on unix workstations.
> OpenVMS RASS is not substantially better than the competition (except for
> security, which really is a lot better than on Unix.) For 99% of
Here you need to substitute "the type of work I do" for the word 'market'.
> the market Unix style NIS/NFS is adequate, and the insanely high prices for
> "real" OpenVMS clustering are an unjustifiable expense. Rather than pay
> for OpenVMS security they go and hide behind a firewall, which is much less
> expensive. If OpenVMS's features were worth as much as you wish they were
> Compaq would be selling scads of small clusters everywhere, but it isn't,
> the only sales we ever hear about are for datacenter style installations.
> Now you may say, "Celera is running a huge, sort of datacenter, style
> installation, so why shouldn't they consider OpenVMS?"
Why should they? They apparently have a working solution. If it ain't broke,
don't fix it. Of course, they do now have an awful lot of data that competitors
might like to have, and probably will have even more in the future. Wonder if
the excellent security of VMS interests them. Then again, they can just cut all
outside links; that works also.
> Well, besides
> their not wanting to rewrite their code, a lot of what they do involves the
> generation and manipulation of a zillion small files, and OpenVMS is turtle
> slow at that particular operation.
As use of this type of data becomes more common, I wouldn't be surprised if
someone produced a database product that specializes in storage and retrieval of
this specific type of data.
> >In terms of marketing and applications, is not todays announcements for new
> >Oracle IAS (Internet Application Server) middleware software being made
> >available on OpenVMS a good start to making up for lost time?
>
> No, because I don't use Oracle on my own machines, and there was nothing
> else of interest there. I suppose it's good news for the data center guys,
> and I'm happy for them, but its irrelevant for everybody else.
Not sure if you're just stating a fact, or saying that if it's not relevant for
you, then it's not worth being done for VMS.
> >Are not the recent endorsements from major Customers like E*Trade on the new
> >Alpha GS Series a good sign that Customers (and ISV's) are re-examining
> >their strategies around OpenVMS?
>
> Come on guy, E*trade is a new customer, but it's the same old market. Read
> R. Marcello's letter on the OpenVMS web site. The only markets mentioned
> are process control (mostly done on chip assembly lines by VAXes) and large
> scale transaction processing. Now Compaq may break it down into Health
> care, stock exchanges, and so forth, but the bottom line is that they are
> all doing basically the same thing on their machines, and E*trade is more
> of the same. Nobody is saying that OpenVMS isn't a great OS for a
> datacenter doing transaction processing - the problem is with the other 99%
> of computing applications that it isn't addressing.
>
> >However, while perhaps not fast enough for readers of this list, surely the
> >events of the last few months can be seen as steps in the right direction?
>
> Umm, let's see. A few months ago I had poor performance and lousy Unix
> compatibility, and today I have the same poor performance and lousy Unix
> compatibility, but hear that steps are being taken which may, or may not,
> adequately remedy this situation, and which will dribble out over the next
> (unspecified) time interval.
Now let's see. Where's that magic wand that will work miracles in 2
nanoseconds? Drat! Can't find it. Guess I'll have to resort to the old
method. Hard work for however long it takes. So distasteful.
> There has been no official commitment
> anywhere in precise and clear terms to deliver the improvements which
> really are needed (see above.) In the meantime, I can assume that a good
> fraction of the support costs we pay (admittedly, on the CSLG, much less
> than business folks pay) will not go towards OpenVMS development, but will
> instead be siphoned off to pay for development elsewhere in Compaq and to
> prop up the PC side of the business.
Now that's a real valid bitch, and you gotta wonder where all the VMS related
revenue goes, cause it sure isn't into profits which would push the stock price
higher. Must be some red ink somewhere drinking up the revenue, and I'm real
sure it isn't VMS. Might be what's paying for all the AlphaServer ads that
feature T64. Grrrrrrrrr!!!!!!
I have good performance, and don't give a damn about Unix compatibility.
However I do understand that there is more than one use for a computer, and what
works well for me may not work well for another. I also understand that what
doesn't work so well for me may meet another's needs quite well. I guess that
what rubs me the wrong way is your apparent indifference/denial of any type of
computing other than what you do.
Dave
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. Fax: 724-529-0596
DFE Ultralights, Inc. E-Mail: da...@tsoft-inc.com
T-Soft, Inc. 170 Grimplin Road Vanderbilt, PA 15486
Oh, we can appreciate them Bill, and acknowledge that VMS has fallen behind in
this area. It's just that the issue has been beat to death many times over, and
the VMS people have told us that they are working on a solution. I'm willing to
believe them. I'm also willing to believe that they will try to combine equal
or better performance with better reliability, cause it's the VMS way.
Yes, it's a problem! Yes, we acknowledge the deficiency, and feel that it never
should have been allowed to happen. No, it's not good that VMS doesn't handle
these applications better. Yes, it should be addressed and made much better.
No arguments! Complete agreement!
What we fail to appreciate is the continual flogging of the issue after it has
been hashed over countless times, and assurances have been delivered by the VMS
people that the issue is currently being addressed. So, how many times do we
have to hear about the same old problem before we move on to other issues while
waiting to see what the VMS people come up with? It's worse than spam!
So why don't you suggest some good defaults or a reasonable formula in
the FAQ? Then you'll have a place to point people when they complain.
This could suffice until a future release of VMS that incorporates the
new and improved defaults.
.../Ed
--
Ed Wilts
Mounds View, MN, USA
mailto:ewi...@mediaone.net
Right. The thing that seems to make the most difference is setting extend
properly. For instance, if I run gunzip "vanilla" on the DS10 it makes a
frenetic "ticky-ticky-ticky" sound as the heads bounce around the disk for
each extend. What I usually do is eyeball the size of the .gz, multiply by
three, and set extend to that. It speeds things up a lot and _sounds_
better, just one quick "bzzt" or "blaaaat" as it writes.
When my users run various programs I hear a lot of that "ticky-ticky-ticky"
noise.
Anyway, that suggests that the manner in which file extends are handled
could use some work in the performance area. I'm guessing that it actually
has to write the disk bitmap and maybe the directory each time it extends,
and that's three 8 ms delays as the heads move to the bitmap, then the
directory, then back to the end of the file. I may have the details wrong
but you can really hear the heads jumping around on the drives at each
extend. (One of the "benefits" of having your DS10 sitting on a large flat
table which does an excellent imitation of a drum head.)
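That "eyeball the .gz and multiply by three" rule of thumb can be sketched as a hypothetical helper (the function name, ratio, and demo file are all illustrative, not a real utility):

```python
import os
import tempfile

def extend_blocks(gz_path, ratio=3, block=512):
    """Pre-size the output so the file grows in one big extend instead of
    many small ones that each bounce the heads to the bitmap and directory.
    ratio=3 is the poster's guess for typical compressed text, not a
    measured constant; block is the 512-byte disk block size."""
    compressed = os.path.getsize(gz_path)
    return -(-compressed * ratio // block)  # ceiling, in blocks

demo = os.path.join(tempfile.mkdtemp(), "demo.gz")
with open(demo, "wb") as f:
    f.write(b"\0" * 1024)  # stand-in for a real compressed file
```

The computed block count would then be fed to the extend setting before running gunzip.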
> One of my pet peeves is the relatively poor settings for various RMS
> defaults -- they were good back when memory was tight, but they're not
> so hot with large-memory systems...
>
At least for files you can twiddle the RMS parameters, pipes are even worse.
The only parameter you can tweak sets the general mailbox sizes, so it
affects all mailboxes on the system, and when I cranked those up Decwindows
failed on the next login!
The limit I keep running into is that no matter what I do RMS IO always
seems to come out 2.5 - 6x slower than on Linux, for any memory to memory
operation. That was the ratio for netpipe tests through localhost,
and that was the same ratio for a record based copy test using a Ramdisk
(versus file caching.) If you look at
http://seqaxp.bio.caltech.edu/www/vmstcpip/COMPARISON.GIF
and examine the localhost tests on various systems you'll find the
interesting result that the plot is linear up to the point it bends over
due to saturation and the slope in the linear regions of the plots is the
same for all OS's. It's a log-log plot, so I guess that shift
indicates an overhead ratio which is proportional to the size of the data.
So if I'm thinking straight, and y = log(xfer rate) and x = log(buffer size),
then in the linear region y = ax + b, and with the common slope a = 1:
exp(y) = exp(x + b)
xfer rate = buffer size * exp(b)
xfer rate = buffer size * constant
That is, the performance is proportional to the buffer size,
and the constant is likely inversely proportional to the number of times
each byte is scanned (or average operations per byte.) That's a bit too
simplistic - I should probably fit the curves better to pick up any other
nonlinear terms. Anyway, it fits a model where Linux uses say 2 operations
per byte, TCP/IP Services about 6, and Multinet about 10 (or multiply all
of the above by the same factor.)
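That slope-and-intercept reading of the plot can be checked with a small least-squares sketch (the per-byte costs are made-up constants for illustration, not measurements of any of the stacks named above):

```python
import math

def loglog_fit(sizes, rates):
    """Least-squares slope and intercept of log(rate) vs log(size),
    i.e. the straight part of the netpipe curves before saturation."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Synthetic 'linear region' data for two systems that differ only in
# per-byte overhead (2 vs. 10 operations per byte):
sizes = [2 ** k for k in range(6, 16)]
fast = [s / 2 for s in sizes]
slow = [s / 10 for s in sizes]
a1, b1 = loglog_fit(sizes, fast)
a2, b2 = loglog_fit(sizes, slow)
```

Both slopes come out as 1 (rate proportional to buffer size), and the intercepts differ by exactly the ratio of the per-byte costs, which is the "constant" in the derivation above.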
>'Viable' is not a synonym for 'alive'. My (oldish) dictionary defines
>'viable' as 'able to live and develop under normal conditions', 'able to
>take root and grow'.
As a lover of words, I now always resort to Merriam-Webster. With a few
extensions to your definition, I too would have agreed with you. M-W now
includes the definition 3.2 "financially sustainable", which I think serves the
purpose.
I concede that I do despair of some of these "new" accepted definitions, but we
dinosaurs must move with the times.
[OT, I recently had some discussion with my wife who studies horticulture. She
used the word "vector" as a pollinator. Other than the mathematical usage, I
had only known it as a disease carrying host. No other printed dictionary that
I have recourse to includes "pollinator". M-W now includes it -- "now" is of
indeterminate date :-). As with most discussions with my wife, I invariably
have to concede defeat -- even though her superiority is not always justified by
resorting to M-W, usually I have to concede because she "said so", with the "or
else" implication :-).]
Regards, Paddy
Paddy O'Brien,
Transmission Development,
TransGrid,
PO Box A1000, Sydney South,
NSW 2000, Australia
Tel: +61 2 9284-3063
Fax: +61 2 9284-3050
Email: paddy.o'br...@zzz.tg.nsw.gov.au
Either "\'" or "\s" (to escape the apostrophe) seems to work for most people,
but that little whizz-bang apostrophe gives me little spam.
This decision was made in Washington.
Did you have any further questions ?
P.S. to Andrew, individuals participating in this newsgroup are no more
in control of the US government than you are in control of the
UK government.
VMS used to own the scada/process control market, but that has been lost
due to neglect, as well as increased computing power in distributed control
systems such as Honeywell TDC 3000, and desire of the PHBs/MGMs to run on
Windows NT, whether it is reliable or not.
A DejaNews search of the sci.engr.control newsgroup for "vms" will get few
hits.
As for running chip assembly lines, the attached article sounds like
that's another market that's migrating from VMS.
--Jerry Leslie (my opinions are strictly my own)
==============================================================================
From: David Woodbury <r28...@email.sps.mot.com>
Newsgroups: comp.software-eng
Subject: Re: unix/nt
Date: Tue, 21 Mar 2000 13:57:04 -0700
Organization: Motorola Semiconductor Products Sector
Jeanie:
You may want to post your question on the GW Associates mailing list. Check
out gwa.com for information how to join. The folks on that list have
semiconductor automation backgrounds and many have NT and Unix experience.
My own impression is that the commercial suppliers of MES software
(Manufacturing Execution Systems) are moving to NT in a big way - companies
such as PRI, Brooks Automation, and Consilium. Their move to NT may be a
factor in your company's decision to migrate to NT. There's some history
here too. Years ago, VMS was the MES host system of choice, then along came
Unix. Unix now has a fairly large representation in semiconductor fabs, but
by no means has it replaced VMS. From a software developer's perspective,
it's very tough to match the quantity and quality of tools available on NT.
(As well as price!) I'm talking about compilers, debuggers, libraries,
editors, integrated development environments, and configuration management
tools.
I also think that the biggest risk with a move to a new operating system has
as much to do with the maturity of the new applications as it does with the
OS itself. Regardless of what the new OS is, it'll be tough to duplicate
the reliability and stability of a system that's been in use for 5-10
years.
BTW: The comment about "90% of the world's microprocessors being made on VMS"
refers to the fact that most semiconductor fabs use some VMS-hosted MES for
tracking and controlling production activities. (PROMIS, Workstream)
Dave Woodbury
jeanie_...@my-deja.com wrote:
> The company I work for is doing a redesign/rewrite of the automation
> systems for a semiconductor fab. We are currently running HP-UX in a
> distributed environment. Most seem to be leaning towards moving the GUI,
> the business logic applications, statistical process control, the
> drivers etcetera to NT and I feel this is going to be a big,big,BIG
> mistake. No matter which path we choose, new hardware is going to be a
> necessity. We run 24x7 so 100% uptime is the goal. The arguments I hear
> in favor of NT are: It's cheaper. It's just as reliable as unix.
> There's more software for NT. Management supports it. If we limit what
> we run, we won't run into problems with the system files being
> overwritten. The list goes on and on.
>
> I have no hands-on experience with NT and I would like to know what problems
> anyone out there has run across with NT. I know this is like discussing
> religion or politics but please respond with facts I can use to either
> support or refute the NT path. No arguments can be won if there are not
> facts to back up the position taken.
>
> Sent via Deja.com http://www.deja.com/
> Before you buy.
David A Froble wrote:
> > Well, besides
> > their not wanting to rewrite their code, a lot of what they do involves the
> > generation and manipulation of a zillion small files, and OpenVMS is turtle
> > slow at that particular operation.
>
> As use of this type of data becomes more common, I wouldn't be surprised if
> someone produced a database product that specializes in storage and retrieval of
> this specific type of data.
>
Sure, the "lots of small files" approach is definitely unixy, and there are surely
better ways to do this in VMS. I can see the point of David's frustrations, but if you
want real performance from VMS surely you have to architect your code that way.
David A Froble wrote:
> Bill Todd wrote:
> >
> > Tim Llewellyn <tim.ll...@bbc.co.uk> wrote in message
> > news:395B88AC...@bbc.co.uk...
> >
> > ...
> >
> > > Not that stupid write thru/write back cache issue again, David? If people cannot
> > > appreciate the issues involved in performance versus caching strategy, surely
> > > they deserve every lost/corrupted nibble they experience.
> >
> > Rather, if VMS bigots can't appreciate them, they deserve to see the system
> > they like go down the drain: integrity is not at issue here, but you appear
> > incapable of understanding that.
>
OK, I have more data now on the applications in question, and it appears the
problem is not just a few tens of percent due to caching, as I had wrongly assumed.
That to me is an application architecture and/or deployment issue.
What worries me (playing the VMS bigot, cause someone has to sometimes)
is that VMS will get bogged down so much with all the Unix compatibility
that the performance and VMS-specific design methodology will suffer.
Just one - does the COE have any real value as a standard promoting
application portability? Since Solaris sources are even less likely to
work out of the box on NT than they do on OpenVMS NOW, even before the COE
work, the value of this "standard" seems questionable indeed.
Which isn't to say that the COE work going on in OpenVMS won't be useful -
anything that makes porting from Unix easier is a plus.
If there were anything like complete agreement, this discussion would have
ceased long ago.
>
> What we fail to appreciate is the continual flogging of the issue after it has
> been hashed over countless times, and assurances have been delivered by the VMS
> people that the issue is currently being addressed. So, how many times do we
> have to hear about the same old problem before we move on to other issues while
> waiting to see what the VMS people come up with?
Exactly as many times as people like Tim keep posting comments making it
clear that they *still* don't understand that the issue is purely a VMS
performance deficiency, not some kind of trade-off involving integrity.
This is the VMS bigot's version of the 'big lie' (reverse snake oil,
anyone?), and it gets repeated so often that without vigorous refutation it
seems likely to continue to dominate the thinking not only of participants
here but within the development group - which will not do VMS much good in
the real world.
- bill
>>>> Both Solaris and Tru64 are COE certified. So is HP-UX. Windows NT
>>>> is grandfathered in. ...
>>> ^^^^^^^^^^^^^^^^
>>>
>>> Gack! The new kid on the block is "grandfathered in"??? On
>>> what basis?
>>
>>This decision was made in Washington.
>>Did you have any further questions ?
>
> Just one - does the COE have any real value as a standard promoting
> application portability? Since Solaris sources are even less likely to
> work out of the box on NT than they do on OpenVMS NOW, even before the COE
> work, the value of this "standard" seems questionable indeed.
As I understand it, the COE standard is like the Posix standard.
It is intended for the people who want to write directly to the
COE standard -- both of those people.
What is the definition of viable ?
The proprietary operating system and software that runs the shuttle is quite
viable. It runs on 4 nasa vehicles and a few simulators. But don't expect ANY
growth for it.
and when you consider that Irish railway, let's assume it is running VMS 3.0,
then VMS 3.0 is going to remain viable for a very long period, until Compaq
stops maintaining VAX hardware.
COE protects existing customers. It helps alleviate the fears that potential
new customers may have about VMS's lifetime. But it does nothing to attract new customers.
COE definitely removes a brick from the Berlin wall, but until a lot more
bricks have been removed, you won't see many people crossing over to the land
of VMS.
Marketing and affordability remain to be tackled.
Are they moving in the right direction ? I would say a definite YES.
Can they rest on their laurels and watch the fruits of their labour: DEFINITE
NO.
They still have a long way to go.
> Yep, as is the Tier One status now enjoyed by the OS!
When and where was it announced that VMS now had Tier 1 status with Oracle? I had
heard that Tru64 had obtained that status (during the Wildfire launch), but never
heard of VMS getting that status.
> > Are not the recent endorsements from major Customers like E*Trade on the new
> > Alpha GS Series a good sign that Customers (and ISV's) are re-examining
> > their strategies around OpenVMS?
If VMS focuses on the small markets that need 98% reliability but tons of
performance, such as E*Trade, it may have a rude awakening once Tandem is
ported to Alpha and can offer that 99.999% reliability on the same
architecture: Alpha.
Tandem is poised to take some of the VMS markets once it gets the fast Alpha
chip, and if VMS wants to survive, it must spread its wings and be the
scalable VMS it was designed to be.
Instead of trying to port Unix to VMS, why not do the opposite: port VMS to
UNIX ?
After all, Unix is gaining VMS's clustering abilities, so it would just be a
matter of porting the system services to Unix, porting DCL as a unix shell
with TPU etc, and be done with it once and for all. Run the darn thing on
UNIX, and truly make it easy for remaining VMS customers to migrate to unix.
At least this way, by providing a VERY EASY path from VMS to UNIX, customers
would retain Alpha hardware and remain Compaq customers.
But the minute the porting effort to UNIX requires, well, an effort, whether
you port to Tru64, Linux or Solaris doesn't make much difference, and chances
are the customers won't want to remain with Compaq.
Simple, for a few reasons.
1) VMS isn't standing still. VMS clustering has been around
since 1983 and has evolved greatly even in the last 2 years.
2) Related to 1, APMP for now is a VMS only feature (aka Galaxy
in its VMS form).
3) Already a Unix that's VMS like.. Tru64 with a shared filesystem.
BUT , a subset of VMS in several aspects (for now).
4) Different paradigms. Unix filesystem is a strange beast
for a VMS person and vice-versa.
5) You can't make a silk purse out of a sow's ear ... VMS is
demonstrably a superset of Unix, as VMS supported Posix
at one time. Maybe if they gutted some of the finer aspects
of VMS they could get it over to Unix.
> After all, Unix is gaining VMS's clustering abilities, so it would just be a
> matter of porting the system services to Unix, porting DCL as a unix shell
> with TPU etc, and be done with it once and for all. Run the darn thing on
> UNIX, and truly make it easy for remaining VMS customers to migrate to unix.
>
Oh.. you just want some of the utilities and other features? Okay.
Sector 7 does stuff like this, among others.
Ummm... what about the rest of the internals (see 5 above)? Do
we need a list?
> At least this way, by providing a VERY EASY path from VMS to UNIX, customers
> would retain Alpha hardware and remain Compaq customers.
>
> But the minute the porting effort to UNIX requires, well, an effort, whether
> you port to Tru64, Linux or Solaris doesn't make much difference, and chances
> are the customers won't want to remain with Compaq.
Here would be a good indicator how easy it is to go in that
direction. When you see APMP showing up on a Tru64 roadmap,
let us know... otherwise we can *assume* it is a VERY hard
problem they are still working on or have pushed out far into
the future. By then, VMS would have had further engineering
applied, etc.
Rob
I'd rather play Russian Roulette with all chambers full!
If I wanted Unix, I'd have switched long ago. I didn't. I won't.
Typical troll.
> Let's shed a tear for the Netware bigots. Got a few around me
> here. The really sad fact is they are less than half the size of
> VMS in annual revenue and their revenues are tailing off badly.
Excluding hardware costs?
Kit.
kit # kits.net
> >Does the COE project discussed in Terry Shannon's articles not address some
> >of the issues you raise in the attached, i.e. combining the good features of
> >OpenVMS with some of the core functions associated with UNIX OS's?
> I don't know yet. The requirements for a competitive general purpose OS
> right now are very simple - it must be able to build and run code developed
> on Unix systems, with no more than a trivial amount of code changes (such
> as that between Unixes) required.
Why not the code developed on Windows systems? Because *you* like Unix more,
right?
> and it completes with no problems. That is, it builds with nothing more
> than minor modifications required to the makefile, the program source code,
> and when the binary runs it does everything at least as fast as Tru64 (or
> Linux), with the output produced exactly the same as on Tru64 (including
> especially when a "record" sent to a stream text file exceeds 32k.) And
> yes, there really must be a bash and tcsh, because Makefiles and other
> supporting "glue" for many packages will only work properly within those
> shells. If OpenVMS can't do that, then for small users who are not crazed
> by the thought of losing a few bytes of data, Linux and Tru64 will remain
> the OS of choice for Alpha.
If these "small users" really want to type all this weird stuff instead of
just clicking on the links in their browsers. Otherwise they will prefer
Windows and ix86.
> For 99% of
> the market Unix style NIS/NFS is adequate, and the insanely high prices for
> "real" OpenVMS clustering are an unjustifiable expense.
No. For 99% of the market, Windows style is adequate. I don't want to say
what is good for VMS (or even for me). What I want to say is that every
point you make in favor of a Unix-style VMS is even more compelling for a
Windows-style VMS. So either your position on the matter is completely
wrong, or you just selected the wrong "role model" for VMS; in either case
it should not be Unix.
[Sorry, I don't want to be offensive. Nothing personal, just my lame English
;) ]
> Now you may say, "Celera is running a huge, sort of datacenter, style
> installation, so why shouldn't they consider OpenVMS?" Well, besides
> their not wanting to rewrite their code, a lot of what they do involves the
> generation and manipulation of a zillion small files, and OpenVMS is turtle
> slow at that particular operation.
It reminds me of SETI@HOME. If someone uses "a zillion small files" and
doesn't want to rewrite their code, they don't really need very fast I/O. So
probably they shouldn't consider VMS ;)
> Nobody is saying that OpenVMS isn't a great OS for a
> datacenter doing transaction processing - the problem is with the other 99%
> of computing applications that it isn't addressing.
Actually, I am not sure that trying to "address the other 99% of computing
applications" is the right way to develop "a great OS for a datacenter".
IMHO, there would be a big risk of making it "not so great an OS for a
datacenter" (for technical or marketing reasons), with no guarantee of a
better-than-mediocre showing among "the other 99%".
I don't want to say that all the "enhancements" for "the other 99%" will be
wrong. But don't forget to consider that some of them could actually *harm*
VMS's current position, from technical and/or marketing points of view.
As an example of VMS-like functionality spoiled to meet mass-market "style"
expectations, for technical and marketing reasons, compare the Windows NT
Native (kernel) API with the Windows NT Win32 API.
Kit.
kit # kits.net
>
That was exactly the same deal with POSIX, and then "poof" one day it's
gone and the only trace of its existence is the much despised "Open" in
OpenVMS.
At least while it was around Posix worked moderately well as long as
you stayed inside that environment. However, the one thing it did not
address at all was the integration of OpenVMS machines into Unix
environments. That is, there was no NIS, and somewhat iffy NFS.
Will the COE work _finally_ add some reasonable way for a VMS machine to
become a NIS client (ie, login using the information it obtains from a NIS
passwd map) and/or allow an OpenVMS machine to serve NIS maps? If not,
then it isn't going to do much to reduce the HUGE energy barrier against
putting in an OpenVMS machine in a Unix environment, and that Everest of a
barrier is figuring out a way to make it play nicely with the Unix machines
in the same group. I've got methods for keeping my VMS/Unix and WNT
accounts in synch, but it's a _long_ way from plug and play, and I've never
been able to figure out any way to do things like "automount" on the
OpenVMS end. Conversely, a group with a bunch of Unix machines can just
buy another Unix machine, and know that making it available comes down to
not much more than telling the new machine where the NIS maps are and
telling the servers to let it see them.
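For reference, the NIS passwd map a client consumes is just keyed, colon-delimited passwd(5) records (what `ypmatch user passwd.byname` prints). A minimal sketch of turning one such record into login attributes follows; the sample account and field handling are illustrative assumptions only:

```python
from typing import NamedTuple

class PasswdEntry(NamedTuple):
    """One passwd(5)-style record as served by a NIS passwd map."""
    name: str
    passwd: str
    uid: int
    gid: int
    gecos: str
    home: str
    shell: str

def parse_passwd_line(line):
    # passwd(5) records are seven colon-separated fields
    name, pw, uid, gid, gecos, home, shell = line.strip().split(":")
    return PasswdEntry(name, pw, int(uid), int(gid), gecos, home, shell)

# e.g. the record ypmatch might return for a hypothetical account:
entry = parse_passwd_line("alice:x:1042:100:Alice A.:/home/alice:/bin/tcsh")
```

A VMS NIS client would still have to map these fields onto UAF attributes (UIC, default device and directory, and so on), which is exactly the integration gap described above.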
So can anybody at Compaq tell us if COE provides NIS?
The COE at this point is all smoke and mirrors. Any chance we could get
some sort of a brief checklist of what it will have in it, and what it
won't?
Others including Accelr8 Technology at www.accelr8.com
The principal value of the standard IMHO is that it guarantees that Compaq
will support OpenVMS for a minimum of 15 years. The Q expects to garner over
$500M in incremental revenue from COE-ified OpenVMS over the next five
years, but anyone can make revenue projections...
I can't and won't speak to what Compaq plans to do to provide
COE compliance for OpenVMS, but the standards documents for
DII/COE compliance are available at:
It's somewhat interesting to me that, I believe, DII/COE
compliance _requires_ that a large set of POSIX interfaces
be supported. I don't know if this means the resurrection
of OpenVMS POSIX or some other way to meet this requirement
will be seen.
I do know that this is not _just_ checklist stuff. There
is a testing and certification program which requires that
a large body of configuration and management software
will run in a DII/COE compliant environment.
I don't know if NIS, etc. are included.
>
> Regards,
>
> David Mathog
> mat...@seqaxp.bio.caltech.edu
> Manager, sequence analysis facility, biology division, Caltech
>
-Jordan Henderson
jor...@greenapple.com
Bill Todd wrote:
> Exactly as many times as people like Tim keep posting comments making it
> clear that they *still* don't understand that the issue is purely a VMS
> performance deficiency, not some kind of trade-off involving integrity.
> This is the VMS bigot's version of the 'big lie' (reverse snake oil,
> anyone?), and it gets repeated so often that without vigorous refutation it
> seems likely to continue to dominate the thinking not only of participants
> here but within the development group - which will not do VMS much good in
> the real world.
>
OK, maybe I missed some of the discussion. As I understand it now, what
David is saying is "Unix apps run slower on VMS than on Unix, mainly for
filesystem-related reasons." What I am saying is "sure, I knew that; to
get performance on VMS, design your application for VMS - especially,
don't use zillions of little files". Of course, if you can do
configuration management properly and understand computing rather than
just a limited aspect of one particular operating system, it doesn't seem
like that much hard work to make an application work efficiently with a
consistent, user-friendly (non-unix) user interface on a range of
platforms including VMS and unix. Oracle seems to be a good example.
David, how many of your "applications" were written by grad students and
postdocs?
> - bill
>
> It's worse than spam!
Yeah, but at least there ain't much more than one percent of the traffic you
create, Bill.
The current COE standards are established, and the OpenVMS work would
have to be evaluated as meeting these standards. The COE standards
are also evolving -- like most anything else in this business, it's
all a moving target.
:Any chance we could get some sort of a brief checklist of what it will
:have in it, and what it won't?
I am not aware of the availability of such a checklist as yet. Sorry.
(If you are in an area that has a direct interest in or requirement for
COE, I can certainly pass the request along.)
The first OpenVMS release with COE features will be limited -- the first
COE release looks much like the software equivalent of a limited hardware
release, and will likely have a release version number to match.
I would expect to then see more details and more discussions around an
OpenVMS release (containing COE features) after the OpenVMS V7.3 release.
V7.3 does not contain the COE features, those features will be in something
that may be called "V7.2-6C1", and then appearing "again" in an OpenVMS
release after V7.3.
I would expect that we will be talking rather more about the OpenVMS COE
work at the Fall CETS200 event in LA, and then more as the subsequent
OpenVMS release (with COE features) nears.
--------------------------- pure personal opinion ---------------------------
Hoff (Stephen) Hoffman OpenVMS Engineering hoffman#xdelta.zko.dec.com
> Anyway, that suggests that the manner in which file extends are handled
> could use some work in the performance area.
I liked Glenn Everhart's scheme, the details of which I don't quite remember,
but which are similar to this:
- take the higher of the amount requested and x% (e.g., x=10) of the amount
currently allocated to the file
- take the lower of the previous amount and some fraction (e.g., 1/3) of the
free space left on the volume
- take the lower of the previous amount and some fixed, volume-dependent
maximum (e.g., 1% of the volume's total space)
Thinking through various scenarios, this handles all I could think of well.
One could also track the last request and automatically truncate to that
(implied) size on close if an explicit $TRUNCATE isn't performed.
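As a sanity check, the three clamping steps above can be sketched in a few lines. The function name, parameter defaults (x=10%, 1/3 of free space, 1% of the volume) and block-based units are illustrative assumptions, not Everhart's actual values:

```python
def extend_size(requested, allocated, vol_free, vol_total,
                pct_of_alloc=0.10, free_fraction=1 / 3, pct_of_vol=0.01):
    """Choose how many blocks to extend a file by (hypothetical sketch)."""
    # 1. the higher of the request and x% of the current allocation
    size = max(requested, int(allocated * pct_of_alloc))
    # 2. but no more than some fraction of the free space on the volume
    size = min(size, int(vol_free * free_fraction))
    # 3. and no more than a fixed, volume-dependent maximum
    return min(size, int(vol_total * pct_of_vol))

# A 100-block request against a 10,000-block file on a roomy volume is
# rounded up to 1,000 blocks; on a nearly full volume the free-space
# clamp wins instead.
```

Combined with truncate-on-close, any over-allocation from step 1 is reclaimed later, so the scheme can afford to err toward fewer, larger extends.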
Jan
With all due respect, I appreciate COMPAQ's problems trying to balance
reliability, performance and expense for different groups of customers. I
acknowledge it does provide products suitable for typical disk needs. However,
COMPAQ does not offer a high performance disk solution for VMS. This is
FORCING segments of the industry to migrate off VMS. COMPAQ concentrates on
medium performance, high reliability, huge capacity solutions at outlandish
prices with a peculiar emphasis on clusters. This is probably its biggest
customer base. But what good is a mini lacking high performance I/O?
I belong to a part of the industry COMPAQ is neglecting. We need high
performance disk solutions. We judge products by their sustained bandwidth
when the disk is spiraling. We stubbornly insist that bandwidth should be
linearly related to the spindle count. And assert that until you can spiral
well, you can not build efficient disk products. We realize most of COMPAQ's
customer base does not need high performance disk solutions. However, we
expect $600,000.00 units to be able to meet our needs without question. In
short we believe COMPAQ has got it exactly backwards.
While the changes you referred to will benefit all customers, VMS and COMPAQ
storage products have inherent problems no amount of RMS or cache tweaking will
resolve. In advance excuse my bluntness, I am normally positive about VMS, but
I am angry about what it takes to get decent disk I/O and I am constantly
confronted by programmers who suggest dumping VMS because of its poor I/O
performance.
So there is no confusion about what high performance means, our current goal is
20MB/second/spindle sustained all day. Ideally a disk controller should be
able to support 4 drives before saturating. We do not expect COMPAQ to
perfectly meet our goals. But we do expect them to at least try. This is what
my industry needs and there are others, such as video with similar
requirements.
In this area VMS lags behind other contenders. Frankly, the aging RAID
solutions available for VMS just plain suck on both cost and performance when
compared to competitors. ULTRA 3 is not even supported. Tacking fiber in
front of old products does not inspire confidence. Too many COMPAQ products
are UNIX or PC only. Even if they work under VMS (ProLiant hardware), they are
not qualified. The VMS low cost RAID hardware (KZP) needed serious redesign
years ago. A KZP trying to spiral is an engineering embarrassment. The HSZ
does somewhat better, but is ridiculously expensive. The sales literature is
confusing at best (ULTRA2 like rates?). The sales people just don't know and
recommend products that can't cut it. There are no programs to exercise and
display Max I/O (shame on engineering, look what's available on the PC). The
software RAID can't get out of its own way and constantly makes an observer
question why it is doing what it is doing. Binding 2 HSZs makes a good case
for opensource and peer review. And then there are VMS limitations.
In this environment, a heavy VMS I/O user is forced to look at third party
hardware. There are a lot out there far cheaper and faster than COMPAQ
products. Many of these are aimed at the PC and UNIX markets. However,
maintenance concerns aside, when you plug them into your VMS mini you will find
their performance disappointing. Now remove the hardware and plug it into a
$400.00 PC running NT and watch the same hardware achieve sterling performance.
A clear indication there are VMS issues? In any case it's a humiliating
demonstration for a VMS supporter to observe.
One well known VMS problem is split I/Os. VMS can't write more than 127 blocks
at a time to disk (CIs excluded). While VMS handles this situation fairly
well, 3rd party RAID hardware does not. NT and UNIX do not have a 127 block
barrier. Overcoming the 127 block barrier has been rumoured to be in the next
release of VMS for as long as I can remember. Don't even bother asking the
COMPAQ sales force or even a lot of the technical people about split I/Os. For
the skeptical out there, do the following...
$ ANALYZE/SYSTEM
SDA> SHOW DEVICE DKA100    ! or any SCSI disk
SDA> FORMAT UCB
At the bottom of the screen you will find UCB$L_MAXBCNT. It will read FE00.
That's the max number of bytes you can xfer to the device, 127 blocks. If you
want to do 80MB/sec too many splits are required. If I seem angry keep in mind
that I have experienced many jokes about offloading disk I/O VMS can not handle
to a cheap PC running NT. COMPAQ please, please fix the '127 block feature'
quickly.
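To make the arithmetic behind that FE00 value concrete: 0xFE00 bytes is 65024 = 127 x 512, so any larger request must be split. A small illustrative sketch (names invented) of how a transfer fragments under that cap:

```python
BLOCK = 512
MAXBCNT = 0xFE00                     # 65024 bytes = 127 disk blocks

def split_io(start_lbn, nblocks, max_blocks=MAXBCNT // BLOCK):
    """Break one logical transfer into chunks the device limit allows."""
    chunks = []
    while nblocks > 0:
        count = min(nblocks, max_blocks)
        chunks.append((start_lbn, count))  # one QIO per chunk
        start_lbn += count
        nblocks -= count
    return chunks

# A single 1 MB request (2048 blocks) fragments into 17 separate I/Os.
chunks = split_io(0, 2048)
```

At 80MB/sec that fragmentation means thousands of extra requests per second, which is why the limit matters for streaming workloads.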
So what can one do? COMPAQ RAID hardware won't cut it, its software RAID is
pitiful and 3rd party products have trouble under VMS. We are buying 2 disk
controllers (ULTRA 2 ) and 4 drives. We have written a small program that does
disk striping on a file basis for reads and writes only. No fancy RMS
features just QIOs in buckets of 127. No parity disk, hot swapping, shadows,
backups or anything else. A hack and kludge surely. But nothing COMPAQ makes
comes close to it in performance. And the cost is rock bottom. We were
willing, ready and eager to spend a million but could not find a product that
worked. As for the inevitable comments about software kludges, until there are
VMS products available that offer similar performance there are no good
alternatives.
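The file-level striping hack described above amounts to a RAID-0 address mapping done in user code. A hypothetical sketch of that mapping (the bucket size matches the 127-block transfer limit; all names and layout details are assumptions, not DW's actual program):

```python
CHUNK = 127  # blocks per bucket, matching the 127-block transfer limit

def stripe_map(file_block, count, ndrives):
    """Map a contiguous range of file blocks onto (drive, drive_block,
    length) pieces, round-robin across drives in CHUNK-block buckets."""
    pieces = []
    while count > 0:
        stripe = file_block // CHUNK      # which bucket, file-wide
        drive = stripe % ndrives          # round-robin drive selection
        row = stripe // ndrives           # how deep on that drive
        offset = file_block % CHUNK       # position inside the bucket
        n = min(count, CHUNK - offset)
        pieces.append((drive, row * CHUNK + offset, n))
        file_block += n
        count -= n
    return pieces

# 300 blocks across 2 drives: two full buckets, then 46 blocks on drive 0.
```

Because consecutive buckets land on different drives, the pieces for one large request can be issued as concurrent QIOs, which is where the linear scaling with spindle count comes from.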
We are certain COMPAQ does not understand our needs and wonder where they are
going. We do not understand why customers are left to qualify COMPAQ's
hardware for VMS or write their own striping software. We are worried no
other customers are complaining and wonder if COMPAQ realizes what its
competitors are providing.
Sorry to dump on you,
Screaming not streaming,
DW
...
> While the changes you referred to will benefit all customers, VMS and COMPAQ
> storage products have inherent problems no amount of RMS or cache tweaking
> will resolve.
...
Storage has been one of Compaq's (and earlier DEC's) strengths for quite a
while, and if it's slipping that's serious. People in comp.os.vms don't
seem as sensitive to high storage performance as some (witness the reaction
to the caching discussion), so this may not be the best place to gain
corroboration for your assessment or generate the pressure on Compaq that
your description of the situation suggests is necessary.
Corroboration comes first: if Compaq can sell all the storage it wants to,
then it'll take a good deal more than one critical customer to make it
listen. But I already know of a bunch of reasons that large storage vendors
like Compaq and EMC ought to be looking over their shoulders and trying to
stay technically current, and this is another big one if the situation is
anywhere nearly as bad as you describe it.
...
One should point out that typical IDE controllers (at least historically -
it's possible that the situation has improved recently, but I think that
would have cropped up in recent discussions elsewhere if it had) support
only 16 or 17 memory segments per transfer, which in the NT environment
translates to 64 KB worth of 4 KB pages. So on NT at least IDE drives
suffer from the same split-I/O size limit that VMS has. And while SCSI does
not have a similar limitation, NT may handle both the same way (and at least
some Unix systems default the max disk transfer size to 128 KB or so, though
that can be bumped up IIRC).
But with SCSI drives, unless the adapter and drive themselves can't handle
the command-processing overhead generated by 64 KB chunks (which seems
unlikely), it's not clear why using that request size (and building up a
request queue at the disk to avoid missed rotations) should do anything more
than expend some additional system processing resources that larger chunks
could conserve. And indeed if you've achieved reasonable performance using
QIOs, that suggests that this is indeed the case.
So if QIOs are dramatically more effective than RMS in this situation,
either you weren't taking advantage of RMS, or RMS needs serious work. To a
first approximation, RMS block I/O facilities, using asynchronous
multi-buffering as presumably you are with the QIOs, should be competitive
with direct QIOs: did you try that and find that this is not the case? And
if you want RMS to handle the asynchronous multi-buffering for you, you can
use its read-ahead and write-behind mechanisms (though IIRC they're limited
to sequential-file use, and that might automatically bring in
record-processing overheads as well, which could make a noticeable
difference): did you try that?
If indeed you tried these things and still found dramatic differences
between RMS and QIOs, then that's important for Compaq to know about (and
there are people here who will likely make sure they find out) - so please
expand on the description of your experiments if you have time to.
>
> We are certain COMPAQ does not understand our needs and wonder where they
are
> going. We do not understand why customers are left to qualify COMPAQ's
> hardware for VMS or write their own striping software.
The specific deficiencies you found in the striping software (at least my
vague impression was that VMS supported RAID-0-style striping) would likely
be of interest to the development group, should you care to report them
here.
My understanding is that Compaq qualifies disks for VMS mostly to ensure
that they correctly support multi-initiator operation (for clustering),
which has historically been a problem even though the SCSI specification has
been unambiguous in this area for a long time. Since the typical VMS system
is clustered, and since most commodity SCSI drives seem to work with VMS
just fine in non-clustered operation (though it's been said that poor
support for some mode pages has caused problems in a few cases), this
attitude is at least somewhat understandable.
But it certainly wouldn't be unreasonable for Compaq to qualify at least its
own storage products on VMS.
We are worried no
> other customers are complaining and wonder if COMPAQ realizes what its
> competitors are providing.
My guess would be that no other VMS customers are complaining because there
aren't many doing the kinds of things you are with VMS (which is far from
saying that this market shouldn't be of interest to VMS).
My other guess would be that Compaq is as clueless as to what its
competitors are providing as it is in other areas. The VMS development
group still has a bunch of competent people in it, but they are a bit
isolated and likely more than a bit shell-shocked as well after so many
years of neglect.
- bill
Our experience is that the read performance of OpenVMS is not bad.
We tested three OSes: Windows NT, Sun Solaris 2.6, and
OpenVMS 7.1-1H1. The disk was an 18.2GB IBM DGHS disk. We measured
reads into the null device. On all three OSes we got nearly the same
result (14.9MB/s...15.2MB/s). The write performance is another matter.
With the disk drive's write cache disabled, performance dropped to
3.6MB/s under OpenVMS. On the other OSes we saw write performance
(writing a 128MB file) only a little lower than the read performance
(14.5MB/s); I think the difference was the handling time (read from one
disk, write to the other). Putting the OpenVMS disk into an unsafe
state (write cache enabled with the freeware tool RZDISK) raised the
performance to the same value as the other OSes (14.5MB/s for both a
128MB and a 1024MB file). The problem is that the user cannot normally
choose whether the write cache should be enabled or disabled.
Regards Rudolf Wingert
Jan
I try not to get involved in these discussions
but...
Have you tried writing a simple program
doing let's say, 32k transfers using $qio on
HSG80 with writeback caching (and a recent FC
adapter, the Emulex LP8000 aka KGPSA-CA)?
Hint, > 50mbyte/sec is attainable...
That doesn't mean there aren't problems with
RMS and/or directory performance (even after
the 7.2 enhancements.. Certainly a B-tree based
redesign with journaling for metaupdates would be
quite welcome)
but in many mission-critical environments
(we do custom TP systems, as an example) having
to use QIO (and or FAST_IO) directly isn't a
problem, we prefer it to using RMS...
/m
...
> I try not to get involved in these discussions
> but...
>
> Have you tried writing a simple program
> doing let's say, 32k transfers using $qio on
> HSG80 with writeback caching (and a recent FC
> adapter, the Emulex LP8000 aka KGPSA-CA)?
>
> Hint, > 50mbyte/sec is attainable...
This is not all that impressive a number, given that the last FC external
RAID I happen to have been involved with (a Ciprico 8 + 1 RAID-3 array,
about 2 years ago) would sustain about 85 MBytes/sec without write-back
caching enabled (not that write-back caching should be all that effective in
enhancing streaming video applications that use asynchronous
multi-buffering).
- bill
The normal approach to obtaining good streaming performance from a SCSI disk
is to use asynchronous multi-buffering such that the disk always has one or
more outstanding requests ready to be satisfied, and can just keep sending
or receiving the data without pause. This does not require enabling
write-back caching for writes to perform well. You can ask RMS to handle
this for you (using its read-ahead/write-behind options) in some cases.
If you weren't doing this, the reason you obtained decent streaming-read
performance was because the disk was automatically pre-fetching for you.
Depending on the intelligence of its algorithm, you might or might not have
been able to do somewhat better by doing your own multi-buffering.
- bill
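The multi-buffering idea Bill describes is easy to sketch outside VMS as well. This illustrative Python fragment keeps several reads in flight, with a thread pool standing in for outstanding QIOs; the chunk size and queue depth are arbitrary assumptions:

```python
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024   # per-transfer size (assumed)
DEPTH = 4           # requests kept outstanding at the device

def stream_read(path):
    """Read a file while always keeping several pread()s queued, so the
    disk can stream without waiting on the consumer between transfers."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        with ThreadPoolExecutor(max_workers=DEPTH) as pool:
            # queue one read per CHUNK-sized region; results come back
            # in submission order, so the joined data stays in order
            futures = [pool.submit(os.pread, fd, CHUNK, off)
                       for off in range(0, size, CHUNK)]
            return b"".join(f.result() for f in futures)
    finally:
        os.close(fd)
```

A production version would bound how many requests are queued at once; the point here is only that the next transfer is already waiting when the current one completes, which is what keeps the spindle streaming.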
>
> Regards Rudolf Wingert
>
>
Yes, I am aware that there are controllers out there which have
better performance than the HSG80; however, my point was simply that VMS
attained the documented max bandwidth for large sequential transfers
(which can be found in some HSG document on the StorageWorks site).
50M/sec is certainly "good enough" for us; other factors such as
reliability (including multibus failover) are much more important
in our particular context.
BZZZZZT. Wrong answer, try again.
For a UNIT text file operation, going RAMDISK to RAMDISK on OpenVMS, or
file cache to file cache on Linux, on identical DS10s, the result is that
the Linux system is 2.5-6.5 times faster. This is for an operation like
"read text record from input, write text record to output" - pure IO. It
doesn't matter if you do this in 1 file or in 1000 you're already starting
out with the Unix systems "lighter" text file handling mechanisms 3X faster
than those on OpenVMS. And it goes downhill from there, rapidly, because
the lack of effective file caching on OpenVMS throws in another factor
of 10 advantage to Linux. The only time I can get similar performance
from my VMS box is when the IO is minimal (a CPU bound program) or the IO
is done differently, via memory mapping or some other mechanism to bypass
RMS.
>
>David, how many of your "applications" were written by grad students and
>postdocs?
Oh, most of them. That's the reality I've got to live with. However,
while the quality of the code may vary, I cannot lay the blame there. Their
programming approaches are reasonable and work well on other OS's. The
fault is with OpenVMS, which simply has atrocious IO performance when used
on small systems with reasonably small data files, even AFTER you've gone
to RAMdisks. Take a look at the benchmarks I posted a while back - the
only way to speed them up on OpenVMS is to avoid fprintf() and instead use
routines which bypass both the C RTL and RMS and go straight to the QIO
level. That is unacceptable - virtually all software now is written in C
(or C++, which I have not tested but assume has the same problems) and it
is completely unreasonable that the OpenVMS user should have to rewrite
reasonably written code so completely in order to achieve adequate
performance on this OS.
David Mathog
mat...@seqaxp.bio.caltech.edu
Manager, sequence analysis facility, biology division, Caltech
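The workaround David alludes to, bypassing per-record fprintf() overhead by batching records into large block-sized writes, looks roughly like this. It is an illustrative sketch, not his actual routines; the class name and the 64 KB threshold are invented:

```python
import io

BLOCK = 64 * 1024  # flush threshold (assumed)

class BlockWriter:
    """Accumulate small text records and emit them in large block writes,
    instead of paying record-handling overhead on every output call."""
    def __init__(self, raw):
        self.raw = raw          # any file-like object opened for bytes
        self.buf = io.BytesIO()

    def write_record(self, text):
        self.buf.write(text.encode() + b"\n")
        if self.buf.tell() >= BLOCK:
            self.flush()        # one large write replaces many small ones

    def flush(self):
        self.raw.write(self.buf.getvalue())
        self.buf = io.BytesIO()

out = io.BytesIO()
w = BlockWriter(out)
for i in range(3):
    w.write_record(f"record {i}")
w.flush()
```

On VMS the `raw` target would be block-mode QIOs rather than a stream file; the buffering pattern is the same either way.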
**************************************************************************
* RIP VMS *
**************************************************************************
Polite discourse would be much better received.
>>David, how many of your "applications" were written by grad students and
>>postdocs?
>
> Oh, most of them. That's the reality I've got to live with. However,
> while the quality of the code may vary, I cannot lay the blame there. Their
> programming approaches are reasonable and work well on other OS's. The
> fault is with OpenVMS, which simply has atrocious IO performance when used
> on small systems with reasonably small data files, even AFTER you've gone
> to RAMdisks. Take a look at the benchmarks I posted a while back - the
> only way to speed them up on OpenVMS is to avoid fprintf() and instead use
> routines which bypass both the C RTL and RMS and go straight to the QIO
> level. That is unacceptable - virtually all software now is written in C
> (or C++, which I have not tested but assume has the same problems) and it
> is completely unreasonable that the OpenVMS user should have to rewrite
> reasonably written code so completely in order to achieve adequate
> performance on this OS.
You do seem to claim that your complaint is with VMS in general, and
only incidentally does it slip out that your concerns are only with
C. You blame RMS, and then it's revealed that you are actually calling RMS
through the C Runtime library.
Try some tests in Ada, Pascal or Fortran, or else make it quite
clear in your strident posts that yours is a C-only viewpoint.
...
> You do seem to claim that your complaint is with VMS in general and
> only incidentally does it slip out that your concerns are only with
> C. You blame RMS, and then it's revealed that you are actually calling RMS
> through the C Runtime library.
>
> Try some tests in Ada, Pascal or Fortran, or else make it quite
> clear in your strident posts that yours is a C-only viewpoint.
This observation may be reasonable with respect to wording of the post to
which it responded, but should not be taken as any indication that default
performance using RMS does not have similar problems - since it does indeed
have very similar problems. They can be alleviated significantly by
explicitly taking advantage of optional RMS features, but the result still
often falls considerably short of the performance one can achieve on Unix
just using default settings (and without sacrifice in integrity, since in
both environments judicious use of $FLUSH or fsync is required at any point
where a guarantee that written data is actually on the disk is important).
- bill
> This observation may be reasonable with respect to wording of the post to
> which it responded, but should not be taken as any indication that default
> performance using RMS does not have similar problems - since it does indeed
> have very similar problems. They can be alleviated significantly by
> explicitly taking advantage of optional RMS features, but the result still
> often falls considerably short of the performance one can achieve on Unix
> just using default settings (and without sacrifice in integrity, since in
> both environments judicious use of $FLUSH or fsync is required at any point
> where a guarantee that written data is actually on the disk is important).
Certainly "default settings" cannot serve all. But sometimes I program
for my 10 MB MicroVAX II, and sometimes I program for customer machines
of unknown memory size, so resource consumption must always be part of
the design for me. I have heard of someone who ordered a Wildfire because
they had money in the budget (when perhaps they could have used a smaller
machine).
Believe it or not, some programs are actually written where the human
rather than the disk is the limiting factor.
While I am sure there are cases where default performance on some Unix
is better than tuned performance on VMS, I am also sure there are cases
(not necessarily disk-related) where the reverse is true. So DEQ should
make improvements to VMS limitations, but changing defaults is a cosmetic
change that should not be used to cover up a need for real improvement.
>For a UNIT text file operation, going RAMDISK to RAMDISK on OpenVMS, or
>file cache to file cache on Linux, on identical DS10s, the result is that
>the Linux system is 2.5-6.5 times faster. This is for an operation like
>"read text record from input, write text record to output" - pure IO. It
>doesn't matter if you do this in 1 file or in 1000 you're already
>starting out with the Unix systems "lighter" text file handling
>mechanisms 3X faster than those on OpenVMS. And it goes downhill from
>there, rapidly, because the lack of file effective file caching on
Sure, for academia speed will always win out over integrity. You are
overlooking the stability Files-11 gives you over the unix disk stream.
In the real world, where integrity is king, we want to know the data
actually made it to the disk (hopefully into a nice record-oriented
indexed file and not some fprintf() stream) when the call returns. Write
caching may seem like a wonderful thing, but it makes checkpointing a
nightmare.
Roland
--
-----------------------------------------------------------
yyy...@flashcom.net To Respond delete ".illegaltospam"
MR/2 Internet Cruiser 1.52
For a Microsoft-free universe
-----------------------------------------------------------
>Try some tests in Ada, Pascal or Fortran, or else make it quite clear in
>your strident posts that yours is a C-only viewpoint.
Actually on VMS BASIC seems to be the most closely mated to RMS (sans
Macro of course).
PHM: Somebody just told me Unix's file system is faster than VMS's. Is that
true?
Tech: Well, yes, it's about x% faster, but VMS's filesystem is safer.
PHM: So if we migrate to Unix, everything will run x% faster?
Tech: That depends on how much IO we're doing, but it might corrupt our
files.
PHM: Everybody else seems happy with it, how much does it cost?
Tech: Well, depending on which one, it's somewhere between free and z%
cheaper than VMS, but....
PHM: Will it run on that lovely, cheap, fast Intel chip I keep hearing
about, the Puntiam?
Tech: Pentium. Well, yes but the Alpha's faster.
PHM: What's its clock speed?
Tech: Somewhere in the 800's
PHM: That's not as fast as the Pantyloom, I've heard that goes at 1000mhz
Tech: /Pentium/! Alpha's a different architecture type, called RISC, so
the mhz doesn't....
PHM: Alpha's a risk? I want a plan on my desk by tomorrow morning for
migrating to Unix on Pinto.
The trouble is, their eyes glaze over when you try to explain anything
vaguely technical, and all VMS's advantages are technical. It's not
particularly pretty, it's not instinctive to someone used to Billyboxes,
it's not advertised and raved about all over management magazines or on
television, and when they have heard the name VMS anytime in the last five
years it's been immediately prefixed with either "legacy" or "migrate
from". It's going to take a lot of time, effort and money to turn all that
around.
Shane
yyyc186.ill...@flashcom.net on 07/05/2000 07:33:55 PM
To: Info...@Mvb.Saic.Com
cc:
Subject: Re: OpenVMS loses big, was: RE: Compaq advertises
In <8jvmhh$b...@gap.cco.caltech.edu>, on 07/05/00
at 10:33 PM, mat...@seqaxp.bio.caltech.edu (David Mathog) said:
>For a UNIX text file operation, going RAMDISK to RAMDISK on OpenVMS, or
>file cache to file cache on Linux, on identical DS10s, the result is that
>the Linux system is 2.5-6.5 times faster. This is for an operation like
>"read text record from input, write text record to output" - pure IO. It
>doesn't matter if you do this in 1 file or in 1000 you're already
>starting out with the Unix systems "lighter" text file handling
>mechanisms 3X faster than those on OpenVMS. And it goes downhill from
>there, rapidly, because of the lack of effective file caching on
Sure, for academia speed will always win out over integrity. You are
overlooking the stability Files-11 gives you over the Unix disk stream.
In the real world, where integrity is king, we want to know the data
actually made it to the disk (hopefully into a nice record-oriented
indexed file and not some fprintf() stream) when the call returns. Write
caching may seem like a wonderful thing, but it makes checkpointing a
nightmare.
Roland
Bill Todd writes:
>>>
The normal approach to obtaining good streaming performance from a SCSI disk
is to use asynchronous multi-buffering such that the disk always has one or
more outstanding requests ready to be satisfied, and can just keep sending
or receiving the data without pause. This does not require enabling
write-back caching for writes to perform well. You can ask RMS to handle
this for you (using its read-ahead/write-behind options) in some cases.
If you weren't doing this, the reason you obtained decent streaming-read
performance was because the disk was automatically pre-fetching for you.
Depending on the intelligence of its algorithm, you might or might not have
been able to do somewhat better by doing your own multi-buffering.
<<<
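For anyone who hasn't written this kind of code, the multi-buffering Bill describes can be sketched in a few lines. This is a minimal illustration using POSIX AIO, which is my choice of API for the sketch (RMS write-behind or VMS $QIO/AST code would look different); the file name, buffer size, and chunk count are all made up. While the device drains one buffer, the program fills the other, so the device always has a request ready:

```c
/* Double-buffered asynchronous writer (sketch).  While one buffer is
   being written by the device, the other is being refilled. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (64 * 1024)

/* Writes nchunks * CHUNK bytes to path; returns total bytes, -1 on error. */
long double_buffered_write(const char *path, int nchunks)
{
    static char buf[2][CHUNK];
    struct aiocb cb;
    const struct aiocb *pending[1] = { &cb };

    if (nchunks < 1)
        return -1;
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;

    for (int i = 0; i < nchunks; i++) {
        /* Fill the idle buffer (dummy data here) while the other one
           may still be in flight. */
        memset(buf[i & 1], 'A' + i % 26, CHUNK);
        if (i > 0) {                        /* reap the previous write */
            while (aio_error(&cb) == EINPROGRESS)
                aio_suspend(pending, 1, NULL);
            if (aio_return(&cb) != CHUNK) { close(fd); return -1; }
        }
        cb.aio_buf    = buf[i & 1];
        cb.aio_nbytes = CHUNK;
        cb.aio_offset = (off_t)i * CHUNK;
        if (aio_write(&cb) != 0) { close(fd); return -1; }
    }
    while (aio_error(&cb) == EINPROGRESS)   /* drain the final write */
        aio_suspend(pending, 1, NULL);
    long total = (aio_return(&cb) == CHUNK) ? (long)nchunks * CHUNK : -1;
    close(fd);
    return total;
}
```

The same ping-pong shape applies to read-ahead: issue the next read before consuming the buffer the device just filled.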
This is not our experience. Multi-buffering brings a little more
performance, but not the gain that the disk's onboard write cache brings.
We have tested it with Windows NT. Disabling the disk cache (read or
write) brings the performance down from 15.2 MB/s to 3.6 MB/s (in our test
environment; an 18.2 GB IBM DGHS disk). This value is relatively
system-independent. So whatever you do with RMS, you can't speed up past
the measured value.
Regards Rudolf Wingert
P.S. The OpenVMS test was a little unrealistic, because we used a
     contiguous file. But I think that on all systems non-contiguous
     files will bring a performance degradation.
- bill
<yyyc186.ill...@flashcom.net> wrote in message
news:3963f11b$2$lllp186$mr2...@news.flashcom.com...
[RMS multibuffering and asynchronous I/O]
> This is not our experience. Multi-buffering brings a little more
> performance, but not the gain that the disk's onboard write cache brings.
> We have tested it with Windows NT. Disabling the disk cache (read or
> write) brings the performance down from 15.2 MB/s to 3.6 MB/s (in our test
> environment; an 18.2 GB IBM DGHS disk). This value is relatively
> system-independent.
> P.S. The OpenVMS test was a little unrealistic, because we used a
> contiguous file.
But that's what the original post was talking about: writing data sequentially
to files that are a large fraction of a disk's capacity, and (much) larger
than any on-disk cache. In that case, the cache might as well not exist as
long as you can keep the device fed with operations to perform. Ergo, if your
performance drops when you disable the cache in this scenario, you're not doing
the I/O properly in the first place.
Jan
Given that you obtained the same result on NT (which does have competent
SCSI drivers), I suspect the former.
- bill
Rudolf Wingert <w...@fom.fgan.de> wrote in message
news:2000070606...@fom.fgan.de...
> Hello,
>
> Bill Todd wrotes:
>
> >>>
> The normal approach to obtaining good streaming performance from a SCSI disk
> is to use asynchronous multi-buffering such that the disk always has one or
> more outstanding requests ready to be satisfied, and can just keep sending
> or receiving the data without pause. This does not require enabling
> write-back caching for writes to perform well. You can ask RMS to handle
> this for you (using its read-ahead/write-behind options) in some cases.
>
> If you weren't doing this, the reason you obtained decent streaming-read
> performance was because the disk was automatically pre-fetching for you.
> Depending on the intelligence of its algorithm, you might or might not have
> been able to do somewhat better by doing your own multi-buffering.
> <<<
>
> This is not our experience. Multi-buffering brings a little more
> performance, but not the gain that the disk's onboard write cache brings.
> We have tested it with Windows NT. Disabling the disk cache (read or
> write) brings the performance down from 15.2 MB/s to 3.6 MB/s (in our test
> environment; an 18.2 GB IBM DGHS disk). This value is relatively
> system-independent. So whatever you do with RMS, you can't speed up past
> the measured value.
>
> Regards Rudolf Wingert
>
> P.S. The OpenVMS test was a little unrealistic, because we used a
>
> Regards Rudolf Wingert
>
> P.S. The OpenVMS test was a little bit unrealistic, because we used a
Well, yes, sure. I can't distinguish between RMS and C on top of RMS,
based on tests written in C - but I can hardly run tests written to call
RMS on Linux! The closest I came to a test of "pure" RMS was a
comparison of APPEND on VMS, with >> on Linux, the latter being much
faster. I suppose that one could compare RMS calls with write()
calls, but that's moving far away from the initial problem, which was
a comparison of portable code run on both systems.
>
>Try some tests in Ada, Pascal or Fortran, or else make it quite
>clear in your strident posts that yours is a C-only viewpoint.
Fair enough. The only other language I have on both systems is
Fortran 90. Here's one simple test - all it does is write a large
file:
! MAKETESTF.f
! makes a 16000 entry fasta file, each containing a 500 bp sequence
!
! more or less verbatim translation of maketest.c.
!
      integer i,j
!
      open(unit=10,name='test.nfa',form='FORMATTED',status='NEW'
     1     ,recl=132,carriagecontrol='LIST')
      do i=1,16000
        write(10,1000) i
        do j=1,10
          write(10,1005)
        end do
      end do
 1000 format('>test',i5.5)
 1005 format('AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA')
      stop
      end
(Note that the file it produces is not identical at the binary level
on the two platforms, but it is at the "text" level.)
Compaq F90 on both platforms, both DS10s. Run times on a RAMdisk:
OpenVMS 7.2-1 using RAMdisk 2.05 seconds (extend=0,buffer=0)
OpenVMS 7.2-1 using RAMdisk 1.78 seconds (extend=8000,buffer=255)
Linux, using file cache 1.20 seconds
So in this instance Linux beats OpenVMS by either 48% (tuned) or 71% (not
tuned). OpenVMS is still slower, but not quite so egregiously as it is
with C. I reran all the tests on physical disks +/- highwater marking
to put it all into perspective, using the IO benchmark testing programs
at:
http://seqaxp.bio.caltech.edu/pub/SOFTWARE/MYBENCHMARK.ZIP
on two DS10s, one VMS 7.2-1, the other RH 6.2, both with intraserver
U2W controllers and LVD disks, lots of memory. I reran these partly
because I've discovered a problem when PIPE is used to pull out the
timing lines from the small number of output lines produced by the
test procedure. That is, when I ran it like:
$ pipe @testsplit | search sys$input elapsed
some tests came out much worse than when I did it like
$ @testsplit
Compare columns 4 and 5 to see the effect. I knew PIPE throughput
was bad, but really didn't expect this result at all. Keep in mind
that the output from the procedure just looks like:
MAKETESTF: Create test file, fortran
Elapsed time in 1/100ths of a second: 243, Device operations = 1127
MAKETEST: Creating test file,C
Elapsed time in 1/100ths of a second: 954, Device operations = 1098
etc.,
so we're talking a mere handful of bytes moving through the pipe. (I
suspect that the RMS settings are somehow interacting with PIPE to
produce this bizarre result.)
TEST 1 2 3 4 5 6
OS VMS VMS VMS VMS VMS VMS LINUX
disk hard hard hard hard hard RAM hard (file cached)
buffer 0 0 255 255 255 0
extend 0 0 2000 2000 2000 0
highwater Yes NO YES NO NO NO
pipe? NO NO NO NO YES NO
MAKETESTF: 26.26 22.09 3.86 2.20 22.07 2.05 1.20
MAKETEST: 25.60 21.28 10.90 9.15 21.17 .77 .30
MYSTART: .04 .04 .03 .03 .03 .01 .00
MYOPENIN .02 .05 .10 .03 .03 .01 .00
MYREAD .46 .49 .52 .57 .45 .38 .10
MYOPEN 7.69 7.78 7.74 7.75 7.68 .67 .10
MYSPLIT 42.83 37.75 54.75 25.25 37.56 1.38 .40
So, Larry is right, other languages are not so bad as C. But I'm
also right - you have to go to a lot of work to get the IO even close
on OpenVMS to what is obtained with no effort on Linux (Unix). And
while it's nice to know that Fortran is quicker than C, it doesn't
help me at all because all the code I receive is written in C,
something derived from C (like Perl), or even C++ - been a long time
since I've seen anything new written in Fortran.
Regards,
Then again, in the real world sometimes the technical people don't know what
they're talking about any more than their managers do. This particular tech
needs to ask which Unix is under consideration before being able to make
such a statement authoritatively, but (as is too often the case) simply has
assumed that his own limited experience (or, worse, something someone told
him over a beer one night) has general applicability.
- bill
> TEST 1 2 3 4 5 6
> OS VMS VMS VMS VMS VMS VMS LINUX
> disk hard hard hard hard hard RAM hard (file cached)
> buffer 0 0 255 255 255 0
> extend 0 0 2000 2000 2000 0
> highwater Yes NO YES NO NO NO
> pipe? NO NO NO NO YES NO
>
> MAKETESTF: 26.26 22.09 3.86 2.20 22.07 2.05 1.20
> MAKETEST: 25.60 21.28 10.90 9.15 21.17 .77 .30
> MYSTART: .04 .04 .03 .03 .03 .01 .00
> MYOPENIN .02 .05 .10 .03 .03 .01 .00
> MYREAD .46 .49 .52 .57 .45 .38 .10
> MYOPEN 7.69 7.78 7.74 7.75 7.68 .67 .10
> MYSPLIT 42.83 37.75 54.75 25.25 37.56 1.38 .40
>
>
> So, Larry is right, other languages are not so bad as C. But I'm
> also right - you have to go to a lot of work to get the IO even close
> on OpenVMS to what is obtained with no effort on Linux (Unix). And
> while it's nice to know that Fortran is quicker than C, it doesn't
> help me at all because all the code I receive is written in C,
> something derived from C (like Perl), or even C++ - been a long time
> since I've seen anything new written in Fortran.
Well I think it _should_ help you know who to talk to at DECUS
(in your region) next fall. The caching representative may be
the one who is going to provide the next near-term assistance,
but the C RTL representative is the one with whom you should
really spend time. There have been a lot of compatibility
complaints over the years, and perhaps they don't even know
people are looking for performance.
>"Terry C. Shannon" wrote:
>> (Terry here...) COE not only will equip OpenVMS with "Solaris-like" APIs, it
>> will guarantee that OpenVMS remains viable for a minimum of 15 years.
>> Probability Factor: 0.9999...
>
>What is the definition of viable?
>
>The proprietary operating system and software that runs the shuttle is quite
>viable. It runs on four NASA vehicles and a few simulators. But don't expect
>ANY growth for it.
>
>And when you consider that Irish railway, let's assume it is running VMS 3.0,
>then VMS 3.0 is going to remain viable for a very long period, until Compaq
>stops maintaining VAX hardware.
>
>COE protects existing customers. It helps alleviate the fears that potential
>new customers may have about VMS's lifetime. But it does nothing to attract
>new customers.
>
>COE definitely removes a brick from the Berlin wall, but until a lot more
>bricks have been removed, you won't see many people crossing over to the land
>of VMS.
>
>Marketing and affordability remain to be tackled.
>
>Are they moving in the right direction? I would say a definite YES.
>Can they rest on their laurels and watch the fruits of their labour? A
>DEFINITE NO.
>
>They still have a long way to go.
>
>> Yep, as is the Tier One status now enjoyed by the OS!
>
>When and where was it announced that VMS now had Tier 1 status with Oracle? I
>had heard that Tru64 had obtained that status (during the Wildfire launch),
>but never heard of VMS getting that status.
>
>> > Are not the recent endorsements from major Customers like E*Trade on the new
>> > Alpha GS Series a good sign that Customers (and ISV's) are re-examining
>> > their strategies around OpenVMS?
>
>If VMS focuses on the small markets that need 98% reliability but tons of
>performance, such as E*Trade, it may have a rude awakening once Tandem is
>ported to Alpha and can offer that 99.999% reliability on the same
>architecture: Alpha.
Do you really think they are going to lower the price of Tandem
Hardware and Software? This is historically one of the most expensive
hardware and software combinations in existence. The Alpha will not
make the machines any less expensive than MIPS and the port to Alpha
has to be paid for somehow.
While the hardware prices may not increase in proportion to other
brands, the software houses that write for the Tandem architecture
probably will not reduce their prices.
>
>Tandem is poised to take some of the VMS markets once it gets the fast Alpha
>chip, and if VMS wants to survive, it must spread its wings and be the
>scalable VMS that it was designed for.
--
Art Rice **
Special Data Processing Corporation
--------------------------------------
All opinions expressed are mine and do
not reflect the views of my employer.
No, but where VMS currently has an edge on Tandem is in performance, since VMS
benefits from Alpha and 64-bit computing. Tandem has equal or better fault
tolerance than VMS, although VMS's clustering does offer a few things Tandem doesn't.
But once Tandem has moved to Alpha, then VMS loses its performance edge. Why
should Compaq then continue to support both Tandem and VMS, with very similar
customers and missions?
Wouldn't it be much better to somehow combine both OSes into a single
unified one? Such a bigger OS would have greater market share, more stamina
to ward off serious systems wannabes (NT et al.) and also fewer internal
political fights as to which of the two will be the golden child and which
will be sent to a corner and told to be quiet.
Yeah, Compaq will continue to support each system as long as it remains
profitable... We've seen what such statements do...
Default extent is 25% of the file's current size; you can select
the fraction. Never grabs more than 1/8 of free space unless
the program asked for more.
I found that essentially all the time files are truncated on close,
so there is no need to deal with that separately.
Jan Vorbrueggen wrote:
>
> mat...@seqaxp.bio.caltech.edu (David Mathog) writes:
>
> > Anyway, that suggests that the manner in which file extends are handled
> > could use some work in the performance area.
>
> I liked Glenn Everhart's scheme, the details of which I don't quite remember,
> but which are similar to this:
>
> - take the higher of the amount requested and x% (e.g., x=10) of the amount
> currently allocated to the file
> - take the lower of the previous amount and some fraction (e.g., 1-3) of
> free space left on the volume
> - take the lower of the previous amount and some fixed, volume-dependent
> maximum (e.g., 1% of the volume's total space)
>
> Thinking through various scenarios, this handles all I could think of well.
> One could also track the last request and automatically truncate to that
> (implied) size on close if an explicit $TRUNCATE isn't performed.
>
> Jan
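The scheme Jan sketches is easy to pin down as code. Here is a hedged sketch using the example numbers from the post (x = 10%, one third of free space, 1% of the volume); the function name and the exact fractions are mine, not anything VMS or Glenn Everhart actually implemented:

```c
/* Extend-size heuristic, all sizes in blocks.  The fractions are the
   example values from the post, not measured or official numbers. */
#include <stdint.h>

uint64_t extend_size(uint64_t requested, uint64_t allocated,
                     uint64_t volume_free, uint64_t volume_total)
{
    uint64_t ext = requested;

    uint64_t pct = allocated / 10;         /* x% (x = 10) of current size */
    if (pct > ext) ext = pct;              /* higher of request and x% */

    uint64_t free_cap = volume_free / 3;   /* a fraction of free space */
    if (ext > free_cap) ext = free_cap;    /* lower of previous and that */

    uint64_t vol_cap = volume_total / 100; /* fixed cap: 1% of volume */
    if (ext > vol_cap) ext = vol_cap;      /* lower of previous and cap */

    return ext;
}
```

A small request against a big file grows geometrically, while a huge request on a nearly-full volume is clamped, which covers the scenarios Jan mentions.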
I've seen an Oracle import running like a dog with a peak I/O
request queue on a disk of 498 requests and an average of about 90. The
disk had no lack of work to do but it was not accomplishing very much.
Message text written by Jan Vorbrueggen
NT might indeed have competent SCSI drivers, but I've seen some
indications not all are; indications like NT vendors finding it
strange to have a driver check the RC bit or some other bits which
control whether a device may simply invent data. (I've also
seen worse sins; not all vendors do great work.)
The situation with these OSs may be far more complex. Consider
that unix uses SCSI reserve/release to control disk access
(maybe not all unix but Solaris certainly does). Consider too that
NT uses a pervasive caching system. In the details discussing this
(have a look at Windows NT Filesystem Internals) it is mentioned
that a (mis)feature of this design is that NT cannot absolutely
guarantee writes ever make it to disk. (The boundary conditions
that are involved are a tad rare, but they exist.) VMS goes to great
lengths, IF the disk supports it, to queue I/O requests at the
disk rather than in the OS (to minimize latency).
I'd say there are way too many variables unknown here to
assign a definite cause. I have heard of writing speedups when
disk writeback cache is enabled, though these depend much on
how much data moves, and turning on that feature at times also
enables disk firmware bugs. I've seen such (but again, fortunately,
not too often).
By actual scsi bus monitors, VMS can keep a SCSI bus pretty well
saturated. There are fewer options for port drivers than on PCs,
but the ones that there are can, apart from the occasional glitch
which does get fixed, smoke the bus. I would also note that
a number of VMS utilities multibuffer and use heavy AST programming
for you. If their unix or nt counterparts do not, so much the
worse for them. However and as I said, without many more bits
of information, intelligent commentary on relative speeds
of transfers is tough to do.
That the VMS filesystem needs work is apparent; it is getting
a rewritten cache, and additional filesystems are being worked on.
I just hope that Hoff or someone else will release some ACP
and driver skeletons, soon rather than late, so the rest of us
can easily wire in Reiserfs or whatever else pleases us into VMS.
The area is not well documented and while some of us have poked
around enough to know it somewhat, VMS has way too few user
written filesystems, which would be tailorable to what people
need.
Only because SCSI command processing happens to take longer than
transferring a single block does: otherwise, the request size is irrelevant
to streaming bandwidth, as long as you keep at least one request always
queued at the drive.
And in the context of this discussion, 125 blocks is not particularly large,
and in fact is within the VMS QIO size limit that has been brought up as a
possible problem (which it isn't, at least in this context, since for
sequential access 125-block requests will stream at the maximum data rate of
the drive, because the transfer time *does* exceed the command-processing
time and thus the queue can be kept non-empty).
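The arithmetic behind that claim can be written down. Assuming the drive can process the next queued command while the current transfer is still moving data (so only the larger of the two times limits throughput), the effective sequential rate is a simple function; the command-overhead figure used below is an assumption for illustration, not a measurement:

```c
/* Effective sequential bandwidth with at least one request always
   queued at the drive.  media_rate in bytes/s, cmd_time in seconds. */
double effective_rate(double req_bytes, double media_rate, double cmd_time)
{
    double xfer_time = req_bytes / media_rate;
    double per_req   = (xfer_time > cmd_time) ? xfer_time : cmd_time;
    return req_bytes / per_req;
}
```

With the 15.2 MB/s figure from earlier in the thread and an assumed 1 ms of command processing, a 125-block (64,000-byte) request is transfer-limited and streams at full media rate, while single-block (512-byte) requests would be command-limited at roughly 0.5 MB/s.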
> I've seen an Oracle import running like a dog with a peak I/O
> request queue on a disk of 498 requests and an average of about 90. The
> disk had no lack of work to do but it was not accomplishing very much.
If it was not transferring at maximum bandwidth, it was because the requests
weren't physically sequential on the drive. Or because they were so small
that the command processing overhead exceeded the transfer time (as noted
above), though in that case you would see the queue form in the driver, not
at the disk.
- bill
Not if it highlights a VMS deficiency. But in this particular case, the
situation was merely that I had no idea what Rudolf might be running for VMS
SCSI drivers and was commenting that because the behavior he observed was
the same on NT (where I have experience that the default SCSI drivers handle
asynchrony just fine - as I would expect VMS's to do as well, but don't have
the direct experience to draw upon) the chances were that his code was the
problem.
...
> The situation with these OSs may be far more complex. Consider
> that unix uses SCSI reserve/release to control disk access
> (maybe not all unix but Solaris certainly does). Consider too that
> NT uses a pervasive caching system. In the details discussing this
> (have a look at Windows NT Filesystem Internals) it is mentioned
> that a (mis)feature of this design is that NT cannot absolutely
> guarantee writes ever make it to disk. (The boundary conditions
> that are involved are a tad rare, but they exist.)
I believe you're in error. If you access a file with FILE_FLAG_NO_BUFFERING, a
WriteFile() operation will not complete until the proffered data is on the
disk. Period.
The only remaining question is whether, if the write caused meta-data to be
updated, that meta-data is also guaranteed to have been updated on disk
before operation completion. I spent over a month attempting to verify this
with Microsoft (using their highest available level of support, so they were
motivated to persevere) a few years ago, and eventually got an unambiguous
affirmative answer, supposedly verbatim from the development group.
That answer may or may not also apply to a FLUSH operation on NT (though one
would hope it does). But it is a means of ensuring that data has made it to
disk.
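For comparison, the POSIX-world way to get the same guarantee is O_SYNC (or an explicit fsync() before close). A sketch of the idea, not NT code and not a claim about how WriteFile() is implemented internally:

```c
/* Open with O_SYNC so each write() returns only after the data is on
   stable storage; fsync() before close also pushes file metadata. */
#include <fcntl.h>
#include <unistd.h>

/* Returns bytes durably written, or -1 on error. */
long durable_write(const char *path, const void *data, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, data, len);   /* durable when it returns */
    if (n >= 0 && fsync(fd) != 0)       /* belt and braces: metadata */
        n = -1;
    if (close(fd) != 0)
        return -1;
    return (long)n;
}
```

As on NT, metadata durability is the subtle part; fsync() is the documented way to cover both data and metadata on POSIX systems.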
> VMS goes to great
> lengths, IF the disk supports it, to queue I/O requests at the
> disk rather than in the OS (to minimize latency).
That statement makes no sense to me: one of us is confused.
>
> I'd say there are way too many variables unknown here to
> assign a definite cause.
Nothing in life is certain - even things you believe you have observed
directly. But in this case, the relative slow-down without disk caching
enabled is close enough to what one would predict as the consequence of
missing a rev for every 63.5 KB transferred that I have no problem putting
that forth as the *likely* explanation.
- bill
If you would like to compete with other OSes, you have to measure under the
same conditions. Yes. But in most cases other OSes use features which are
insecure and/or hidden from the user. In our institute we do not need as
much security (for normal work) as others do. Because of this we have
disabled highwater marking and erase-on-delete. If you have a scratch disk,
or a local disk used exclusively for page/swap files, you will speed up
system performance with an enabled disk write cache. If you have
asynchronous multi-buffered disk I/O, the write cache will increase
performance too: the write can start at any point of a track, without
waiting a full turn back to the start of that track.
We have also tested random read access times with single- and double-buffered
I/O. With modern disks we see mostly the same access time; with older disks
the difference is much bigger. Also, modern disks have a max. access time of
14 ms, older ones 25 ms. So the disk I/O performance is as good (or as bad)
as that of other OSes.
Regards Rudolf Wingert
I suspect you'll be much nearer using unformatted I/O with Fortran, which I
would consider reasonable for your application.
Jan
How does one install it? I found it on freeware CDROM version 4 in a .ZIP
archive, but it unpacked to raw files, and favoid.man says to use
VMSINSTAL, but I've only ever seen that work on BACKUP savesets.
Thanks,
>Art Rice wrote:
>> Do you really think they are going to lower the price of Tandem
>> Hardware and Software? This is historically one of the most expensive
>> hardware and software combinations in existence. The Alpha will not
>> make the machines any less expensive than MIPS and the port to Alpha
>> has to be paid for somehow.
>
>No, but where VMS currently has an edge on Tandem is in performance, since VMS
>benefits from Alpha and 64-bit computing. Tandem has equal or better fault
>tolerance than VMS, although VMS's clustering does offer a few things Tandem doesn't.
>
>But once Tandem has moved to Alpha, then VMS loses its performance edge. Why
>should Compaq then continue to support both Tandem and VMS, with very similar
>customers and missions?
>
>Wouldn't it be much better to somehow combine both OSes into a single
>unified one?
Start coding... OpenVMS is AFAIK a "general purpose OS". NSK
(Guardian) was designed from the ground up for OLTP. I mean
everything. That's one of the reasons they were so far behind
everyone else implementing TCP/IP. Tandem was founded around 1974, and
EXPAND, X.25, and SNAX handled all the intersystem communication that
was needed until sometime around 1990.
The Tandem OS depends on a specific machine architecture. We are not
just talking about a few redundant components. Nearly all components
inside the box are redundant. (drive pairs can be split but why pay
the high cost if you are going to destroy the fault tolerance?)
One thing Tandem needs to fix ( if they want to expand further than
web based apps) is terminal support. Pathway applications can only be
defined as Tandem or 3270. If you don't tell it to support 3270 it
won't even do that. (security through obscurity again.)
If you have a friend in your organization or city that has access to
TIM (the Tandem documentation CDs) have him/her show you the manuals
on "Introduction to Tandem NonStop Systems", "Introduction to
Pathway", and any of the server description manuals, for example the
K20000 Server Description Manual.
If the two OSs are to be merged, it will probably take until 2010 to get
the first beta version ready to go.
>Such a bigger OS would have greater market share, more stamina
>to ward off serious systems wannabes (NT et al.) and also fewer internal
>political fights as to which of the two will be the golden child and which
>will be sent to a corner and told to be quiet.
>
>Yeah, Compaq will continue to support each system as long as it remains
>profitable... We've seen what such statements do...
--
VMS has X.25 support. VMS has SCS, which is probably the equivalent of SNAX (or
would DECnet be its equivalent?).
> The Tandem OS depends on a specific machine architecture.
Yes, but that architecture is being changed to Alpha-based products. Sure,
those hardware products will differ from current Alpha products in their
hardware redundancy, but then again, the original VAXft machines also
differed from standard VAXes.
> One thing Tandem needs to fix ( if they want to expand further than
> web based apps) is terminal support.
Yes, these old huge Tandem terminals were *interesting* to say the least.
> If the two OSs are to be merged, it will probably take til 2010 to get
> the first Beta version ready to go.
However, if the major middleware for Guardian were ported to VMS, and
if the remainder of the operating system services were ported to VMS, wouldn't
it be possible to port all Tandem applications to VMS with an ease comparable
to that of the port from MIPS to Alpha?
Remember that VMS did have fault tolerance in its VAX line, so that would have
to be ported over to ALPHA.
So, if VMS on alpha were to be made fault tolerant, and if the middleware on
Guardian (base32 etc) were to be ported to VMS, wouldn't it be fairly easy to
then port the Guardian applications to VMS and as a result provide a single
high-availability platform that would have much greater market mindshare than
both Guardian and VMS operating as small unknown quantities ?
..
:However, if the major middleware for Guardian were ported to VMS, and
:if the remainder of the operating system services were ported to VMS, wouldn't
:it be possible to port all Tandem applications to VMS with an ease comparable
:to that of the port from MIPS to Alpha?
The mind boggles at the effort involved with the redesign and the
reimplementation of the necessary core of the Tandem NSK APIs onto
another operating system run-time environment, and then all of the
hauling over and altering and testing that would be needed for the
applications and the whole rest of the expected application environment.
The current porting involved in moving NSK and its applications over
to a custom-designed fault-tolerant Alpha hardware platform is a large
project -- having all of this hardware and having to layer the NSK Kernel
and (potentially) having OpenVMS itself running on this fault-tolerant
Alpha hardware would increase the effort involved by orders of magnitude.
(And to what gain, and for what customer market?)
I am not in a position to discuss plans (if any) to port OpenVMS itself
over to this fault-tolerant Alpha product.
:Remember that VMS did have fault tolerance in its VAX line, so that would
:have to be ported over to ALPHA.
OpenVMS is less of an issue here than the hardware -- there are (were)
various system services layered onto OpenVMS for control and management
of the VAXft hardware, but the central part of the VAXft product involved
the hardware itself. Things like the instruction cross-check processing,
hardware-level component shadowing, and core hardware hot-swap would all
be needed. These are neither trivial nor cheap...
:So, if VMS on alpha were to be made fault tolerant, and if the middleware on
:Guardian (base32 etc) were to be ported to VMS, wouldn't it be fairly easy to
:then port the Guardian applications to VMS and as a result provide a single
:high-availability platform that would have much greater market mindshare than
:both Guardian and VMS operating as small unknown quantities ?
Other than all of the really difficult stuff involved here, yes, this
project would be quite easy to accomplish. :-)
--------------------------- pure personal opinion ---------------------------
Hoff (Stephen) Hoffman OpenVMS Engineering hoffman#xdelta.zko.dec.com
> NSK(Guardian) was designed from the ground up for OLTP.
> I mean everything.
> [..SNIP...]
>If you have a friend in your organization or city that has access to
>TIM (the Tandem documentation CDs) have him/her show you the manuals
>on "Introduction to Tandem NonStop Systems", "Introduction to
>Pathway", and any of the server description manuals, for example the
>K20000 Server Description Manual.
Do you know, offhand, any texts (such as might be had in
a technical book store) that describe the Tandem architectures?
or better still, any online resources?
it's something I'd long wanted to read up on ...
Message text written by "Bill Todd"
No, there is not (not on normal SCSI drives, anyway). As long as the write
requests are queued to the drive asynchronously, such that the drive is
always in possession of the next request when the current one completes, the
data will be streamed onto the disk without interruption.
- bill
> Fragavoid does this, edits io$_modify on the fly...
>
Any relation to Freakazoid? (cartoon super hero developed by Spielberg)
:-)
--
===============================================================================
Wayne Sewell, Tachyon Software Consulting (281)812-0738 wa...@tachysoft.xxx
http://www.tachysoft.xxx/www/tachyon.html and wayne.html
change .xxx to .com in addresses above, assuming you are not a spambot :-)
===============================================================================
Otter, on dining with Bluto:"It's perfectly safe if you keep your arms and legs
away from his mouth."
What I don't understand is this sudden interest in trying to convert users of a
successful solution to something else. Sounds like 'migrate to NT' to me.
While there may be some customer categories that use both systems (and that
could be true for almost any commercial OS), the details of how something is
done can be important to people. If you didn't really care about the details,
you'd be using NT now.
Dave
--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. Fax: 724-529-0596
DFE Ultralights, Inc. E-Mail: da...@tsoft-inc.com
T-Soft, Inc. 170 Grimplin Road Vanderbilt, PA 15486
Perhaps in your client base this is true, but in mine data integrity is
paramount. There have been many attempts to replace the VMS system with
other systems. Mostly Eunuchs based and a few NT fiascos. Each of those
attempts led to the slaughter of the CEO, CIO, and a few levels of VP
below. The nice part is that Oracle has been relegated to little more
than an off-line data storage mechanism. In the world of real-time and
reliability this client learned the hard way...Oracle can't play.
Roland
>Sadly, this is not usually how things happen. In the real world speed is
>often king because PHMs don't get the advantages:
>PHM: Somebody just told me Unix's file system is faster than VMS's. Is
>that true?
>Tech: Well, yes, it's about x% faster, but VMS's filesystem is safer.
>PHM: So if we migrate to Unix, everything will run x% faster?
>Tech: That depends on how much IO we're doing, but it might corrupt our files.
>PHM: Everybody else seems happy with it, how much does it cost?
>Tech: Well, depending on which one, it's somewhere between free and z% cheaper
>than VMS, but....
>PHM: Will it run on that lovely, cheap, fast Intel chip I keep hearing
>about, the Puntiam?
>Tech: Pentium. Well, yes but the Alpha's faster.
>PHM: What's its clock speed?
>Tech: Somewhere in the 800's
>PHM: That's not as fast as the Pantyloom, I've heard that goes at 1000 MHz.
>Tech: /Pentium/! Alpha's a different architecture type, called RISC,
>so the MHz doesn't....
>PHM: Alpha's a risk? I want a plan on my desk by tomorrow morning for
>migrating to Unix on Pinto.
>The trouble is, their eyes glaze over when you try to explain anything
>vaguely technical, and all VMS's advantages are technical. It's not
>particularly pretty, it's not instinctive to someone used to Billyboxes,
>it's not advertised and raved about all over management magazines or on
>television, and when they have heard the name VMS anytime in the last
>five years it's been immediately prefixed with either "legacy" or
>"migrate from". It's going to take a lot of time, effort and money to
>turn all that around.
>Shane
>yyyc186.ill...@flashcom.net on 07/05/2000 07:33:55 PM
>To: Info...@Mvb.Saic.Com
>cc:
>Subject: Re: OpenVMS loses big, was: RE: Compaq advertises
>In <8jvmhh$b...@gap.cco.caltech.edu>, on 07/05/00
> at 10:33 PM, mat...@seqaxp.bio.caltech.edu (David Mathog) said:
>>For a UNIT text file operation, going RAMDISK to RAMDISK on OpenVMS, or
>>file cache to file cache on Linux, on identical DS10s, the result is that
>>the Linux system is 2.5-6.5 times faster. This is for an operation like
>>"read text record from input, write text record to output" - pure IO. It
>>doesn't matter if you do this in 1 file or in 1000 you're already
>>starting out with the Unix systems "lighter" text file handling
>>mechanisms 3X faster than those on OpenVMS. And it goes downhill from
>>there, rapidly, because of the lack of effective file caching on
>Sure, for academia speed will always win out over integrity. You are
>overlooking the stability Files-11 gives you over the Unix disk stream.
>In the real world, where integrity is king, we want to know the data
>actually made it to the disk (hopefully into a nice record-oriented
>indexed file and not some fprintf() stream) when the call returns. Write
>caching may seem like a wonderful thing, but it makes checkpointing a
>nightmare.
>Roland
--
-----------------------------------------------------------
yyy...@flashcom.net To Respond delete ".illegaltospam"
MR/2 Internet Cruiser 1.52
For a Microsoft-free universe
-----------------------------------------------------------
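Roland's requirement that "the data actually made it to the disk ... when the call returns" maps onto synchronous writes. A minimal sketch, assuming a POSIX system where `O_SYNC` is available (the path and data here are just placeholders):

```python
# Open the file for synchronous writes: each write() returns only after
# the data reaches stable storage, instead of sitting in a volatile
# write cache. Assumes a POSIX system (os.O_SYNC).
import os

def durable_write(path, data):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # also flush file metadata before returning
    finally:
        os.close(fd)
```

This is the durability/speed trade-off the thread is arguing about: every such call pays the full device latency that write caching would otherwise hide.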
Message text written by "Bill Todd"
It's somewhat hard to believe that any competently-designed controller
wouldn't allow its disks to stream data if they could.
Your example of Oracle 8 KB transfers falls somewhere in the grey area,
though: on a modern disk, getting 8 KB off the platter takes only 0.5 ms.
or less, which may well be less than the time it takes to process the next
SCSI command (the newest, fastest SCSI standards may finally have found a
way to speed up command processing, which at least until recently has been
stuck at SCSI-1 levels despite the bandwidth improvements in other
respects): if so, then the disk will miss a rev once in a while, but each
time it does, it will give a few more commands the opportunity to be
processed, so it won't miss a rev on *every* 8 KB request (and the request
queue at the disk shouldn't grow very large).
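The arithmetic in that paragraph can be made explicit. All figures below are assumed values for a circa-2000 drive, chosen only to show the relationship Bill describes, not taken from any datasheet:

```python
# Assumed figures, for illustration only.
media_rate = 16e6                  # bytes/s sustained off the platter
xfer_time = 8 * 1024 / media_rate  # time to move 8 KB: ~0.5 ms
cmd_overhead = 1e-3                # ~1 ms to accept/process a SCSI command
rpm = 10_000
rev_time = 60.0 / rpm              # one revolution: 6 ms

# When accepting the next command takes longer than the transfer itself
# (cmd_overhead > xfer_time), the drive occasionally idles past the
# start of the next block and misses a revolution. Each miss costs one
# rev_time, but gives several queued commands time to be accepted, so
# not every 8 KB request misses a rev.
```

Under these assumptions the 8 KB transfer (~0.5 ms) is indeed shorter than command acceptance (~1 ms), which is the grey area the paragraph describes.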
With a long queue, it sounds a lot more like the requests are randomly
distributed around the disk, rather than physically contiguous. Since each
request will then take a few ms. to complete (due to seek and rotational
latency), a lengthy queue can indeed develop. However (duh...), IIRC SCSI
drives don't normally support more than 256 requests concurrently queued at
the drive - so the example you gave sounds as if the queuing was in fact in
the driver.
The difference between Oracle and BACKUP is likely that Oracle just submits
disk requests from a large number of threads or processes without attempting
to coordinate them (trusting the underlying OS and disk facilities to do
so), whereas BACKUP is submitting a single, multi-buffered read or write
stream to the disk and just submits requests fast enough to keep the queue
from becoming empty (since it usually takes only a few buffers to reach
optimal performance).
- bill
This is a reason to look for qualified disks. Lots of disks out there
have no clue how to do tagged queueing. Some claim to do it but
do bizarre things so whatever support is there cannot be used.
Qualification tests this behavior, and others.
Command delays can be at SCSI level, in the conversation with the
control processor, in port driver control logic, or at class driver
and above ($qio or the like). If you don't measure, it is difficult
to know where this is. Once VMS passes an IRP to a class driver, it
knows nothing about what goes on below that level.
While on this topic, remember that ACP_DATACHECK can cause slowdowns
in SCSI processing by forcing data compare on metadata writes. I always
turn this off, because it not only prevents tagged queueing across such
operations, but forces a rotational delay in rereading data.
>
> Richard B. Gilbert <DRA...@compuserve.com> wrote in message
Oracle 8 was importing a table, with a couple of indices. There was no
other activity on the system except my process running monitor. The DBA
says that the table file was being written sequentially; if we were
bouncing the heads around for every block written, I could understand its
taking a long time.
Message text written by "Bill Todd"
>Richard B. Gilbert <DRA...@compuserve.com> wrote in message
Given that this was the VMS queue, there's no congestion point comparable to
that of SCSI command acceptance (at something like 1 per ms.) to limit the speed
with which the queue depth can increase. So if, for example, Oracle built
up the entire table structure in memory and then just submitted 8 KB write
requests as fast as it could form them from the pre-built in-memory data, it
could potentially build up quite a request back-log even if the disk was in
fact streaming them to the platters at peak bandwidth.
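That backlog scenario can be put in a toy model: requests formed in memory faster than the disk drains them, so the VMS-side queue grows even while the disk streams at full bandwidth. The rates below are assumed, purely illustrative figures:

```python
# Toy steady-rate model of queue growth. Rates are assumed figures.
def backlog(submit_rate, service_rate, seconds):
    """Outstanding requests after `seconds`; never below zero."""
    return max(0.0, (submit_rate - service_rate) * seconds)

# e.g. 8 KB writes formed at 4000/s against a disk draining 1500/s
# leaves 2500 requests queued after only one second:
depth = backlog(4000, 1500, 1.0)
```

The model also shows why no backlog develops in the BACKUP case: when the submitter merely keeps the queue non-empty, the effective submit rate matches the service rate.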
Or the VMS file containing the Oracle table may have been heavily fragmented
below the Oracle level, though fragmentation sufficient to slow things down
as much as you suggest by saying "The disk had no lack of work to do but it
was not accomplishing very much" seems somewhat unlikely. Or Oracle may
have been rebuilding the table logically rather than physically, meaning it
had to update the table indexing information for each page inserted, plus
the per-tuple indexing information for each *tuple* inserted (*that* would
slow you down more than a bit): if the table was not being imported from a
copy with absolutely identical characteristics, including page size and
indexed columns, at least *some* such index-building would be necessary,
though there are ways to cut it down drastically as long as the insertions
aren't being made to a table while other accesses to it may also be
occurring.
There are too many possibilities to make speculation here very useful. And
it's even possible that the HSZ *is* incompetent at handling data-streaming,
though I certainly wouldn't place that near the top of the probability list.
- bill
Regarding "merging" VMS and Tandem. An early Galaxy Roadmap showed several
planned methods for shared memory allocation. Memory fades but one was "Fast",
one "Contiguous", the one pertinent to this discussion is "Fault Tolerant".
EV7 supports lock-step instructions for Tandem.
So maybe two fault-tolerant pieces are there in the future for VMS engineering
to use at an OS level. One would imagine there is strong incentive to make
VMS even more bulletproof. Whether that (making VMS fault tolerant) is their
goal in the future design directions of VMS is fun to speculate
about. Given there is a slew of third-party apps that periodically
take out VMS (crash it), one can only imagine the hurdle seems very high
indeed. But maybe software and hardware firewalls between Galaxy instances
are the more reasonable goal in the design (speculation pure and simple).
Rob
In article <gv7cmsk96acppiutl...@4ax.com>, Art Rice <arice....@ue.itug.org> writes:
> On Thu, 06 Jul 2000 14:22:25 -0400, JF Mezei
> <jfmezei...@videotron.ca> wrote:
>
>>Art Rice wrote:
>>> Do you really think they are going to lower the price of Tandem
>>> Hardware and Software? This is historically one of the most expensive
>>> hardware and software combinations in existence. The Alpha will not
>>> make the machines any less expensive than MIPS and the port to Alpha
>>> has to be paid for somehow.
>>
>>No, but where VMS currently has an edge on Tandem, is in performance since VMS
>>benefits from ALPHA and 64 bit computing. Tandem has equal/better fault
>>tolerance to VMS, although VMS's clustering does offer a few things Tandem doesn't.
>>
>>But once Tandem has moved to Alpha, then VMS loses its performance edge. Why
>>should Compaq then continue to support both Tandem and VMS with very similar
>>customers and missions ?
>>
>>Wouldn't it be much better to somehow be able to combine both OS into a single
>>unified one ?
>
> Start coding... OpenVMS is AFAIK a "general purpose OS". NSK
> (Guardian) was designed from the ground up for OLTP. I mean
> everything. That's one of the reasons they were so far behind
> everyone else implementing TCP/IP. From the company's founding around 1974,
> EXPAND, X.25, and SNAX handled all the intersystem communication that was
> needed until sometime around 1990.
>
> The Tandem OS depends on a specific machine architecture. We are not
> just talking about a few redundant components. Nearly all components
> inside the box are redundant. (drive pairs can be split but why pay
> the high cost if you are going to destroy the fault tolerance?)
>
> One thing Tandem needs to fix ( if they want to expand further than
> web based apps) is terminal support. Pathway applications can only be
> defined as Tandem or 3270. If you don't tell it to support 3270 it
> won't even do that. (security through obscurity again.)
>
> If you have a friend in your organization or city that has access to
> TIM (the Tandem documentation CDs) have him/her show you the manuals
> on "Introduction to Tandem NonStop Systems", "Introduction to
> Pathway", and any of the server description manuals. For example: the
> K20000 Server Description Manual.
>
> If the two OSs are to be merged, it will probably take until 2010 to get
> the first Beta version ready to go.
>
> The nice part is that Oracle has been relegated to little more
> than an off-line data storage mechanism. In the world of real-time and
> reliability this client learned the hard way...Oracle can't play.
So what _do_ your clients use as a database system? Inquiring minds...
Jan
>Art Rice wrote:
>> everyone else implementing TCP/IP. From the company's founding around 1974,
>> EXPAND, X.25, and SNAX handled all the intersystem communication that was
>> needed until sometime around 1990.
>
>VMS has X.25 support. VMS has SCS, which is probably the equivalent of SNAX (or
>would DECnet be its equivalent?).
>
>> The Tandem OS depends on a specific machine architecture.
>
>Yes, but that architecture is being changed to Alpha-based products. Sure,
>those hardware products will differ from current Alpha products with hardware
>redundancy, but then again, so did the original VAXft machines differ from
>standard VAXes.
>
>> One thing Tandem needs to fix ( if they want to expand further than
>> web based apps) is terminal support.
>
>Yes, these old huge Tandem terminals were *interesting* to say the least.
>
>> If the two OSs are to be merged, it will probably take til 2010 to get
>> the first Beta version ready to go.
>
>However, if the major middlewares for Guardian were to be ported to VMS, and
>if the remainder of operating system services were ported to VMS, wouldn't it
>be possible to port all Tandem applications to VMS with ease comparable to
>that of the port from MIPS to Alpha?
I think you missed the architecture point. We are not talking about a
simple port like Linux x86 to SPARC or Alpha (which all use a
"similar" physical architecture apart from the CPU). Tandem's
physical architecture and software such as Pathway (now called
Pathway/TS, I believe; can't get used to the new names) provide over
50% of the fault tolerance.
>
>Remember that VMS did have fault tolerance in its VAX line, so that would have
>to be ported over to ALPHA.
>
>So, if VMS on alpha were to be made fault tolerant, and if the middleware on
>Guardian (base32 etc) were to be ported to VMS, wouldn't it be fairly easy to
>then port the Guardian applications to VMS and as a result provide a single
>high-availability platform that would have much greater market mindshare than
>both Guardian and VMS operating as small unknown quantities ?
--
>On Fri, 07 Jul 2000 18:45:13 GMT, Art Rice <arice....@ue.itug.org>
>wrote:
>
>> NSK(Guardian) was designed from the ground up for OLTP.
>> I mean everything.
>> [..SNIP...]
>>If you have a friend in your organization or city that has access to
>>TIM (the Tandem documentation CDs) have him/her show you the manuals
>>on "Introduction to Tandem NonStop Systems", "Introduction to
>>Pathway", and any of the server description manuals. For example: the
>>K20000 Server Description Manual.
>
>Do you know, offhand, any texts (such as might be had in
>a technical book store) that describe the Tandem architectures?
>or better still, any online resources?
>
>it's something I'd long wanted to read up on ...
I wish there were. One place to go first is the Tandem web site at
http://www.tandem.com; also
http://www.itug.org, and follow links from there.
Another place with lots of links to vendors and users is
http://www.madoreconsulting.com/ (Flash and non-Flash available). There
are links to users' TACL (Tandem Advanced Command Language) programs
out there.
There are some brief system explanations at Tandem's site along with
a bunch of white papers. Unfortunately, no online manuals. A Tandem
system is delivered with a license for TIM, which will allow one to
access the on-line manuals and softdocs (because all the manuals used
to fit on one CD and now take two or more). You need to have TIM to
access these files. And, as manuals go, few of them are really good
for training.
Because of the nature of the Tandem environment and its original
purpose (fault-tolerant OLTP), they were not something educational
institutions thought about investing in (if they even knew of their
existence), unlike the days when nearly every university used UNIX
or VAX (some used both). Tandem training is usually paid for by the
corporation owning the system, but many Tandem people have continued
their education with their own money. Currently Tandem classes are
over $2,000 for 5 days.
Not having readily available material in bookstores makes for a much
smaller work force, and Tandem gets to collect all the money for
education. Unfortunately, not having experience with the OS can make
it difficult to get one's foot in the door. I didn't even know that
Tandem existed before '89, when I started using one.