Our general feeling is that the worst choice is SGI with the MIPS
4000A chip. Judging by the very late release of the 4000A chip, MIPS
has clearly had problems with its development. Since the MIPS family
is an old family, and because of these problems, it seems unlikely
that we will ever see a MIPS 5000 chip or a MIPS 6000 chip.
On the other hand, the Alpha chip is a new architecture, designed
from the start with room to grow. The question is whether it is still
too young in terms of software support. Even DEC themselves admit
that there are problems with their OSF/1, but these problems should
certainly disappear in a couple of years. And how good, really, is
the Alpha chip?
We have the least knowledge of the HP PA-RISC-II chip. How does it
compare to the Alpha chip? What are the prospects of development for
it?
Any comments will be very, very helpful. Please answer by e-mail and
I'll make a summary if there is sufficient interest.
--
___ ___
/ o \ o \
Dov Grobgeld ( o o ) o |
The Weizmann Institute of Science, Israel \ o /o o /
"Where the tree of wisdom carries oranges" | | | |
_| |_ _| |_
What constitutes a "problem"?? Most of the complaints I've heard
internally about OSF/1 on the Alpha has to do less with robustness and
functionality than with details such as that the compiler is still not
generating the absolute fastest possible code for some cases, etc.
I believe that the first release of OSF/1 on the Alpha is going
to be Pretty Damn Good - a lot of the announcements from DEC and postings
from DEC folks here are being understated because, of course, it's not
going to be *perfect* right away.
We've had an Alpha here (locally) running a pre-release version
of OSF/1 for some time. It's stable and fast, even though the version
of the C compiler on it hardly does any optimization at all. That's an
example of the kind of "problem" that the first release (and subsequent
releases) will address - further polishing.
mjr.
I think you should review the trade literature, and ask your SGI
representative for more information about future CPUs. This is
always a good policy when contemplating a major upgrade.
Incidentally, an R6000 already exists, though it's not a product
of evolution from the R4000.
Allen
Keep in mind, DEC is in serious trouble. In the first fiscal quarter (ending
Sept 26), DEC lost $260.5 MILLION! I think you should buy DEC -- if you
don't, it may not be around much longer ;-)
--
David M. Senseman, Ph.D. | A man who has never gone to school may steal
(sens...@lonestar.utsa.edu) | from a freight car; but if he has a university
Division of Life Sciences | education, he may steal the whole railroad.
UT San Antonio | Theodore Roosevelt (1858-1919)
I fear your priorities are dead wrong. The most important part of
using computers is not the chip, not the company producing or selling
it, not the byte order, not the compilers, and not even the quality or
availability of other software, but the people who use the computers.
If they are unhappy with the computers they got from whoever pays for
the hardware, they will not get good results out of them. And you would not
believe the inventiveness of people when it comes to shifting the blame
onto the computer because "This machine just can't do it".
- Our general feeling is that the worst choice is SGI with the MIPS
- 4000A chip. Judging by the very late release of the 4000A chip, MIPS
- has clearly had problems with its development. Since the MIPS family
- is an old family, and because of these problems, it seems unlikely
- that we will ever see a MIPS 5000 chip or a MIPS 6000 chip.
If you are into buying futures, then even pork bellies would be a better
bet than computers. Evaluate the machines by all means, see if they do
NOW those things you consider important NOW, and identify whether they
could have a longer term use doing a basic job once the shine has faded.
Then base your decision not on what nice things you want, but on what
nasty things you want to avoid. Experience has shown that most of a
system's drawbacks stay with it for life, while the extra bonuses soon
get taken for granted.
Good things to avoid are high maintenance cost, aggressive engineering,
elaborate cooling requirements, expansion facilities, high discount schemes
for mass purchases ...
Good things to ignore when making a purchasing decision are those for
which you cannot sue for compliance, such as zero cycle branches, forward
looking architecture development, vendor commitment, industry leadership,
free lunches ...
Thomas
--
*** This is the operative statement, all previous statements are inoperative.
* email: cmaae47 @ ic.ac.uk (Thomas Sippel - Dau) (uk.ac.ic on Janet)
* voice: +44 71 589 5111 x4937 or 4934 (day), or +44 71 823 9497 (fax)
* snail: Imperial College of Science, Technology and Medicine
* The Center for Computing Services, Kensington SW7 2BX, Great Britain
Let's call it a "red flag" then. A new cpu architecture, new systems
designed around the architecture, a major new variant of an old operating
system, and software subsystems built to operate under it. All this adds
up to very understandable concern that this is not a family to bet one's
company's well being on for at least a year or so. Just common sense, no ?
greg pavlov
pav...@fstrf.org
Martin
And some knowledgible people I've spoken to who recently attended the
MIPS nondisclosure seminar came away very impressed.
-P.
--
************************f*u*cn*rd*ths*u*cn*gt*a*gd*jb************************
Peter S. Shenkin, Box 768 Havemeyer Hall, Dept. of Chemistry, Columbia Univ.,
New York, NY 10027; she...@still3.chem.columbia.edu; (212) 854-5143
*** In scenic New York: where the Third World is just a subway ride away ***
I agree with everything Thomas Sippel said and only want to add one point.
More and more, high performance computing is graphical computing. This is
certainly true in Biology and I expect in Chemical Physics as well.
If you look at Dec, Sun, and IBM, their graphics strategy has been
pretty uninspired. As an old PDP-11 user, I have tremendous respect for
Ken Olsen -- but let's face it, if there ever was a "left-brained"
person, it was Ken. Dec never had a coherent graphics vision and as far as I
can see, it still doesn't. Sun is just as bad -- witness the TACC-1,
the incompatible TACC-2, and the latest in the line, the incompatible
Freedom 3000 (Evans&Sutherland add-on board). And IBM? As someone
who still owns an IBM PC/RT, don't get me started...
I don't know much about HP except that I have been told that their
operating system isn't really UNIX. Maybe someone else might want to
comment?
When will these impressed, knowledgeable (according to spell(1)) people be able
to buy systems based on this non-disclosed information?
Before or after similarly impressed (one hopes :-) (knowledge state unknown)
people who have been to non-disclosures from Digital?
Regards
Richard Sharpe
These opinions be mine, Digital has its own!
> I don't know much about HP except that I have been told that their
> operating system isn't really UNIX. Maybe someone else might want to
> comment?
HP-UX is real UNIX. HP-UX isn't BSD UNIX, but BSD is no longer a
standard in demand (it was only ever a pseudo-standard in the first
place). Even Sun knows this: the OS they're going to push for the
rest of the decade (Solaris) is System V Release 4.
From my experience if you're writing new code, you're smart to
implement in Standard C making only calls to the Standard C library,
the Posix.1 interface, and industry standard graphics interfaces.
That kind of code is maximally portable to HP-UX, Solaris, OSF/1, AIX,
DG/UX, and SVR4 (hell, it'll probably even work under Windows/NT if
that ever materializes).
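To make that concrete, here is a minimal sketch (mine, not from any vendor
documentation) that sticks to Standard C plus a couple of Posix.1 calls and
should therefore build unchanged on any of the systems listed above:

    /* Uses only ISO C and POSIX.1 (no BSD or SysV extensions). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>      /* POSIX.1: getcwd(), sysconf() */

    int main(void)
    {
        char buf[1024];
        long ticks = sysconf(_SC_CLK_TCK);   /* POSIX.1 system query */

        if (getcwd(buf, sizeof buf) == NULL) {   /* POSIX.1 call */
            perror("getcwd");
            return EXIT_FAILURE;
        }
        printf("cwd = %s, clock ticks/sec = %ld\n", buf, ticks);
        return EXIT_SUCCESS;
    }

The porting pain starts as soon as you reach past this into BSD-isms,
SysV IPC, or vendor-specific ioctls.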
--
fr...@centerline.com "So what we've decided to do is set you up in
uunet!centerline!franl Cicely, situated in an area that we Alaskans
(USA) 617-498-3255 refer to as The Alaskan Riviera."
A few other questions to ask:
o which manufacturer has a good track record on implementing a broad
range of hardware from workstation to near-supercomputer?
o which one has a good track record on multiprocessor systems including
good programmer support and development tools?
o which manufacturer has a conservative approach to standard benchmarks
such as SPEC, rather than engineering benchmark engines that disappoint
on real problems?
On all three points I rate SGI highly. On the last, I have 2 interesting
examples. The R3000 Indigo according to SPECmarks should be about the
same speed as a SPARCstation 2. On real problems, in my experience, it is
about twice as fast as the SPARC machine. In a recent comparative
evaluation in which I gave a little help, an R4000 Indigo was only
about 1.3 times slower than a high-end IBM RS/6000 system on a floating
point-intensive problem, despite the fact that the IBM's SPEC float was
more like twice as good.
I am somewhat skeptical about how well DEC will implement MP Alpha
machines. I remember a case a few years back when a multiprocessor VAX
had to have a processor removed for maintenance and it was _faster_
without the extra processor! Of course things could have changed since
then...
--
Philip Machanick
Computer Science Dept, Univ of the Witwatersrand, 2050 Wits, South Africa
phi...@concave.cs.wits.ac.za phone: 27 (11) 716-3759 fax: 339-7965
If anyone goes to a "MIPS _disclosure_ seminar" please report.
It's real, it's unix and it's **mighty** idiosyncratic! It's easy
to port code off HP-UX, but porting onto it was considered difficult.
The only machine I've considered harder to port to was the honeywell
dps-6, with its 32-bit integer pointers and 48-bit character pointers.
--dave (who used to work for honeywell,
but now just says ``no bull'') c-b
--
David Collier-Brown, | dav...@CCS.YorkU.CA | lethe!dave
72 Abitibi Ave., |
Willowdale, Ontario, | York Postmaster and
CANADA. 416-223-8968 | occasional sendfail(8) consultant.
>I don't know much about HP except that I have been told that their
>operating system isn't really UNIX. Maybe someone else might want to
>comment?
HP makes great graphics equipment, but they don't know why. It's
kind of an engineering exercise, like the moon shots. They all work
to a purpose, then say, "OK, now what?". HP desperately needs to
get someone who understands what graphics are FOR working in their
tech labs. Plus, a flight simulator would sell >$1M worth of
graphics tops, but staid HP doesn't do it! There are customers out
there who WANT to do graphics and NO engineering! Games which prove
capability aren't anathema to them...
Tony Burzio
Arete Associates
San Diego, CA
>sens...@ricky.brainlab.utsa.edu (David M. Senseman) writes:
>> I don't know much about HP except that I have been told that their
>> operating system isn't really UNIX. Maybe someone else might want to
>> comment?
>HP-UX is real UNIX. HP-UX isn't BSD UNIX, but BSD is no longer a
>standard in demand (it was only ever a pseudo-standard in the first
>place). Even Sun knows this: the OS they're going to push for the
>rest of the decade (Solaris) is System V Release 4.
ack! BSD is still in demand, it's just that so few are still producing
it. I don't care if that doesn't make sense, name 10 major decisions in the
computer world in the last couple of years that _have_ made sense!
HP-UX is indeed "real" unix, just not terribly stable unix (from what
I've seen...(brace for flames...)) I must say, it is getting better.
>From my experience if you're writing new code, you're smart to
>implement in Standard C making only calls to the Standard C library,
>the Posix.1 interface, and industry standard graphics interfaces.
>That kind of code is maximally portable to HP-UX, Solaris, OSF/1, AIX,
>DG/UX, and SVR4 (hell, it'll probably even work under Windows/NT if
>that ever materializes).
>--
>fr...@centerline.com "So what we've decided to do is set you up in
>uunet!centerline!franl Cicely, situated in an area that we Alaskans
>(USA) 617-498-3255 refer to as The Alaskan Riviera."
well, right on, but how relevant is it?
My 2bits on arch wars:
Dec Alpha: isn't in general availability yet...OSF/1 certainly is only in
advanced beta, whatever that means.
IBM RS/6000: Nice floating point performance. Expensive.
Sun: nobody beats them in their low end pricing. Excellent quality control
(best in business?) & technical support system.
4.1.3 works real good, Solaris doesn't. 40mhz sparcs might start volume
shipment within the next month, which puts it only 5 months late (from
announcement shipping goals).
SGI: incredible price/performance in midrange. Reasonable OS, & terrific
support from corp. Company hasn't grown big enough to forget that customers
are an asset, not a pain. nobody touches them on mid to high end graphics.
Also, they make the best workgroup servers in the industry. can you say
symmetric multiprocessing? No one else can (uh, maybe sequent & solbourne).
HP: good on benchmarks & single issue workstations. mid-high price range.
current os is still immature. 7100 series arch is simply refinement of
older arch.
so, since money dictates, here are my favorites by $:
workstations:
$0 - $7k = Sun
$7k - $25k = SGI
$25k - $40k = HP/SGI
servers:
$0 - $30k = Sun
$30k - 75k = SGI
well, challenge me if you want me to support any of the above opinions.
-mike
ps: my posting header might be screwed, mail me at wils...@llnl.gov if
it is.
>so, since money dictates, here are my favorites by $:
>workstations:
>$0 - $7k = Sun
>$7k - $25k = SGI
>$25k - $40k = HP/SGI
>servers:
>$0 - $30k = Sun
>$30k - 75k = SGI
Ok, forgot 1 important thing. Above pricing reflects std educational pricing
from the various makers.
>ps: my posting header might be screwed, mail me at wils...@llnl.gov if
>it is.
it is screwed (sigh)...time to recompile.
-mike
I don't reagard it a "real" UNIX, then again I wouldn't buy a "real" UNIX,
1970s software technology is not something I would want to buy today.
Getting caught up in the "pure" UNIX war will lead you to restrict yourself to
"pure" SVR4 implementations, in the mainstream camp *only* SUN have gone for
this. That in my view does not make it much of a "standard".
If a vendor decides to do something about the crass inadequacies of UNIX we
should give them three cheers, not start a flame war about how the DIRECTORY
command *must* forever and ever be called ls because that is what the great tin
pot Gods who wrote UNIX thought was a nice, clear name for it.
The most threatening thing I see in computing today is the "we have found the
answer, all heretics will perish" attitude. I have an awful lot of experience
in computing, I have used six or seven operating systems and I have even
written one. UNIX in my view is an abomination; it has serious difficulties
that could have been fixed quite easily, but I now realize nobody ever will.
At the moment I use a VMS box, I do so because I find that I do not spend my
time having to think in the "UNIX" mentality that centers around kludges. I do
not have to tolerate a help system that begins its insults of the user by being
invoked with "man".
Apollo in my view were the only UNIX vendor to realize that they had to put
work into the basic operating system. They had ACLs, shared libraries and many
other essential features five years ago.
What I find disgusting about UNIX is that it has *never* grown any operating
system extensions of its own; all the creative work is derived from VMS,
Multics and the operating systems it killed.
> From my experience if you're writing new code, you're smart to
> implement in Standard C making only calls to the Standard C library,
> the Posix.1 interface, and industry standard graphics interfaces.
> That kind of code is maximally portable to HP-UX, Solaris, OSF/1, AIX,
> DG/UX, and SVR4 (hell, it'll probably even work under Windows/NT if
> that ever materializes).
If you do that properly it *will* work under windows NT, and under VMS too.
That is what POSIX is all about. From now on UNIX will no longer be able to
exist as a lowest common denominator system, the pressure will be to implement
the higher POSIX levels such as threads.
In the long term basing your choice of hardware on the operating system will no
longer be necessary, for VMS to survive against Windows NT, it will have to be
ported to other platforms. For SUN to continue to sell hardware they will have
to support Windows NT as will HP.
Phill Hallam-Baker
I went to some damn impressive non-disclosure meetings, hell the chip was
amazing, did everything we need, ten times faster than the current product.
They were for the inmos T9000, ever seen one?
Point is, don't believe the non-disclosure until you have seen the iron. I
have seen an alpha and used it, it worked, was very nice. I have used gear for
quite some time that uses the same process. I am as confident as anyone could
be that they will in fact make it to market with the beast in reasonable time.
Projections of the future of the Mips chip are rather clouded by the initial
R4000 release; the consensus here is that to be keeping up with the game MIPS
should have put a bit more beef into it. Doubtless, a new and improved MIPS
chip will appear, but why non-disclosure?
The only non-disclosure stuff I ever have any interest in is architecture;
providing backwards compatibility costs, and you can see the effect in the tail
end of the VAX line. When someone breaks the backwards compatibility (at least at
the binary level) they can get a boost.
If Mips are going to break binary compatibility then I would be interested to
hear in advance that they were planning it, the market they intended to
exploit, the reasoning behind their methodology etc. This would allow me to go
back and plan new systems.
I can't see any point in going to be told that SGI hope to bring out a faster
machine, except as an ego boost (gosh they let little *me* into a
*non-disclosure* meeting, how important I must be) or to stock up on ethanol
and cheesy snacklettes.
If you want to predict what a company is going to do:-
1) Read their balance sheet,
Forget profit and loss, that is irrelevant. What matters is cash flow,
assets etc. DEC are shedding staff in a reorganization, companies will always
report bad figures in that situation. The basic business is sound, DEC could
always just become a MIPS or SPARC chip pusher, they have no ideological
hangups about their market that would prevent them from doing what is
necessary.
2) Look at their major market,
Don't flatter yourself that your needs represent the mainstream market.
Unless you are doing very boring computing indeed, you are in a specialist
market, some companies specialize in niche markets, others go for broader
markets. If you have a company oriented around commercial work they are going
to be after COBOL, databases, IBM-PC emulation and the like. Real time control
provides another set of constraints, as does computer aided design.
DEC are making a loss but have huge cash flow and huge reserves. Plus a goodly
number of banks are critically dependent on them. They simply cannot afford for
DEC to go under because they would follow. This is known as the "IBM effect".
SGI have a niche market in graphics but have mushroomed into a very large
corporation within the last three years. Their main problem is that their niche
market is volatile, it swings widely as the market changes.
HP have the fastest box you can actually buy off the shelf today. This may
change tomorrow, but HP are likely to stay in the front rank for a while.
SUN have gone into "we know better than you do what you want" mode. This was
fine for DEC and IBM, but these companies now realize they can't survive with
that mindset. SUN may report profits, but on what basis? How fast are they
writing off the capital value of the SPARC design? How much are they valuing
"goodwill" and "intangible assets" at? Sun are failing to keep up, unless
they can get back in the race soon they will go under. The PC market is
producing machines that push into SPARC performance territory, but to buy a PC
is simple, just pick up the phone, dial a number and give your credit card
number. Try buying a SUN like that. Look in the current issue of byte, not a
single SUN dealer giving prices on the page for product. Remember that
technical competition is not the only kind, ease of purchase and use also
count.
CRAY may make a (re)entry into the big boys league. They have had a practice
run and may take the high end of the w/s market with multiprocessor MIMD boxes.
--
Phill Hallam-Baker
The problem with UNIX is that in 1972 a couple of folks looked at the world
as it then existed, and realised that by making a few simplifying assumptions
that limited the system to the timeshared model, they could develop a very
simple high-level programming model that covered basically the entire set of
applications they were interested in.
This has the advantage that as long as you were interested in that basic
environment, you could run a wide variety of alien operating systems and
present the same interface to the programmer.
Where does the "problem" bit come in? The problem is that to get past the
timeshared environment, you have to either:
1. Make the basic model a little more complex, or
2. Add a few new interfaces to deal with new environments,
and make them more or less well-integrated to the basic
model, or
3. Implement it on top of a more complex model, with an escape
mechanism.
Most people have done 2 or 3. System V IPC, Berkeley sockets, and a lot of
the stuff under ioctl() are examples of 2. Mach, Amoeba, and other UNIX
clones are examples of 3. The problem is, with a few changes in the interface
you can provide a model that works well for real-time and distributed
environments. The Plan 9 model covers the distributed world pretty well,
but does little for real-time. For real-time, you need to jack up the system
call interface and provide a general asynchronous interface to it.
The result would look kind of like sockets code, but it'd be more general:
Let's introduce an opaque object like a file descriptor called a TOKEN.
A TOKEN represents a system call in progress, a TOKSET represents a group
of tokens to be tested for completion or waited on.
ttymon()
{
    TOKSET ttyset;          /* tokens we are currently waiting on */
    TOKSET waitset;         /* tokens that completed this pass */
    TOKEN ttys[NTTY];       /* outstanding async_open()s, one per line */
    TOKEN gettys[NTTY];     /* outstanding gettys, one per line */
    int i;

    ttyset = new_tokset();

    /* Start an asynchronous open on every line. */
    for(i = 0; i < NTTY; i++) {
        if(ttys[i] = async_open(ttyname(i), O_RDWR))
            add_token(ttyset, ttys[i]);
        else
            alert(DEADTTY, i);
        gettys[i] = 0;
    }

    /* Wait for any outstanding token to complete. */
    while(waitset = async_wait(ttyset)) {
        for(i = 0; i < NTTY; i++) {
            /* An open completed: hand the line to a getty. */
            if(ttys[i] && token_in(ttys[i], waitset)) {
                remove_token(ttyset, ttys[i]);
                if(gettys[i] = spawn_getty(i, result(ttys[i])))
                    add_token(ttyset, gettys[i]);
                else
                    alert(DEADGETTY, i);
                ttys[i] = 0;
            }
            /* A getty finished: log it and re-open the line. */
            if(gettys[i] && token_in(gettys[i], waitset)) {
                remove_token(ttyset, gettys[i]);
                log_result(i, result(gettys[i]));
                if(ttys[i] = async_open(ttyname(i), O_RDWR))
                    add_token(ttyset, ttys[i]);
                else
                    alert(DEADTTY, i);
                gettys[i] = 0;
            }
        }
    }
}
> At the moment I use a VMS box, I do so because I find that I do not spend my
> time having to think in the "UNIX" mentality that centers around kludges. I do
> not have to tolerate a help system that begins its insults of the user by
> being invoked with "man".
That's because it's not a help system. It's an online manual. If you want a
help system, I'm afraid that nobody's written one for UNIX. If there's a
demand for it, you can be sure that you'll get plenty of buyers when you
write one and sell it at a reasonable price.
> Apollo in my view were the only UNIX vendor
Apollo were another group 3 vendor, alas.
> What I find disgusting about UNIX is that it has *never* grown any operating
> system extensions of its own, all the creative work is derrived from VMS,
> Multics and the operating systems it killed.
That's because UNIX is all about simplifying the working model. Not extending
it. POSIX *is* UNIX, and if you implement POSIX you have implemented a UNIX
system, for all practical purposes. It's just got some fancy terminology tacked
on so DEC and others can save face. Because, for all its warts, UNIX is still
a *GOOD* common denominator for a wide variety of application domains.
Certainly it's a much better API than VMS or WNT provide.
--
%Peter da Silva/77487-5012 USA/+1 713 274 5180/Have you hugged your wolf today?
/D{def}def/I{72 mul}D/L{lineto}D/C{curveto}D/F{0 562 moveto 180 576 324 648 396
736 C 432 736 L 482 670 518 634 612 612 C}D/G{setgray}D .75 G F 612 792 L 0 792
L fill 1 G 324 720 24 0 360 arc fill 0 G 3 setlinewidth F stroke showpage % 100
> POSIX *is* UNIX, and if you implement POSIX you have implemented a UNIX
>system, for all practical purposes. It's just got some fancy terminology tacked
>on so DEC and others can save face. Because, for all its warts, UNIX is still
>a *GOOD* common denominator for a wide variety of application domains.
>
So running POSIX on OpenVMS means I've really got UNIX? Great!
>Certainly it's a much better API than VMS or WNT provide.
What standard of reference are you using?
>--
>%Peter da Silva/77487-5012 USA/+1 713 274 5180/Have you hugged your wolf today?
PJDM
--
Peter Mayne | My statements, not Digital's.
Digital Equipment Corporation |
Canberra, ACT, Australia | "AXP!": Bill the Cat
> Hmm, probably not. Microsoft seem to think that X11 (ie.
>one of the "industry standard graphics interfaces" doesn't exist.
>In fact Microsofts idea of networking seems to be restricted to
>that of desktop PCs connected to file servers. Having seperate
>"compute" and "display" servers is not something they apear to
>admit is workable or desirable.
>
> Graeme Gill
>
On the contrary, Microsoft are encouraging the use of compute servers via the
use of RPCs. Hermes is a good example of this. ODBC is another one.
Microsoft may (repeat may) be right *if* they think that display servers aren't
desirable. For the price of an X display server, you can buy a box sufficient
to run Windows, or even NT, and that box can do sufficient local work (eg word
processing, calendars, games, what have you) that the compute server can be
downsized even more for further cost savings. And the display is snappier.
X emphasises display serving over compute serving, NT is the other way round.
Then why aren't they hedging their bets by producing an R4000 box? Or have
I missed something?
>2) Look at their major market,
> Don't flatter yourself that your needs represent the mainstream market.
>Unless you are doing very boring computing indeed, you are in a specialist
>market, some companies specialize in niche markets, others go for broader
>markets. If you have a company oriented around commercial work they are going
>to be after COBOL, databases, IBM-PC emulation and the like. Real time control
>provides another set of constraints, as does computer aided design.
>
>
>DEC are making a loss but have huge cash flow and huge reserves. Plus a goodly
>number of banks are critically dependent on them. They simply cannot afford for
>DEC to go under because they would follow. This is known as the "IBM effect".
Well I don't give much for IBM's chances either. They have split into 14
companies perhaps in the hope that some parts can be salvaged.
>SGI have a niche market in graphics but have mushroomed into a very large
>corporation within the last three years. Their main problem is that their niche
>market is volatile, it swings widely as the market changes.
SGI's strength is covering a wide range of markets. Graphics is important
because the performance issues it presents eventually float up in more
mainstream computing (memory bandwidth, floating point performance). If
they could drop to the sub-$5000 market, they'd cover everything from
high-end PCs to the lower end of super computers. Another critical
strength is in multiprocessor systems, an area where most competition
is still finding its feet.
But going through RPCs requires that I install a dedicated compute
client/display server at my box. The nice thing about X is that I can
use a program that's installed on the DecStation or the Sun and still
work at my Apollo, and the only effort on my side is setting the
DISPLAY.
X is usually suboptimal in terms of net bandwidth consumption and
distribution of work, but in most cases this does not matter.
|> For the price of an X display server, you can buy a box sufficient
|> to run Windows, or even NT, and that box can do sufficient local work (eg word
|> processing, calendars, games, what have you) that the compute server can be
|> downsized even more for further cost savings. And the display is snappier.
For the price of an X-terminal I can buy a box, but it has no screen
(at least none that is worth talking about). It also has no disk, so say
bye bye to the net bandwidth advantages.
- anton
--
M. Anton Ertl Some things have to be seen to be believed
an...@mips.complang.tuwien.ac.at Most things have to be believed to be seen
For a benchmark on the Snake series for Parallelogram I did a quick
port of some Sun C code in about 30 minutes. A couple of library calls
were missing on the HP, but, what the heck---I knew my code and wrote them
out within 30 mins. This was many months and at least one major OS
version ago.
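The post doesn't say which calls were missing, but as an illustration the
classic offenders are the BSD bzero()/bcopy() pair, which take only a few
lines of Standard C to write out:

    /* Illustrative only: typical BSD-isms absent from older HP-UX libc,
     * rewritten in terms of Standard C. */
    #include <string.h>
    #include <stddef.h>

    void bzero(void *s, size_t n)
    {
        memset(s, 0, n);
    }

    void bcopy(const void *src, void *dst, size_t n)
    {
        memmove(dst, src, n);   /* bcopy is defined to handle overlap */
    }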
Damon
--
Damon Hart-Davis Internet: d...@exnet.co.uk, d...@hd.org
Public-access UNIX (Suns), news and mail for UK#5 per month. FIRST MONTH FREE.
[1.35] Cheap Sun eqpt. UUCP news/mail feeds. Tel/Fax: +44 81 755 0077.
Which division are the 6000 people being laid off from?
>4.1.3 works real good, Solaris doesn't. 40mhz sparcs might start volume
>shipment within the next month which puts it only 5 months late (from
>announcement shipping goals).
That's ok, I waited for the HP 68040 processor a similar
period :-)
>SGI: incredible price/performance in midrange. Reasonable OS, & terrific
>support from corp. Company hasn't grown big enough to forget that customers
>are an asset, not a pain. nobody touches them on mid to high end graphics.
SGI has more effective use of 3D graphics because it's easier to use.
For example, you can create an object in a window with many fewer
calls than with an HP. In addition, once drawn the window can be
transformed at will with the mouse. On the HP, it's all up to the
programmer to handle the many and often complicated 3D calls. HP
has faster graphics, but they'll never win over SGI.
Even though a 730 will toast an 8 processor SGI!
These must be SPEC92 numbers. Are these numbers published by SGI or MIPS?
Are they estimates, simulated, or measured?
The only numbers I have seen are 95 SPECint89, 126 SPECfp89, 113 SPECmark89
all based on simulations by MIPS.
/d
|> Dec Alpha: isn't in general availability yet...OSF/1 certainly is only in
|> advanced beta, whatever that means.
For the record...
General availability for Alpha systems (DEC 3000 Model 400, etc.) is here.
They began shipping last week with VMS.
I don't know what "advanced beta" means either but, while general availability
of DEC OSF/1 V1.2 (Alpha) is scheduled for March, several hundred Alpha OSF/1
systems have now begun shipping to software developers under an early release
program. Also, Version 1.0 of DEC OSF/1 was released last March.
OPEN SYSTEMS TODAY reviewed it in their May 11 issue and praised its stability
and speed, even though V1.0 was only a developer's release for some
DECstations. (They later compared its performance with Solaris 1.0 and 2.0 on
the same speed processors and it proved much faster than either the BSD or
SVR4 versions of Solaris on the majority of the benchmarks; Sept 21 issue,
pg. 73.) And V1.2 has performance improvements over V1.0.
Steve
Unix Product Marketing ---> you may have guessed this ;-)
mcin...@decvax.dec.com
There are SPECfp89 and SPECint89 from the old SPEC suite, and there are
SPECfp92 and SPECint92 from the current SPEC suite. The old metrics may
*not* be compared with the current ones.
SPECint89 and SPECint92 are fairly close, within 10% on machines for
which I have data, for all the major manufacturers. SPECfp89 and
SPECfp92 differ by at least 20% to as much as 75% for the same
machine. (SPECfp89 is higher of course due to the 030.matrix300
inflation.) That same large floating point variation makes SPECmark89
a less reliable performance indicator, because SPECfp89 comprises such
a large part of SPECmark89.
View any claims of "SPECfp" and "SPECint" the same way you would view
claims by a car salesman of "EPA mileage" without specifying city or
highway.
As I recall, SGI's announcement was careful to properly specify
SPECfp89 and SPECint89. That 89 suffix is very meaningful.
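To see how much a single inflated component can move that kind of figure,
here is a toy calculation (mine, not SPEC's; the ratios are made up).
SPECmark89 is a geometric mean of ten benchmark ratios, so one
matrix300-style outlier boosted 10x lifts the whole mark by 10^(1/10),
about 26%:

    /* Toy illustration of geometric-mean inflation; link with -lm. */
    #include <stdio.h>
    #include <math.h>

    static double geomean(const double *r, int n)
    {
        double log_sum = 0.0;
        for (int i = 0; i < n; i++)
            log_sum += log(r[i]);
        return exp(log_sum / n);
    }

    int main(void)
    {
        double honest[10], inflated[10];
        for (int i = 0; i < 10; i++)
            honest[i] = inflated[i] = 20.0;  /* equal ratios for simplicity */
        inflated[0] *= 10.0;                 /* one matrix300-style outlier */

        printf("honest mean   = %.1f\n", geomean(honest, 10));
        printf("inflated mean = %.1f\n", geomean(inflated, 10));
        return 0;
    }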
---
Walter Bays walte...@eng.sun.com
Sun Microsystems, 2550 Garcia Ave., MTV15-404, Mountain View, CA 94043
(415) 336-3689 SPEC Steering Committee Chairman FAX (415) 968-4873
Ack! X Window system calls and Losedows NT window calls are incompatible
at best. (Yeah, right! M$ is the only computer company that dares
introducing an allegedly "new_from_scratch" OS that's not multi-user
and a new windowing system that's not networked.)
--
Volker Herminghaus-Shirai (v...@rhein-main.de)
Looks good on the outside, but -
intel inside
>> The 75 Mhz version of the R4000 (I believe it is called a R4400) is out; it
>> gives about 90 SPECfp and 90 SPECint, which is pretty respectable. A
>
>The only numbers I have seen are 95 SPECint89, 126 SPECfp89, 113 SPECmark89
>all based on simulations by MIPS.
>
Just to clear it up, the R4400 is a completely new chip which runs
at either 50, 67, or 75Mhz. The above ratings are for the 75Mhz
version. The R4400 is more than just a 75Mhz version of the R4000.
It features 32K of on-chip primary cache and can control up to
4 MB of secondary cache; it also has a write buffer which allows
the CPU to overlap execution of instructions with writes to
memory.
--
______ _ _
_/ ____) _( )_ _( )_ Scott Klosterman
_ (____ ____ ___ (__ __) (__ __) sc...@sgisupport.lerc.nasa.gov
\____ \_/ __) / \ ( ) ___( ) _
____) _ (__ ( O ) ( \_/ ___ \_/ )
(______/ \____) \___/ \___/ \___/
David,
Check out the Sept '92 issue of Computer Graphics World. The "Output"
column contains a good, concise overview of Digital's graphics strategy.
The net of it is consistent 2D APIs (X, GKS, Display Postscript) and 3D
APIs (GKS-3D, PHIGS, PEXlib, OpenGL) across both Unix and VMS.
--
Russ Jones
UNIX Marketing
Digital Equipment Corporation, Palo Alto, CA
All opinions expressed are mine, and not necessarily those of my employer
I don't think it's a question of whether they COULD ship a $5000 machine,
and it's certainly not a question of whether anyone would buy one.. the
thing I wonder about is whether they can do it SOON ENOUGH. Sun has already
entered the sub-$5000 market, which makes it a safe bet that other vendors
will follow... SGI still has other strengths over them (3D graphics being
their strongest), but what happens when someone buys a bunch of Suns
because SGI can't compete with Sun on price? Isn't that a bunch of sales
that SGI didn't make? (This comparison is only valid assuming that the
machines are for "general" use; obviously a Sun won't run Insight.)
It runs deeper than that with me, though; I have found SGI to have the
most bizarre pricing "strategy" I've ever seen. Example: For a little
more than the price of a 4D/20 -> 4D/35 upgrade, I can buy an Indigo and
use the 4D/20 as a rather large doorstop with 3D rendering capability.
Then I can put another 16MB of RAM in the Indigo with what I saved by
not paying the FSE to install the 4D/35 upgrade. For what it would cost
to upgrade all the 4D/20 and 4D/25 machines we have here, we could buy
a Crimson and use the old machines as a stand for the Crimson; that way
I could stub my toes on old technology when I come in to work at night.
When the machines are no longer supported by SGI, I will be able to trade
them in against the price of new machines. This is happening now with
certain old SGI machines (3030 if I'm not mistaken). Only 4 years away...
Disclaimer: Bitter sarcasm from a slightly embittered sysadmin.
Dileep's question is asking for clarification on Martin's SPEC numbers. At
the chip announce, MIPS gave simulated performance numbers for an aggressive
R4400 machine. (I say "aggressive" because of a post Charlie Price made here
saying that the simulated system had about as good a memory subsystem as
could be built.) Those numbers were 95 SPECint89 and 126 SPECfp89. Any other
numbers (especially from real hardware) are of great interest.
--
Zalman Stern zal...@adobe.com (415) 962 3824
Adobe Systems, 1585 Charleston Rd., POB 7900, Mountain View, CA 94039-7900
"Yeah. Ask 'em if they'll upgrade my shifters too." Bill Watterson
Can Irix? As far as I know Irix kernel doesn't use multiple processors
symmetrically. A fellow here at HUT has been doing a lot of
benchmarking on various hardware and operating systems and some tests
showed that on SGI's multiprocessor workstations network throughput
can become a bottleneck in some situations because networking is
handled by one processor. Is this fixed in newer versions of Irix?
OSF should have Mach at the bottom so one would expect at least a
little symmetry - right?
How's Solaris when it comes to SMP?
(what would be the most appropriate newsgroup?)
--
Antti P Miettinen / a...@kata.hut.fi
If this is the same comparison (that dec made) posted earlier to the solaris
group then I can tell it was just *slightly* biased. The machines benchmarked
were not equal in cpu power.
jussi
--
============================================================================
Jussi Eloranta Internet(/Bitnet): ! The ultimate trip is
University of Jyvaskyla, Jussi.E...@jyu.fi ! death.
Finland [130.234.0.1] ! -- Jim Morrison
Dave McAllister
shindo.esd.sgi.com
In article <APM.92De...@kikka.hut.fi>, a...@kikka.hut.fi (Antti P Miettinen) writes:
--
*SHINDO - the ART of the MIND*
*Fortune for the day*
| My 2bits on arch wars:
And I won't repeat it all here. As far as I can see, with one
exception "parts is parts" and the o/s is a bigger selling point (or
drawback) than the vendor. I admin BSF, Ultrix, SunOS, V.4/386, Xenix,
SCO UNIX, V.3.2, Ultrix (MIPS and VAX), HP/UX and a little AIX, and the
biggest problem I have with some of them is the stuff the vendors put in
software.
I'd love V.4 instead of HP/UX, but that's my opinion.
--
bill davidsen, GE Corp. R&D Center; Box 8; Schenectady NY 12345
Keyboard controller has been disabled, press F1 to continue.
HP did an early release of OSF/1 once too, and it sucked soooo bad
it was pulled immediately. Apparently OSF/1 has some real problems,
probably down in the kernel where it was too hard to fix...
>OPEN SYSTEMS TODAY reviewed it in their May 11 issue and praised its
>stability and speed, even though V1.0 was only a developer's release for
>some DECstations. (They later
Oh great, that's the rag that includes VMS as an "open system?"
UNIX is bad enough, even decades into development. What about the
compilers? Even SGI, a stable and venerable company, had a MAJOR
bug in its FORTRAN compiler that we found which messed up do loops
on some occasions (it's been fixed). How could DEC find all the
defects before releasing product, with all the pressure from
management to make up for their bungling? Naw, OSF/1 isn't ready yet...
>Steve
>Unix Product Marketing ---> you may have guessed this ;-)
Will OSF/1 be available on the DECstations?
It seems a shame that this discussion is overlooking some of the best
symmetrical multiprocessors in the industry.
Data General's Aviion line ranges from one to eight processors, with
memory and peripheral configurations to match. The DG/UX kernel is
fully symmetrical with respect to memory, processors, and I/O. That means
that there are NO bottlenecks of the "one processor handles...." variety.
And, yes, as one would expect of a well-engineered SMP, network
throughput (and just about everything else) scales up as processors
are added.
"Multiprocessor aspects of the DG/UX kernel" by Michael Kelley in the
Winter '89 Usenix conference proceedings is a good technical overview,
although obviously we haven't been standing still in the three years since
then.
--
----------------------------------------------------------------------
Eric Hamilton +1 919 248 6172
Data General Corporation hami...@dg-rtp.rtp.dg.com
62 Alexander Drive ...!mcnc!rti!xyzzy!hamilton
Research Triangle Park, NC 27709, USA
|> I don't know what "advanced beta" means either but, while general availability
|> of DEC OSF/1 V1.2 (Alpha) is scheduled for March, several hundred Alpha OSF/1
|> systems have now begun shipping to software developers under an early release
|> program. Also, Version 1.0 of DEC OSF/1 was released last March.
|> OPEN SYSTEMS TODAY reviewed it in their May 11 issue and praised its
|> stability and speed, even though V1.0 was only a developer's release for
|> some DECstations. (They
I.e. Joe Blow user can't buy it whereas they can buy Solaris
1.1, 2.0 and now 2.1. They can also get HP-UX 8.x and 9.x(?).
Again, vapor city UNIX wise.
|> later compared its performance with Solaris 1.0 and 2.0 on the same speed
|> processors and it proved much faster than either the BSD or SVR4 versions
|> of Solaris on the majority of the benchmarks; Sept 21 issue, pg. 73.) And
|> V1.2 has performance improvements over V1.0.
|>
Ya, but people couldn't BUY IT. i.e. vaporware.
Now that both are out maybe lucky new owners can post OSF/1
and Solaris 2.1 numbers? Both have performance improvements
and I assume the new OSF/1 can be purchased by Joe User along
with any end user apps (s)he may need?
-Rob
> So running POSIX on OpenVMS means I've really got UNIX? Great!
If it's complete enough for your porpoises it's effectively UNIX. I haven't
looked at it, so I don't know if I'd classify it as a high quality UNIX, but
yes...
Just because something's effectively UNIX doesn't mean it's a high quality
implementation. You can run Minix 1.x on a vanilla IBM/PC with two floppies,
but I wouldn't want to try to use Emacs on it.
> >Certainly it's a much better API than VMS or WNT provide.
> What standard of reference are you using?
Complexity (bad), uniformity (good), information hiding (good) ... all the same
criteria I use to evaluate a programming language or application.
--
%Peter da Silva/77487-5012 USA/+1 713 274 5180/Have you hugged your wolf today?
>> Also, Version 1.0 of DEC OSF/1 was released last March. OPEN
>> SYSTEMS TODAY reviewed it in their May 11 issue and praised its
>> stability and speed, even though V1.0 was only a developer's
>> release for some DECStations.
> I.e. Joe Blow user can't buy it whereas they can buy Solaris
> 1.1, 2.0 and now 2.1.
Actually, Joe Blow user can buy it and has been able to for several
months. The "developer's release" title refers more to its lack of
tuning and of final testing than to its availability.
david carlton
car...@husc.harvard.edu
I want you to MEMORIZE the collected poems of EDNA ST VINCENT
MILLAY.. BACKWARDS!!
Not too surprising. The 30 and 35 were done *after* the idea of Indigo
was well along, mainly as an upgrade path for 20 and 25 customers who
required the VME slot, or had other reasons for preferring the upgrade
path. Originally it wasn't as horrible a mechanical upgrade, but it
pushed past what the 20/25 chassis was designed for, and so the chassis
had to be modified to meet EMI and thermal requirements (which also
raised the cost). The 30 and 35 did ship sooner than Indigo, but I'm
pretty sure it was after Indigo was officially announced.
--
Let no one tell me that silence gives consent, | Dave Olson
because whoever is silent dissents. | Silicon Graphics, Inc.
Maria Isabel Barreno | ol...@sgi.com
>>OPEN SYSTEMS
>>TODAY . . .
>
>Oh great, that's the rag that includes VMS as an "open system?"
>
>Tony Burzio
>Arete Associates
>San Diego, CA
>
And what makes you think OpenVMS isn't open?
(Oh, oh, I've started one of those "what does open mean?" wars.)
Remember that any single TCP/IP request is necessarily single threaded.
You can't parallelize it, except the byte copies and checksumming, and
those are better handled by not doing byte copies and by doing
checksumming in hardware.
For the last year or two, network code in IRIX runs on any processor,
albeit for the duration of a single request or TCP state change no CPU
switching occurs and only one CPU at a time can do TCP stuff. The
current scheme can be thought of as two potentially nested monitors,
with some queuing to mitigate waiting for a monitor. Hardware
interrupts are still pointed to a single CPU.
Be careful using the Sequent or OSF model for "MP-izing" network code,
of adding (un)locking calls every few lines. The classic number of
instructions for TCP/IP from user space to the wire, excluding
checksumming, is on the order of 200 instructions. It takes a
non-trivial number of cycles to use one of the hardware semaphores in
the current Silicon Graphics MP systems. Use more than a very few
locks, or fail to have hardware support for locking, and you will find
a lot of your cycles are spent fiddling with locks. I bet that the OSF
or Sequent code suffers from this problem. IRIX's TCP/IP speed is
competitive.
Some customers have finally noticed that problem even in the current
grab-the-network-monitor-and-run style in IRIX 4.0. The cycles
available to non-network processes running on a system with an
unchanged, high network load have been reduced from previous releases.
It's the cost of making it possible to switch CPU's.
You can be politically-correct-MP, or you can be fast, but it is not
always possible and never easy to be both.
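As a rough illustration of the lock-count arithmetic (a sketch of mine, not
IRIX or OSF source; POSIX mutexes stand in for the hardware semaphores
mentioned above):

    #include <pthread.h>

    struct pkt { int hdr, len; };          /* stand-in for an mbuf chain */

    static pthread_mutex_t net_monitor = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t sock_lock   = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t ifq_lock    = PTHREAD_MUTEX_INITIALIZER;

    static void build_header(struct pkt *p)    { p->hdr = 1; }
    static void queue_to_driver(struct pkt *p) { p->len += p->hdr; }

    /* Coarse "grab-the-monitor-and-run": one lock pair covers the whole
     * short output path, so the semaphore cost is paid once per packet. */
    void output_coarse(struct pkt *p)
    {
        pthread_mutex_lock(&net_monitor);
        build_header(p);
        queue_to_driver(p);
        pthread_mutex_unlock(&net_monitor);
    }

    /* Fine-grained: a lock pair per shared structure; on a ~200-instruction
     * path the lock fiddling can rival the protocol work itself. */
    void output_fine(struct pkt *p)
    {
        pthread_mutex_lock(&sock_lock);
        build_header(p);
        pthread_mutex_unlock(&sock_lock);

        pthread_mutex_lock(&ifq_lock);
        queue_to_driver(p);
        pthread_mutex_unlock(&ifq_lock);
    }

    int main(void)
    {
        struct pkt p = { 0, 100 };
        output_coarse(&p);
        output_fine(&p);
        return 0;
    }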
Vernon Schryver, v...@sgi.com
As someone who rather recently invested quite a bit of my company's money
in DEC's MIPS-based systems, I find the above question to be very relevant.
Too late and after the fact, unfortunately....
greg pavlov
pav...@fstrf.org
Yes, but if you are to benchmark operating systems then the machines must
have equal cpu & I/O power.
> I am somewhat skeptical about how well DEC will implement MP Alpha
> machines. I remember a case a few years back when a multiprocessor VAX
> had to have a processor removed for maintenance and it was _faster_
> without the extra processor!
You must be thinking of DEC's first and most short-lived multi-processor
VAX, the 11-782. It was very asymmetric, and under the right load
conditions, it could indeed exhibit the behavior you describe. Please note
that this was all a long time ago.
> Of course things could have changed since then...
Yes, a lot! If you want to evaluate DEC's performance in multi-processor
design, look at current models such as the VAX 6000, 7000, 10000 series.
> In article <1992Nov30.1...@e2big.mko.dec.com>
> mcin...@unix.enet.dec.com (Steve McIntosh) writes:
>>OPEN SYSTEMS TODAY reviewed it in their May 11 issue
> Oh great, that's the rag that includes VMS as an "open system?"
You've got a problem with that? Consider the standards that VMS currently
supports: ANSI standards for terminals, printers and magtape, Ethernet,
X.25, X.400, X, Motif, TCP/IP, ISO/OSI, Posix. I reckon this makes VMS at
least as open as any other O/S, and more open than most.
But every shared memory machine suffers from the n+1 effect at some point. It is
the principal disadvantage of the shared memory system. For the 11-782, n was
1. For one of the Convex machines it was 16. For most machines it hovers in the
range of 4-6.
This is the big problem with producing a "high end" machine just by adding
processors: unless your communications bandwidth scales with the number of
processors, you will have a limit to the number of processors that work.
This is why point to point networks like the transputer, and switching networks
(which have a limit, but in the 100+ processor range), are so interesting.
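A toy model of that bus limit (my own numbers, purely for illustration):

    /* Each CPU wants `demand` MB/s from a shared bus that delivers `bus`
     * MB/s in total; useful speedup flattens once n * demand exceeds the
     * bus, which is the n+1 effect described above. */
    #include <stdio.h>

    int main(void)
    {
        const double bus = 200.0;      /* assumed shared-bus bandwidth, MB/s */
        const double demand = 40.0;    /* assumed per-CPU demand, MB/s */

        for (int n = 1; n <= 12; n++) {
            double supplied = n * demand < bus ? n * demand : bus;
            double speedup  = supplied / demand;   /* relative to one CPU */
            printf("%2d CPUs -> effective speedup %.1f\n", n, speedup);
        }
        return 0;
    }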
--
Phill Hallam-Baker
I was interested to see the length and breadth of this thread. It's
fascinating that few of the contributors made comments, as above, to the
effect "who cares if we can do nothing fast".
I can't believe the interest in better faster hardware that runs O/S's
that do so little for the user. For years in the PC community I could
not understand why anyone would buy a Mac when it was arguably slower
and more expensive than an IBM clone PC. Finally after years it dawned
on me that while I was running Wordstar *real fast* they were making
documents that I could SIMPLY NOT CREATE! In fact now I realize how
much of a line (fish) I've been swallowing from the whole computer
industry.
Now I'm into NeXT and while the *Better Faster Iron* community is
arguing about how to multiply matrices faster I'm doing more USEFUL
work on my 030 NeXT than on machines 10 or 20 times faster. Why?
Because with NeXT I can do anything that the fast Iron can do (ok slower
:-) BUT I can do stuff that is simply impossible on other platforms.
Sure there are reasons to have fast hardware; I have access to that stuff
when I need it, that's why we have a heterogeneous Network here. Do you
think that Mac's and PC's outsell UNIX workstations 100:1 ? because they
multiply better?
Gad I wish I was writing this on my NeXT so I could run the dictionary
and thesaurus. Gad I wish everyone had multimedia mail so I could post
pictures of dinosaurs from the dictionary to the net.
Just go buy an i860 HyperCube if you want to multiply !
Ian
--
---------------------------------------------------------------------------
Ian Jefferson ij...@ccs.carleton.ca No NeXT mail please!
ij...@computeractive.on.ca NeXT mail please!
>Gad I wish I was writing this on my NeXT so I could run the dictionary
>and thesaurus. Gad I wish everyone had multimedia mail so I could post
>pictures of dinosaurs from the dictionary to the net.
So why didn't you, if it's so much better?
We'd like to know???
Doug McDonald
P.S. I'm writing this from my PC.
Because (with a few exceptions) PC's and Mac's are much cheaper.
|> Gad I wish I was writing this on my NeXT so I could run the dictionary
|> and thesaurus. Gad I wish everyone had multimedia mail so I could post
|> pictures of dinosaurs from the dictionary to the net.
Don't need a NeXT to do that -- dictionary/thesaurus CD-ROMs are available for
most computers and workstations. And sending/posting multimedia objects
(e.g. stupid pictures) didn't begin and doesn't end with the NeXT (ie.
ever heard of SmallTalk or Andrew?)
--
--- It is kind of strange being in CS theory, given computers really do exist.
John Mount: jmo...@cs.cmu.edu (412)268-6247
School of Computer Science, Carnegie Mellon University,
5000 Forbes Ave., Pittsburgh PA 15213-3891
vjs> Be careful using the Sequent or OSF model for "MP-izing" network
vjs> code, of adding (un)locking calls every few lines. [ ... ] Use
vjs> more than a very few locks, or fail to have hardware support for
vjs> locking, and you will find a lot of your cycles are spent fiddling
vjs> with locks. I bet that the OSF or Sequent code suffers from this
vjs> problem. [ ... ]
vjs> You can be politically-correct-MP, or you can be fast, but it is
vjs> not always possible and never easy to be both.
I don't know why, but here we have DEC 5000s (monoprocessor) and DEC
5830s (three processors), running Ultrix 4.2; well, each processor in
the 5830 is slower than the processor in the 5000, but look at these
numbers for compiling a 2961 line program on the 5830:
user 0m11.35s
sys 0m9.75s
and on the 5000:
user 0m4.26s
sys 0m1.46s
Note that on the 5830 (which was quite empty at the time the compile
was going on) system time is almost 50% of the toal CPU time, and on the
5000 system time is less than 25% of the total.
Not that 25% system voerhead is small, but 50% is ludicrous; and as far
I can imagine the only reason must be that the machine is a
multiprocessor.
--
Piercarlo Grandi | JNET: p...@uk.ac.aber
Dept of CS, University of Wales | UUCP: ...!aber-cs!pcg
Penglais, Aberystwyth SY23 3BZ, UK | INET: p...@aber.ac.uk
Is this really very bad? I know that parallel sorting routines sometimes
take less overall time for a slightly larger problem. E.g. pad the data
with a few hundred fake entries and your program takes 5% less time. This
has to do with things like the caching sub-system & collisions etc etc. I
have observed this behaviour on the CM-5, and I believe that it is the
same on other machines.
I am *not* saying that this is desirable; but if it happens fairly
infrequently, *who cares?*
-amit
Amit Gupta
am...@cs.Berkeley.EDU
Yes, and ?
Do these DECstations work or do they not ?
Did you commission a worldbeating development on MIPS chips from DEC ?
Did DEC fail to fulfill the contract ?
Did you ever buy a car to get to that tantalizing open road in the ads ?
Did you ever see a movie because you knew the director needed the money
for making the next film ?
Because if you have a contractual problem, talk to your contracts droid or
your lawyer. DEC have quite openly committed themselves for low cost upgrades
to R4000 chips on the DECstation 5000-xxx range. If they are not available
yet, I would guess that is because the R4000 chips are not that low cost yet.
If you cannot wait the few months until they are, you may have to sacrifice
the low cost attribute.
If you bought the dream of problems solved by compute power from a hardware
vendor, then you will have a few problems to come your way. Car ads always
show you the free road with one car on it - your car, once you paid your money.
But regardless of how much you spend on a car, or how many you buy, actual
roads always seem to stay congested. Try complaining ...
I also see people here spending money on things that are plainly not on offer,
not just in computing; it is the stuff that sales and marketing are made of.
If you spend your own money on your dreams, go ahead, but that is not a solid
business decision.
If the computers you buy earn enough for you to pay for themselves, they are
a good buy, regardless of what they cost or how fast they are. But if they
inhibit you in what you want to do, don't buy them, or walk away from those
you bought.
That primarily means you need to know what you want to do with those computers,
and that is a lot harder than comparing feature lists, discount schemes, or
other marketing hype. Actually, life as such was never easy ...
Thomas
--
*** This is the operative statement, all previous statements are inoperative.
* email: cmaae47 @ ic.ac.uk (Thomas Sippel - Dau) (uk.ac.ic on Janet)
* voice: +44 71 589 5111 x4937 or 4934 (day), or +44 71 823 9497 (fax)
* snail: Imperial College of Science, Technology and Medicine
* The Center for Computing Services, Kensington SW7 2BX, Great Britain
1-8 way SMP is no big deal anymore. Anyone with enough cash in their pocket
can license SVR4 ES/MP (or whatever it's called now) and get decent scalability
at the low end. The technical challenge is to get 16/24/30-way SMP working
ala Encore/Pyramid/Sequent. This requires some significant enhancements
to generally available OS technology. Above and beyond that are clustered
and MPP systems but again the list of successful implementations is still
slim. I'm not saying that the DG Aviion or SGI servers are not good products,
I just think that bragging rights for "best symmetrical multiprocessors in
the industry" lie elsewhere.
--
-m------- Hans Jespersen ha...@pyrtor.UUCP
---mmm----- Pyramid Technology Canada Tel: (416) 490-1165
-----mmmmm--- 2235 Sheppard Ave. East, Suite 1104 Fax: (416) 490-9864
-------mmmmmmm- Willowdale, Ontario, Canada M2J 5B5
Are you perhaps thinking of our former arch-nemesis, Alliant? :-)
: I don't know much about HP except that I have been told that their
: operating system isn't really UNIX. Maybe someone else might want to
: comment?
:
I can't think of any ways that HP-UX is not really Unix.
--
Lee Sailer
- Let's leave my employer
out of this, OK?
sens...@ricky.brainlab.utsa.edu (David M. Senseman) writes:
|I don't know much about HP except that I have been told that their
|operating system isn't really UNIX. Maybe someone else might want to
|comment?
|It's real, it's unix and it's **mighty** idiosyncratic! It's easy
|to port code off HP-UX, but porting onto it was considered difficult.
|The only machine I've considered harder to port to was the honeywell
|dps-6, with its 32-bit integer pointers and 48-bit character pointers.
I typically have little trouble porting applications to HP-UX. There are some
that are well-nigh impossible ports, mind you -- those that are heavily
infested with BSD'isms, for example. But otherwise there are just a few
things to do, like (1) change 'cc' to 'c89', (2) add a -D_HPUX_SOURCE to the
compile line, and (3) add the appropriate X11 and Motif directories to the
link and include paths (i.e. -L<dir> and -I<dir>). These alone do the trick
for a surprising number of programs I've pulled off the net.
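To make step (2) concrete, here is a minimal sketch of why the -D_HPUX_SOURCE
define matters: in strict ANSI mode the HP-UX headers hide declarations that
are not part of the C standard, and the macro turns them back on. (Which
symbols are affected varies by release, so treat this as illustrative rather
than gospel.)

    /* Illustrative only: with c89 in strict ANSI mode, non-ANSI
     * declarations such as gettimeofday() and struct timeval are
     * hidden by the HP-UX headers unless _HPUX_SOURCE is defined,
     * either here or with -D_HPUX_SOURCE on the compile line.      */
    #ifndef _HPUX_SOURCE
    #define _HPUX_SOURCE
    #endif

    #include <stdio.h>
    #include <stddef.h>       /* NULL */
    #include <sys/time.h>     /* struct timeval, gettimeofday() */

    int main(void)
    {
        struct timeval tv;

        if (gettimeofday(&tv, NULL) == 0)
            printf("seconds since the epoch: %ld\n", (long)tv.tv_sec);
        return 0;
    }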
--
Maurice LeBrun m...@fusion.ph.utexas.edu
Institute for Fusion Studies, University of Texas at Austin
Faire de la bonne cuisine demande un certain temps. Si on vous fait
attendre, c'est pour mieux vous servir, et vous plaire.
[menu of restaurant Antoine, New Orleans]
Can I take the standard "elm" distribution, say, and install it and have it
work, with no more than a couple of days hacking? Or any other reasonably
portable (i.e., not System V or BSD-specific) package? How about CKermit?
Other than software portability, what possible criteria can you use to define
an "open" API?
I ran a benchmark (a real non-toy example) on a 16-processor T800-based
machine and it was 3 times slower than a single R3000 processor SGI
Indigo.
If you need 50 processors to better a single-processor machine, you'd
better be scalable to 100+. (I know the T9000 is supposed to be better
but vapourware always is.)
I do not believe that shared-memory is inherently less scalable than
distributed memory. You obviously can't scale a single bus design very
far especially with fast processors, but other interconnects are possible.
Distributed memory machines are generally limited by communication cost,
and so are shared memory machines, except in the latter case the
communication is effected by caching. A scalable distributed memory
program
is one in which communication cost is controlled (ideally does not grow
with number of processors). The same is true of shared memory, except
the programming techniques may be different.
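To put a rough figure on "communication cost is controlled", take the standard
2-D domain decomposition as an illustration (my numbers, purely nominal): an
n x n grid of points spread over P processors, each holding an
(n/sqrt(P)) x (n/sqrt(P)) block and exchanging only its block boundary each
step. Then

    \[
      \frac{T_{\mathrm{comm}}}{T_{\mathrm{comp}}}
        \;\sim\; \frac{4\,n/\sqrt{P}}{\,n^2/P\,}
        \;=\;    \frac{4\sqrt{P}}{n},
    \]

which stays constant if the local block is held fixed as P grows, and grows
only like sqrt(P) for a fixed-size problem. The same ratio applies whether the
boundary moves as explicit messages or as cache lines, which is the point
about shared memory above.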
--
Philip Machanick
Computer Science Dept, Univ of the Witwatersrand, 2050 Wits, South Africa
phi...@concave.cs.wits.ac.za phone: 27 (11) 716-3759 fax: 339-7965
|>|> I don't know what "advanced beta" means either but, while general availability
|>|> of DEC OSF/1 V1.2 (Alpha) is scheduled for March, several hundred Alpha OSF/1
|>|> systems have now begun shipping to software developers under an early release
|>|> program. Also, Version 1.0 of DEC OSF/1 was released last March.
|>|> OPEN SYSTEMS
|>|> TODAY reviewed it in their May 11 issue and praised its stability and
|>|> speed, even
|>|> though V1.0 was only a developer's release for some DECStations. (They
|>
|> I.e. Joe Blow user can't buy it, whereas they can buy Solaris
|> 1.1, 2.0 and now 2.1. They can also get HP-UX 8.x and 9.x(?).
|> Again, vapor city UNIX-wise.
Call a sales rep, tell him you want a DEC 3000 workstation with
the OSF/1 advance kit and they'll be more than happy to sell it to you and
deliver it before the end of the year if you choose to have it that way.
You can also buy DEC OSF/1 V1.0 on DECstation (MIPS R3000) platforms. That's
been shipping since March 1992.
|> Who cares about equal CPU, were they equal in COST and AVAILABILITY
|> of iron and software packages? i.e. A machine I can use to solve
|> my problems today with the software I want is MUCH more
|> valuable than a promise of something being available RSN
Oh, I see that you've swallowed McNealy's line that "Performance now
equals volume and we've won". Now that Sun is asking its customers to
pay the same amount for a fraction of the performance offered by the other
vendors, they want you to believe. Take a look at an SS10-30 and a
DEC 3000 Model 400. Same price, but the Alpha has TWICE the performance. The
Alpha machine also outperforms the SS10-41, which costs $5K more.
Yeah, performance is volume all right. Those who need performance need
Volumes of Suns to do the job....
--
William M. Jackson
Alpha Workstation Product Marketing Manager
Workstation Engineering
Digital Equipment Corporation
146 Main Street
Maynard, MA 01754
>In article <ijeff.723393631@cunews> ij...@hank.carleton.ca (Ian Jefferson) writes:
>>Gad I wish I was writing this on my NeXT so I could run the dictionary
>>and thesaurus. Gad I wish everyone had multimedia mail so I could post
>>pictures of dinosaurs from the dictionary to the net.
>So why didn't you, if it's so much better?
>We'd like to know???
That's easy. The machine I have access to for reading news is a Sun; although we have
some NeXTs here, I don't get to sit in front of them. My NeXT is at home and I haven't
had time to install the SLIP connection.
>Doug McDonald
>P.S. I'm writing this from my PC.
--
It's a System VR4 variant. As for their graphical interface, they use Motif,
and they pass my standards for excellence in a GUI: Apple sued them.
Darrin
--
mdchaney@iubacs mdch...@bronze.ucs.indiana.edu mdch...@rose.ucs.indiana.edu
"I want- I need- to live, to see it all..."
>Oh, I see that you've swallowed McNealy's line that "Performance now
>equals volume and we've won".
Well, it could be argued he's following the same strategy (more units lock out
better machines) as the IBM-PC/Intel world...
Play in the intellectual sandbox of Usenet
-- > SYS...@CADLAB.ENG.UMD.EDU < --
>It's a System VR4 variant. As for their graphical interface, they use Motif,
>and they pass my standards for excellence in a GUI: Apple sued them.
I don't think your standards are very high, then.... Apple also sued Microsoft.
--
/-----------------------------------------------------------------------------\
| Doug Siebert | "I don't have to take this abuse |
| Internet: dsie...@isca.uiowa.edu | from you - I've got hundreds of |
| NeXTMail: dsie...@chop.isca.uiowa.edu | people waiting in line to abuse |
| ICBM: 41d 39m 55s N, 91d 30m 43s W | me!" Bill Murray, Ghostbusters |
\-----------------------------------------------------------------------------/
Those are useless standards if ever I heard of one. I've spent a lot of time
writing filters running on *Xenix 286* so VMS can support all the terminals
we have. VMS has a lookalike to the terminfo/curses library, but the editors
and so on don't use it! If this is the sort of "support" VMS provides, I'll
take CP/M.
> Ethernet,
But being able to use it to talk to anything else takes extra software.
> X.25, X.400,
X.400 has negative value. X.25 is a niche.
> X,
Useful once you buy that extra software. Maybe.
We're using CMU TCP/IP, which doesn't support rsh, so it's pretty much of
a pain DOING anything with X. We sent the Infoserver back.
> Motif,
Frills.
> TCP/IP,
Extra cost.
> ISO/OSI,
We have THREE sets of OSI network software here and NONE of them talk to
each other! Pity, since OpenNET is the best remote file system I've ever
used.
> Posix.
I hope the Posix support is better than the SMG library support.
> I reckon this makes VMS at
> least as open as any other O/S, and more open than most.
So long as you're talking to a marketeer, not trying to make it interoperate,
yes. Of course interoperation is the whole *point* to Open Systems.
I'm hoping for good things from Open VMS, because I just inherited the job
of making all this stuff work together. I'm not holding my breath, though.
--
%Peter da Silva/77487-5012 USA/+1 713 274 5180/Have you hugged your wolf today?
/L{lineto}def/C{curveto}def/F{0 562 moveto 180 576 324 648 396 736 C 432 736 L
482 670 518 634 612 612 C}def/G{setgray}def .75 G F 612 792 L 0 792 L fill 1 G
324 720 24 0 360 arc fill 0 G 3 setlinewidth F stroke showpage % "Peerless"
|>In article <ijeff.723393631@cunews> ij...@hank.carleton.ca (Ian Jefferson)
|>writes:
|>
|>
|>>Gad I wish I was writing this on my NeXT so I could run the dictionary
|>>and thesaurus. Gad I wish everyone had multimedia mail so I could post
|>>pictures of dinosaurs from the dictionary to the net.
|>
|>So why didn't you, if it's so much better?
|>
|>We'd like to know???
Ian is of course completely right. As far as the low end goes, NeXT have ease of
use sewn up. I'm not sure that other vendors haven't started to catch up, though.
If NeXTstep were available for the Alpha I would be very pleased to use it.
The problem with NeXTstep is that it only really takes off when everybody has the
same standard. We need tools which allow these sorts of things; there is already
a global hypertext network, the World Wide Web.
I haven't got a dinosaur, and if I could post one not enough people would read it
to warrant the bandwidth. Here's a PostScript picture from my thesis instead:-
http://zws009.desy.de.:2784/ZWS009$DKA0/
hallam/www/docs/pictures/MULTIPLEXING_ROUTING_PLANES.GRA'
For more details try :-
telnet info.cern.ch
Password: Web (or is it www?)
--
Phill Hallam-Baker
|>In article <Byoo9...@dscomsa.desy.de> Phill Hallam-Baker,
|>hal...@zeus02.desy.de writes:
|>>This is why point to point networks like the transputer, and switching
|>networks
|>>(which have a limit but in the 100+ processor range) are so interesting.
|>
|>I ran a benchmark (a real non-toy example) on a 16-processor T800-based
|>machine and it was 3 times slower than a single R3000 processor SGI
|>Indigo.
|>
|>If you need 50 processors to better a single-processor machine, you'd
|>better be scalable to 100+. (I know the T9000 is supposed to be better
|>but vapourware always is.)
I have used arrays >1000 without switches and >100 with.
ZEUS has a 1000 node embedded array.
Comparing a 1986 processor with a 1992 processor for speed is garbage, and you
should know that. T800 is in 2 micron CMOS and was a first generation device.
The R3000 is a second iteration floating point unit and should be able to take
on 10 or so T800s on an equal footing.
Comparing two architectures on the basis of a single test on two processors from
different generations does not have much value. Parallel code is very sensitive
to the algorithm, partitioning, load balancing system etc.
There is no reason why you could not add link engines to the R4000, Alpha or HP
chip. Given the right O/S you could then take on CRAY and give them a whipping.
DEC will in fact license the Alpha chip die, so if someone were willing to do the
work it could happen.
|>I do not believe that shared-memory is inherently less scalable than
|>distributed memory.
Read Hockney and Jesshope, Parallel Computers II. They go through it in all the
gory detail. Switches and busses simply do not scale. Either the performance
does not keep up or the cost rises with the square of the connections.
|>You obviously can't scale a single bus design very
|>far especially with fast processors, but other interconnects are possible.
And those interconnects end up as loosely coupled ensembles of locally tightly
coupled machines. The point about shared memory is not so much the hardware
model; by all means use shared memory to implement fast links. ZEUS uses that
very architecture. But the shared memory is only being used as an optimisation
of a loosely coupled paradigm and not as a fundamental basis.
|>Distributed memory machines are generally limited by communication cost,
|>and so are shared memory machines, except in the latter case the
|>communication is effected by caching. A scalable distributed memory
|>program
|>is one in which communication cost is controlled (ideally does not grow
|>with number of processors). The same is true of shared memory, except
|>the programming techniques may be different.
As the chip die shrinks it will no longer be possible to efficiently use the
extra area to optimize a sequential processor. Two routes suggest themselves,
either grow a large on board cache or stick an extra processor on. Small scale
parallelism on the tightly coupled model will happen at the chip level. For the
large machines you need to get out of the shared memory paradigm. You have to
reduce communication costs by accepting data locality.
I consider the demand that every processor in a network have equal access time
to every memory element to be an unreasonable one. If you abandon it you have
very much more freedom to design the hardware.
--
Phill Hallam-Baker
Like I said, I don't know much about HP. I went back and asked my
colleague (computer scientist) and this is what he said:
"It is fairly nonstandard although my impression is that it is not as
nonstandard as AIX. This means that porting software to HP-UX is harder
than to most versions."
This same idea was mentioned by several other individuals, both on
"comp.sys.sgi" and to me via E-Mail. Remember that an academic
institution such as ours must make HEAVY use of public domain
software. For better or worse, much of this software is written
on Sun workstations -- and often uses a number of BSDisms. As I
understand it, HP-UX offered much less BSD "stuff" than Sun or
SGI's IRIX for that matter. Also remember that many faculty are
really OVERWORKED between teaching, reseach and f*#@$ committees.
We just don't have that much free time to hack away at "non-standard"
code. Also keep in mind that the original thread was from an academic
department (chemistry) and it was to them who I was responding
in my post.
While I will spend hours bashing IBM for what they did
to me (long story), I have no axe to grind with HP or DEC for that
matter. I wish them both well. SGI needs a little competition to
keep the new products coming :-)
--
David M. Senseman, Ph.D. | A man who has never gone to school may steal
(sens...@lonestar.utsa.edu) | from a freight car; but if he has a university
Division of Life Sciences | education, he may steal the whole railroad.
UT San Antonio | Theodore Roosevelt (1858-1919)
But Cray is the company that is building a machine with (initially)
2048 Alphas. The design, BTW, is an 8x8x8 mesh, torus along each
dimension, with 16 MB per Alpha, no secondary caches, and the memory
cost exceeds the cost of the Alphas.
>But the shared memory is only being used as an optimisation
>of a loosely coupled paradigm and not as a fundamental basis.
Then I don't know what you mean by "fundamental". Both Cray and KSR
allow programs random access to the entire collective (virtual)
memory. Programs with better locality run faster, but unoptimized
programs *do* run, right off: a useful property.
--
Don D.C.Lindsay Carnegie Mellon Computer Science
Even the Japanese couldn't whip CRAY after 550 million in
gov't spending.
This is not a dispute or a flame, but could you provide a
source for your comment--I'd not heard it.
--
Snout: O Bottom, thou art chang'd! What do I see on thee?
Bottom: What do you see? You see an ass-head of your own, do you?
---"I despise mystics, they fancy themselves so deep, when they----
----aren't even superficial" --Nietzsche ---------------------
Err, not to start a dispute over the desirability of such government
subsidies of supercomputer industries, but the US government has
historically given large subsidies to the US supercomputer industry.
In particular, large amounts of applied research are funded by the
US government (NRL in the "old days", lately DARPA and SEMATECH),
and the US government also lobbies strongly against government-funded
organizations (eg NCAR, US universities) buying Japanese supercomputers.
Moreover, the bomb labs (Los Alamos and Livermore) and NSA all sign
up well in advance to buy the latest US supercomputers (IBM Stretch,
CDC 6600/7600, Cray 1/1-S/X-MP/Y-MP/2/C-90/, CM-1/2/5, ... [*]).
In an industry where *total* sales of a successful model generally
fit in an (8 bit) byte with room to spare :-), having a guaranteed
3 sales up front (Los Alamos, Livermore, NSA) can be a significant
boost in raising capital to finance development. And having these
sales be almost independent of price, and knowing that NSA will
pay large amounts of money for custom crypto hardware in place of
FP, certainly doesn't hurt either...
The above statements, I believe, are all matters of historical
record, and aren't (shouldn't be) controversial. The *desirability*
of such subsidies is a topic for considerable debate (flame wars),
but that's beyond the charter of comp.arch .
[*] Yes, I know about the Cray-3. But it was (is) a superfailure,
not a supercomputer.
- Jonathan Thornburg
<jona...@geop.ubc.ca> through end of (calendar) 1992, then
<jona...@einstein.ph.utexas.edu> or <jona...@hermes.chpc.utexas.edu>
[for a few more months] U of British Columbia / {Astronomy, Physics}
[then through Aug/92] U of Texas at Austin / Physics / Relativity
Every DEC salesman seems to come out with the CRAY-alpha project almost as soon
as they are in the door.
I wasn't so much commenting on outperforming CRAY on raw performance; however,
since they are using a readily available technology they could be outperformed
on other fronts. Cost, for example. Architecture for another. Seymour may be
good at building computers, but is CRAY?
--
Phill Hallam-Baker
|>
|>In article <ByrFJ...@dscomsa.desy.de> Hal...@zeus02.desy.de writes:
|>>There is no reason why you could not add link engines to the R4000,
|>>Alpha or HP chip. Given the right O/S you could then take on CRAY and
|>>give them a whipping.
|>
|>But Cray is the company that is building a machine with (initially)
|>2048 Alphas. The design, BTW, is an 8x8x8 mesh, torus along each
|>dimension, with 16 MB per Alpha, no secondary caches, and the memory
|>cost exceeds the cost of the Alphas.
Memory cost exceeds the cost of the Alphas.
This is the critical flaw in the exploitation of trivial parallelism: the
computable unit requires a large memory space - virtual memory is not applicable
in this case since comms bandwidth is the major limitation on speed.
If you go to a different software paradigm you can exploit the finer and
intermediate grain parallelism. The key is to retain sufficient flexibility so
that you can tackle any problem whilst maximizing the number of processors you
get for your money.
|>>But the shared memory is only being used as an optimisation
|>>of a losely coupled paradigm and not as a fundamental basis.
|>
|>Then I don't know what you mean by "fundamental". Both Cray and KSR
|>allow programs random access to the entire collective (virtual)
|>memory. Programs with better locality run faster, but unoptimized
|>programs *do* run, right off: a useful property.
Virtualizing shared memory at a high level is not the same as implementing it in
the low level hardware. We can easily envisage a scheme in which a large virtual
memory distributed across the compute surface is available. This provides a
useful layer of abstraction and should be controllable; most parallel programs
end up twiddling caching schemes for load balancing.
It would be nice to have a system where at the base level you start with a
program running parallel threads across the distributed memory; to optimize, you
create a file, separate from the algorithmic code, which tunes the
filling/locking/replication parameters. Ideally the tool would be an expert
system and allow some measure of autotuning.
But this does not absolve the programmer from breaking the problem up into
parallel work units. The problem, though, is that languages such as C, Ada etc. do
not have the parallel constructs at the right level to achieve this. The
primitives (threads) are just too low level. They have the info on what to do,
but not why to do it. The latter is essential if you are going to load
balance/optimize in a portable way.
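A toy fragment (mine, not a sketch of any real system) of what "what plus why"
might look like at the program level: instead of a bare thread-spawn call, each
unit of work carries a declaration of the data it touches and a rough cost,
which is exactly the information a load balancer would need to place or migrate
it.

    /* Toy only: a work-unit descriptor that says *what* to compute and
     * *why/where* it belongs (data touched, relative cost), rather than
     * hard-wiring the partitioning into a bare thread-spawn call.       */
    #include <stdio.h>

    struct work_unit {
        int    first_row, last_row;   /* what to compute                       */
        int    data_block;            /* which partition of the data it reads  */
        double est_cost;              /* relative cost, for load balancing     */
    };

    /* Stand-in for a runtime scheduler: it just prints its placement
     * decisions here, but it has enough information to make them.       */
    static void schedule(const struct work_unit *u, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            printf("unit %d: rows %d-%d, block %d, cost %.1f\n",
                   i, u[i].first_row, u[i].last_row,
                   u[i].data_block, u[i].est_cost);
    }

    int main(void)
    {
        struct work_unit units[2] = {
            {   0, 499, 0, 1.0 },
            { 500, 999, 1, 1.0 }
        };
        schedule(units, 2);
        return 0;
    }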
--
Phill Hallam-Baker
I wouldn't call it "readily available technology" when the chip
packages touch cold plates containing Fluorinert channels. Cray
Research also have their own ECL fab line, and a deal with Motorola
to keep the line state-of-the-art. (Yes, I guess it's costly.)
As for CRAY, they used to be divided into the 1,2,3 group and the
X,Y,C group. Seymour Cray himself worked mostly on the 1,2, and 3.
The others, the X-MP, Y-MP and Y-MP C90, were/are fine machines. The
upcoming (500 MHz?) machine should be as good, although
competitiveness is a harder call.
>In article <1992Dec1.2...@ryn.mro4.dec.com> Peter...@cao.mts.dec.com writes:
>> And what makes you think OpenVMS isn't open?
>
>Can I take the standard "elm" distribution, say, and install it and have it
>work, with no more than a couple of days hacking? Or any other reasonably
>portable (i.e., not System V or BSD-specific) package? How about CKermit?
If they're POSIX compliant, yes.
>%Peter da Silva
PJDM
--
Peter Mayne | My statements, not Digital's.
Digital Equipment Corporation |
Canberra, ACT, Australia | "AXP!": Bill the Cat
>Memory cost exceeds the cost of the Alphas. This is the critical flaw
>in the exploitation of trivial parallelism: the computable unit
>requires a large memory space - virtual memory is not applicable in
>this case since comms bandwidth is the major limitation on speed.
Actually, that's an interesting horse race. Relative to the CM-5,
Cray has decided to (initially) use more expensive packaging: a
faster clock; more expensive memory (I assume); no vector units; and
virtual memory. I also believe that they have higher interconnect
bandwidth, although of course contention and latency will be
fascinating questions.
(Anyone care to guess what "one quarter the bisection bandwidth of
the C90" comes out to, per path? Where do you bisect a C90? If that
means that 8x8 paths == 1/4 ( 16 CPU * 4 ports * 16 B * 250 MHz),
then one path = 8 B * 150 MHz ???? )
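For what it's worth, one way to run the arithmetic in that parenthetical,
taking its figures at face value (the treatment of the torus wrap-around is my
guess):

    \[
      \tfrac14 \,(16 \times 4 \times 16\ \mathrm{B} \times 250\ \mathrm{MHz})
        = 64\ \mathrm{GB/s},
      \qquad
      \frac{64\ \mathrm{GB/s}}{8 \times 8\ \text{paths}}
        = 1\ \mathrm{GB/s\ per\ path}
        = 8\ \mathrm{B} \times 125\ \mathrm{MHz}.
    \]

If the wrap-around links of the torus are counted as well, the bisection cut is
2 x 8 x 8 = 128 links and the per-path figure halves again.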
How are we to compare these, in the absence of benchmarks? One point
is that designs are moving to more powerful nodes, rather than to
more numerous nodes, presumably on the theory of being more broadly
applicable. MasPar, oddly enough, both defies this trend ( 16 K PEs )
and follows this trend (the PEs recently got faster).
Even that's not enough -- it gets far worse. Caches, MMUs, paging, DMA and a
host of other effects can render your carefully constructed benchmark results
meaningless. Benchmarking OSes is _extremely hard_.
>Jussi Eloranta Internet(/Bitnet): ! The ultimate trip is
>University of Jyvaskyla, Jussi.E...@jyu.fi ! death.
>Finland [130.234.0.1] ! -- Jim Morrison
Mat
| Mathew Lodge | "I don't care how many times they go |
| mj...@minster.york.ac.uk | up-tiddly-up-up. They're still gits." |
| Langwith College, Uni of York, UK | -- Blackadder Goes Forth |
jonathan> applied research are funded by the US government (NRL in the
jonathan> "old days", lately DARPA and SEMATECH), and the US
jonathan> government also lobbies strongly against government-funded
jonathan> organizations (eg NCAR, US universities) buying Japanese
jonathan> supercomputers.
Also include NASA in that list. It's no secret (ie: it was in the
newspapers) that Congress cut our budget when they heard a rumor that
we were *considering* buying an SX-3. Right now we're in negotiations
to buy a C90 with all the bells & whistles (to the tune of ~$70
million). Also, sites like NASA are likely to buy "not ready for
prime-time" supercomputers like the iPSC/860, CM-2, or KSR which have
an even smaller market than Convexen, Crays, etc.
--
J. Eric Townsend -- j...@nas.nasa.gov -- 415.604.4311
'92 R100R, DoD# 0378, BMWMOA, AMA, ACLU, EFF, CPSR
"HotHead Rules!"
|>
|> This is not a dispute or a flame, but could you provide a
|> source for your comment--I'd not heard it.
|>
Not taken as such. Reference _Digital Review_ Feb 17, 1992:
Headline article: "Cray Research licenses Alpha CPU for use in
massively parallel system". It also seems to get mentioned in about
every third article which gives an overview of DEC's Alpha strategy.
Actually, any discussion about government policy with respect
to computer purchases belongs elsewhere (unless it's contingent on
architectural concerns). However, I don't think this should be allowed
to pass without comment. My comment is simply that "turnabout is fair play."
In my last job, I worked with a group of high-energy physicists who
were doing an experiment at a facility in Japan, in collaboration with
University of Tokyo and some other institutions. The Japanese government
was very reluctant to allow any US-built computer hardware to be used on
the experiment, preferring instead that everyone be forced to use a
Fujitsu supercomputer. When it was made clear to the government that
the experiment _couldn't_ work that way, our institution was allowed to
import a VAX, but had to retain ownership of it in spite of the fact
that this abrogated the agreement in which the Japanese were expected
to provide the computing equipment for the experiment.
I decline to state whether rigid government regulations about
whose equipment can be acquired are rational or irrational. I simply
argue that the Japanese can least of all cry "foul."
Followups by mail, please.
>In article <1992Dec3.1...@brt.deakin.edu.au> dou...@brt.deakin.edu.au (Douglas Miller) writes:
>> I reckon this makes VMS at
>> least as open as any other O/S, and more open than most.
>So long as you're talking to a marketeer, not trying to make it interoperate,
>yes. Of course interoperation is the whole *point* to Open Systems.
Not really. Open systems = "not closed systems"
"Closed systems" means proprietary systems (i.e. a system with hardware
and/or software available only from one vendor).
Note this has nothing to do with interoperability.
So: Where can you get hardware that runs VMS? DEC alone.
Where can you get VMS? DEC alone.
Ergo, VAX/VMS (or Alpha/VMS) is not an open system.
Know your terminology, and correct the marketing types if necessary.
Note:
"Open system" does not necessarily mean "good system" (this is a marketing-
induced myth). Example: 386 PC+DOS is pretty much an open system: hundreds
of system vendors, at least three chip vendors (Cyrix, AMD, Intel), and two
OS vendors (Microsoft, Digital Research). But is a 386 PC + DOS a "good"
system? Certainly not for some meanings of "good".
"Openness" generally requires access to technical specifications for
various system pieces, and/or strict conformance to standards. Eg. beware
the "open" systems vendor who won't give you detailed technical specs for their
hardware, out of the fear that you might write an OS for their box that
would compete with theirs (a certain prominent workstation manufacturer
comes to mind).
In my experience, the term "Open Systems" in the mouth of a marketing type
often means something like "I'm going to OPEN my mouth now and talk about
our SYSTEMS." :-)
Regards,
John
--
John DiMarco j...@cdf.toronto.edu
Computing Disciplines Facility Systems Manager j...@cdf.utoronto.ca
University of Toronto EA201B,(416)978-1928
>> Gad I wish I was writing this on my NeXT so I could run the dictionary
>> and thesaurus. Gad I wish everyone had multimedia mail so I could post
>> pictures of dinosaurs from the dictionary to the net.
>
>Don't need a NeXT to do that - dictionary/thesaurus CD-ROMs are available for
>most computers and workstations. And sending/posting multimedia objects
>(e.g. stupid pictures) didn't begin and doesn't end with the NeXT (ie.
>ever heard of Smalltalk or Andrew?)
It might not have begun or ended with the NeXT but the NeXT is a damn good platform for doing it. Are you doing it? Are most people with PCs and Suns/etc. doing it? Have you seen how the spell checker and online dictionary work on the NeXT? (I'm talking about the ones you get when you buy the machine, configured to work out of the box too). You are missing the point the original poster made.
On a NeXT, learn how to use the dictionary, spell-checker, etc. in one app and you know how to do it for them all. There's also nothing quite like sending around tiffs, sounds, binaries, directories, etc. via email/usenet and being able to launch them from mail or drag and drop them into file folders the way you do on the NeXT. I do this nearly every day from the NeXT.
When I used to tell people about Unix and X11, a lot of people would say "yeah, but you could do that on a PC or the Mac (or even worse VM/CMS)." Sure - and were they doing it? Most often they weren't, because their machines didn't come set up that way. Now it's the same thing when I tell people about the NeXT. With a NeXT, you plug a brand new one in, and it's all there. And when you develop an app, it's all there.
Before I had a NeXT I was skeptical of what it offered over other (faster) workstations. Now that I use a NeXT, a Mac, and a SPARC every day it is very clear that the NeXT is a very sophisticated tool, and easy to use (my non-computer hacker wife uses the NeXT every day and I've hardly explained anything to her). For me, it's the best of the lot.
The NeXT is, simply put, consistent. And I haven't even gotten into how you develop an app on it, and how to tie services like Mail, spell checking, screen grabs, etc. together. (And it's a hell of a lot prettier than Windows or X11 or Open Look.)
It's hard to tone down enthusiasm for a NeXT once you use one all the time. If you haven't had a chance to use a 3.0 system for a while, take a look if you can.
(And if I were home right now instead of at work I would have posted this from my NeXT.
Soon, I'll be posting from a NeXT at work too. Right now I'm using the interface from xvnews which uses Sun OpenWindows to post this. What joy.)
---
Gordie Freedman: gor...@kaleida.com Sigs are for kids
My opinions here, not my employers
Sure have. I'll get my opinion in early, and just once. As I've always
maintained, the current definition of "open" is marketroid bullshit. Means
whatever's convenient in any particular salesman's effort to push more
(generally inferior) product. I don't buy systems that are described as "open"
more than once on the first page of the product literature, as a rule of thumb.
A real definition of "open"? Here's mine:
It's not an "open system" unless the base price includes a source license.
>
>PJDM
>--
>Peter Mayne | My statements, not Digital's.
>Digital Equipment Corporation |
>Canberra, ACT, Australia | "AXP!": Bill the Cat
--
*******************************************************************************
*Thor Simon * Okay, just a little pin-prick...There'll be no more-*
*t...@panix.COM * Aieeeeaaaugh!-but you may feel a little _sick_. *
*t...@spock.UUCP * ---Pink Floyd *
*******************************************************************************
Very weird. Only politics could make a VAX and a supercomputer
alternatives.
--
Philip Machanick
Computer Science Dept, Univ of the Witwatersrand, 2050 Wits, South Africa
phi...@concave.cs.wits.ac.za phone: 27 (11) 716-3759 fax: 339-7965
Well, a definition I always use is from ``Distributed Systems: Concepts
and Design'' by George F. Coulouris and Jean Dollimore, p.40:
``[An open system is] one that provides access to a set
of services that is variable and can be extended.''
This is, in turn, based upon a definition by Lampson and Sproull in 1979
in their conference paper ``An open operating system for a single user
machine'' which gave the definition of an open operating system as one,
``offering a variety of facilities, any of which the user
may accept, reject, modify or extend.''
Now to my mind these give a much better basis for discussion than what
the marketeers have told people open means (usually ``our machines and
no one else's'' ;-) ). How open VMS is I can't really say as I'm not a
heavy VAX user. However, UNIX probably comes pretty close; if you
don't like ls(1), you can write your own. In the days when source for
the OS came with it (or if you take something like the BSD386 release),
then I'd say it was totally open; you can take any part of the system,
be it in the kernel or the user space, and rewrite it.
Jon
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Jon Knight, Research Student in High Performance Networking and Distributed
Systems in the Department of _Computer_Studies_ at Loughborough University.
** I can't even talk to you without the aid of chemicals or lies. - Pele **
"If it ain't source, it ain't software".
(attributed to Dave Tweten, or so says Eugene)
Rob
---
-legal mumbo jumbo follows-
This mail/post only reflects the opinions of the poster (author),
and in no manner reflects any corporate policy, statement, opinion,
or other expression by Network Systems Corporation.
It can be done, if you keep your objective in mind.
True, industry benchmarks suffer from the "Molehill effect": A certain
task is accepted as a benchmark and the time it takes the machine to do it
is quoted, which is all very sporting as long as moles are doing it. It is
often followed by cries of "unfair" when somebody then brings out a
Caterpillar truck and cleans up.
But it is difficult to see how manufacturers of computers can do anything
else, as the resulting numbers must be comparable. It is also not realistic
to set up "standard tasks" that should run in an hour in a few years' time,
but unfortunately take several months to run with today's computers.
As a user, however, you can do a lot better, by taking a scaleable task and
testing the machine to breakdown. Many finite element tasks are easily
scaleable: just increase the mesh size, and take note of whether the problem
complexity grows linearly or with the cube (or whatever power) of the parameter.
This gives you in particular an idea of where a system breaks down first.
Take an example I once had: I used a matrix inversion by an algorithm similar
to Gaussian elimination on two machines, fred and bill, and I increased the
matrix size. I got the following results:
machine     N    array size     real time      user time    system time    %
 name            (bytes x1000)   h:m:s          h:m:s         h:m:s

fred      100         160          0:05.4        0:01.2         0.0       23
fred      200         640          0:25.3        0:08.3         0.0       33
fred      500       4,000          9:51.4        3:53.8         0.6       40
fred     1000      16,000       2:02:18.4       38:10.6         3.0       31
fred     1500      36,000     154:31:54.7     3:12:01.9         6.5        2

bill      100         160             3.2           1.8         0.0       56
bill      100         160             4.0           1.8         0.0       40
bill      200         640            30.0          13.9         1.2       50
bill      200         640            41.0          15.0         0.1       36
bill      500       4,000         10:53.0        5:10.7         3.8       48
bill     1000      16,000       1:15:03.5       45:08.3        59.6       61
bill     1000      16,000       1:19:09.3       45:22.0      1:00.0       59
bill     1500      36,000       9:21:58.0     2:56:07.8     29:59.3       37
Both machines had 32 Mbytes of memory; for small arrays fred was a little faster
in user time. For the last size tested (1500x1500), bill got a nominally
better user time than fred, but had a significant system time overhead, so
that fred was still "faster". Looking at the real time required, however, bill
did the job overnight, while fred spent almost a week on it.
The reason is that naive Gaussian elimination algorithms have an almost
random memory access pattern and page furiously once the matrix is no
longer completely in memory. Thus, bill could still be used for tasks of
this ilk if they were handled sensibly, like only one of them and not
during the day, while on fred this task would have prompted administrative
action. Needless to say, bill felt "strained" while this was going on;
fred totally bombed out (ever tried editing a file when character echo
takes a minute? :-()
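For anyone who wants to try this sort of test-to-destruction themselves, the
driver does not have to be fancy. A bare-bones sketch (not the code used
above, just the idea) is to run a naive elimination kernel under time(1) for
growing N and watch where real time pulls away from user time:

    /* Sketch only: naive Gaussian elimination on an N x N matrix of
     * doubles, N given on the command line.  Run it under time(1) for
     * increasing N; the point where real time diverges from user time
     * is where the machine starts to page.  (No pivoting, random data.) */
    #include <stdio.h>
    #include <stdlib.h>

    static void eliminate(double *a, int n)
    {
        int i, j, k;
        for (k = 0; k < n; k++)
            for (i = k + 1; i < n; i++) {
                double m = a[i * n + k] / a[k * n + k];
                for (j = k; j < n; j++)
                    a[i * n + j] -= m * a[k * n + j];
            }
    }

    int main(int argc, char **argv)
    {
        int n = (argc > 1) ? atoi(argv[1]) : 500;
        int i;
        double *a = malloc((size_t)n * n * sizeof *a);

        if (a == NULL) {
            fprintf(stderr, "no memory for n = %d\n", n);
            return 1;
        }
        for (i = 0; i < n * n; i++)
            a[i] = 1.0 + (double)rand() / RAND_MAX;   /* keep pivots away from 0 */
        eliminate(a, n);
        printf("n = %d done, a[n*n-1] = %g\n", n, a[n * n - 1]);
        free(a);
        return 0;
    }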
Such scale tests to destruction also allow you to see what configuration
is still suitable. Just a fast CPU does not help if you cannot add enough
memory so that your typical tasks can make use of the CPU speed.
Unfortunately test results like that are almost impossible to report ...
Thomas
--
*** This is the operative statement, all previous statements are inoperative.
* email: cmaae47 @ ic.ac.uk (Thomas Sippel - Dau) (uk.ac.ic on Janet)
* voice: +44 71 589 5111 x4937 or 4934 (day), or +44 71 823 9497 (fax)
* snail: Imperial College of Science, Technology and Medicine
* The Center for Computing Services, Kensington SW7 2BX, Great Britain
You may wish to reconsider your comment concerning who is "not ready
for prime-time". The KSR I played on at Supercomputing '92 looked ready
for real work to me!
Cray Research has a few things going for it:
1) About 16 years of experience in Chippewa Falls and Eagan; don't
knock 16 years of building supers, it's definitely a plus.
2) Experience with the SOFTWARE end of trying to get the most out
of hardware. As everyone knows, it's usually NOT the hardware
that's the problem, it's the software that drives the hardware.
3) 16 years gives you lots of contacts to find and get the people
you need to make a product work.
4) Years of experience in cooling, cases and mechanical issues. Cray
Research has a knack of using fairly common technology but
cranking up the clocking and providing more efficient cooling. You
don't just perfect this sort of thing overnight. It takes a while
to get it right and to train a workforce that can consistently
build things correctly.
Of these, I think the software is the biggest thing in Cray Research's
favor. The company that licks the software issues in massively
parallel machines wins the prize and Cray Research has a good
start.
Cray Research has done the Y-MP and C-90 without much input from
Seymour and they seem to have done OK. They also seem to see
the writing on the wall for small numbers of custom processors,
thus the switch to SPARC and Alpha as main CPUs and using Cray
Research know-how in other areas to drive performance up on
commodity CPUs. The SPARC servers also let them leverage
the software applications into their market, i.e. running
1-2-3 directly on a CRI SPARC superserver feeding into/out of
a cruncher running on a CRI Alpha massively parallel box. Seems
like a shrewd move on CRI's part.
The future belongs to companies that can get software to work
on massively parallel hardware that uses commodity parts in
clever ways and sell it all at a decent price. I'd be willing
to bet a fair chunk of money that CRI will be one of the prominent
companies in this area.
-Rob
My understanding of 'open systems' would be that 'open interfaces' is a
better description. Consider your example of the ls command. You can
only re-write it because the system calls that you need to make to
obtain the directory information are documented (i.e. 'open').
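To make that concrete: a stripped-down ls needs nothing beyond the documented
directory-reading interface (opendir/readdir/closedir, in POSIX.1), which is
exactly the kind of open interface being talked about. A minimal sketch:

    /* Minimal ls: possible precisely because the directory interface
     * (opendir/readdir/closedir, POSIX.1) is documented, i.e. "open".  */
    #include <stdio.h>
    #include <dirent.h>

    int main(int argc, char **argv)
    {
        const char *path = (argc > 1) ? argv[1] : ".";
        DIR *d = opendir(path);
        struct dirent *entry;

        if (d == NULL) {
            perror(path);
            return 1;
        }
        while ((entry = readdir(d)) != NULL)
            printf("%s\n", entry->d_name);
        closedir(d);
        return 0;
    }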
As an example, various companies have expressed displeasure recently because
Microsoft are believed to be using undocumented Windows calls in their
applications. Assuming that this is the case (and I don't know one way or the
other), this prevents Windows from being a truly open system (not that it
has necessarily ever claimed to be). The key point underlying this is that
people fear that Microsoft are deriving an unfair commercial advantage from this
information.
A second key part of this is that the interfaces are stable, i.e. they do not
change arbitrarily, which implies that they are public or published in some
manner.
Note that none of this requires access to the source code for the underlying
system, just the definition of the interfaces and the appropriate tools to
be able to access them. This obsession with having the source code to
everything (I mean generally, not you personally) is a recipe for disaster.
I remember a quote (but not the person responsible) along the lines that
there were only half a dozen people in the world who could work on the Unix
kernel source and leave it in a better state than when they started. This is a
bit overstated, but I believe that the general sentiment is right.
Andy..
--
--------------------------------------------------------------------------
Andy Jackson - a...@cam-orl.co.uk - phone 0223-343308 - fax 0223-323542
Olivetti Research, 24a Trumpington Street, Cambridge, UK, CB2 1QA
--------------------------------------------------------------------------
Wrong.
>
> "Closed systems" means proprietary systems (i.e. a system with hardware
> and/or software available only from one vendor).
>
> Note this has nothing to do with interoperability.
>
> So: Where can you get hardware that runs VMS? DEC alone.
> Where can you get VMS? DEC alone.
>
> Ergo, VAX/VMS (or Alpha/VMS) is not an open system.
Err... Open Systems means that you aren't constrained to one vendor; it also
means you shouldn't get into a "lock in" situation with a vendor, i.e.
you should be able to move to another platform. Well, obviously unless
the OS is the same and you are binary compatible at the IS level, as well
as for data, you can kiss that goodbye, but for the mean time let's
run through that again, with slightly different questions.
So: Where can you get hardware that runs HP-UX? HP alone.
Where can you get HP-UX? HP alone.
So: Where can you get hardware that runs Ultrix? DEC alone.
Where can you get Ultrix? DEC alone.
So: Where can you get hardware that runs AIX? IBM alone.
Where can you get AIX? IBM alone.
So: Where can you get hardware that runs SunOS? Sun alone.
Where can you get SunOS? Sun alone.
So: Where can you get hardware that runs Solaris? Sun alone (for workstations).
Anyone (for PCs :-) ).
Where can you get Solaris? SunSoft alone.
etc for just about any vendor, for just about any hardware, for just about
any OS.
This particular line of thinking is in itself correct, but in terms of open
systems is bogus.
Sure they're all UNIX, but you can't take an app from a Sun (for example) and
run it on a DEC or HP or IBM.
VMS is not open, but Open VMS has a POSIX layer and those things that are meant
to be included in an Open System, *whatever they are*. And BTW the Alpha version is
the Open VMS version.
I'm no VMS fan (although I do read NEWS on it), I like UNIX. All our code
runs on VMS and any UNIX box we've got. It supports X, Motif, various
databases and signal handling (yes a bit of conditional stuff, cos VMS does
that differently). Our application is non-trivial to say the least.
The worst platform to port to was the SUN. Loads of definitions were missing
from the headers. That's OK since we have a config header file with all the system
dependencies in it. The IBM and HP ports:- main problems are that the compilers
are more thorough than gcc, so trailing commas in enums or using NULL instead
of 0 are thrown out as errors. Sometimes you find that char is signed or
unsigned by default (caused a mega bug on IBM kit, that one). And once you
have this config header file to sort out missing or incorrect definitions
(a sketch of such a header follows the list below), bingo, compile on any
platform you want. That means this lot:-
DECstation
VAXstation
SPARCstation
DG Aviion
HP snakes
IBM RS/6000
Alpha
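The config header mentioned above does not have to be clever; a sketch of the
sort of thing I mean (the names and the particular entries are invented for the
example) is:

    /* config.h -- central home for per-platform differences (sketch only;
     * the real file grows one entry per porting surprise).               */
    #ifndef CONFIG_H
    #define CONFIG_H

    /* char is signed on some compilers and unsigned on others (the IBM
     * surprise); use an explicit type wherever the sign matters.         */
    typedef signed char s_char;

    /* Declarations that one vendor's headers forget get patched in here,
     * guarded by that vendor's predefined macro, for example:            */
    #ifdef sun
    extern char *getenv();        /* illustrative only */
    #endif

    /* Things a strict ANSI compiler rejects outright (trailing commas in
     * enums, NULL where a plain 0 is wanted) are fixed in the code itself
     * rather than papered over here.                                     */

    #endif /* CONFIG_H */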
When we get a new box, the initial port of the software base, before everyone
gets their hands on it, takes between 1 and 3 days. After that, continual
development happens on all platforms simultaneously. Now if that isn't open
I don't know what is, and surprise surprise, VMS is in there too, and they can
all talk to each other too, and act as clients etc. etc. etc.
Yes, and the Alpha port is painless too, although trying to get a database out
of anyone isn't easy (if running OSF/1). Interestingly, our ordinary vanilla
VMS can run our software. We don't need Open VMS!
To sum up, I think you were out of line on the VMS statement, even though I
strongly believe VMS stands for "VOMIT MAKING SYSTEM".
--
sn...@lsl.co.uk
muso/unix joke: "which debugger do you use?"
"I use dbx..."
"Oh really, we use Dolby C..."
Motorola inside.
|>Well, a definition I always use is from ``Distributed Systems: Concepts
|>and Design'' by George F. Coulouris and Jean Dollimore, p.40:
|>
|> ``[An open system is] one that provides access to a set
|> of services that is variable and can be extended.''
|>
|>This is, in turn, based upon a definition by Lampson and Sproull in 1979
|>in their conference paper ``An open operating system for a single user
|>machine'' which gave the definition of an open operating system as one,
|>
|> ``offering a variety of facilities, any of which the user
|> may accept, reject, modify or extend.''
|>
|>Now to my mind these give a much better basis for discussion than what
|>the marketeers have told people open means (usually ``our machines and
|>no one else's'' ;-) ). How open VMS is I can't really say as I'm not a
|>heavy VAX user. However, UNIX probably comes pretty close; if you
|>don't like ls(1), you can write your own.
You can under VMS; in fact I have the abbreviation
ls :== "DIR"
in my login.com, specifically to stop confusion when using both.
But you could also go the whole hog and install an ls command (or even replace
DIRECTORY) should you want to. You can even replace the whole of DCL_TABLES,
or if that is not enough you can replace the command interpreter
itself (e.g. use the POSIX one).
|> In the days when source for
|>the OS came with it (or if you take something like the BSD386 release),
|>then I'd say it was totally open; you can take any part of the system,
|>be it in the kernel or the user space, and rewrite it.
The real test is doing it without any privs and without affecting other users. I
have suffered a great many sysops with the customization bug who customize the
user space... Often this makes a system unusable.
The sources aren't enough for a system to be open any more. If you indulge in
source hackery you have diverged and your system will not be supported in
future. What I want is a system that I can add custom device drivers etc. to
without source code hackery.
I have seen people replace the VMS scheduler without source code hackery; that
to me is an open system.
It's not that I am particularly pro-VMS as such; I would like a new O/S to play
with. It may be possible to do everything with UNIX, but the same holds for a
Turing machine, and that does not make it a useful O/S.
--
Phill Hallam-Baker
Not quite true. Solbourne and Tadpole are two names that spring to mind
that make Sun-compatible kit running SunOS (or Solaris) derived
operating systems. There are loads of other no-name vendors doing the
same thing. Sun-compatible hardware, at least at the low end, is
beginning to look like the PC world.
Umm, a slight error, it should be
Where can you get hardware that runs Solaris? Sun or any of the
dozen or so clone makers
The same applies for Solaris below.
|So: Where can you get hardware that runs Solaris? Sun alone (for workstations).
| Anyone (for PCs :-) ).
| Where can you get Solaris? SunSoft alone.
|
|This particular line of thinking is in itself correct, but in terms of open
|systems is bogus.
|
|Sure they're all UNIX, but you can't take an app from a Sun (for example) and
|run it on a DEC or HP or IBM.
But you can run it on a few dozen other SPARC based machines, even Sun
have to get their machines tested for SPARC compliance by somebody
else.
--
Phillip Fayers email: fay...@cardiff.ac.uk
Sun admin/support/programming phone: 0222 874000 x 5282 (UK)