Wellllll... Microsoft has announced that on May 20th, Bill Gates will go
national via satellite to explain the "new" strategy for scaling Windows
NT onto high performance systems.
The magazine "Communications Week" beat him to the punch in their March
24th issue by breaking the story of a new deal between MS and Hewlett
Packard in which, basically, Microsoft is going to port most of HPUX
(HP's UNIX) into Windows to get Win NT to scale better.
So basically, the next version of Windows NT will be.....UNIX!
Of course, Microsoft won't be honest enough to admit that they are
caving in, so there is sure to be a lot of smoke blown around to obscure
what they are doing.
Maybe instead of "Cairo", MS will call the newest Win NT "Boise" after
HP's home location?
Maybe the merger of HPUX and Windows NT will be called Win UX? Naaaaah!!
Who knows though, Win NT might just grow up to be a real operating
system now!
Why am I not surprised? When I first installed the NT Resource Kit and the Back Office
SDK, the first thing I noticed were ports of all my Unix favorites (grep, etc.), including
DNS.... guess even the MS developers need decent tools :-)
Jim Smith
M$'s clustering software, "Wolfpack" really being "Chihuahua Pack"...
>The magazine "Communications Week" beat him to the punch in their March
>24th issue by breaking the story of a new deal between MS and Hewlett
>Packard in which, basically, Microsoft is going to port most of HPUX
>(HP's UNIX) into Windows to get Win NT to scale better.
>So basically, the next version of Windows NT will be.....UNIX!
I'd imagine that the Win32 API would be implemented by some sort
of library or server running on top of the HP/UX kernel. The interesting
question is how much of the "native" HP/UX API will show through. Will it
simply be tasking and memory management? Or extra stuff like networking
and file management?
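To make the layering idea concrete, here is a minimal, purely hypothetical sketch in C
of what such a shim could look like: a Win32-flavored entry point implemented directly
over the POSIX calls an HP-UX kernel already exposes. The two-argument CreateFileA and
the constants are simplified for illustration and are not the real Win32 declarations.

    /* Hypothetical sketch only -- not Microsoft or HP code. A Win32-style
     * call implemented as a thin wrapper over the Unix system calls that
     * an underlying HP-UX kernel would already provide. */
    #include <fcntl.h>
    #include <unistd.h>

    typedef int HANDLE;                     /* illustrative, not the real Win32 type */
    #define INVALID_HANDLE_VALUE (-1)
    #define GENERIC_READ  0x80000000UL
    #define GENERIC_WRITE 0x40000000UL

    /* CreateFile reduced to two arguments; it just forwards to open(2). */
    HANDLE CreateFileA(const char *name, unsigned long access)
    {
        int flags, fd;

        if ((access & GENERIC_READ) && (access & GENERIC_WRITE))
            flags = O_RDWR | O_CREAT;
        else if (access & GENERIC_WRITE)
            flags = O_WRONLY | O_CREAT;
        else
            flags = O_RDONLY;

        fd = open(name, flags, 0666);       /* the "native" API showing through */
        return (fd < 0) ? INVALID_HANDLE_VALUE : (HANDLE)fd;
    }

    int main(void)
    {
        HANDLE h = CreateFileA("demo.txt", GENERIC_READ | GENERIC_WRITE);
        return (h == INVALID_HANDLE_VALUE) ? 1 : (close(h), 0);
    }

How much of the HP/UX personality "shows through" then comes down to how much more
than open/read/write such wrappers are allowed to pass along.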
>Of course, Microsoft won't be honest enough to admit that they are
>caving in, so there is sure to be a lot of smoke blown around to obscure
>what they are doing.
My guess is that Chairman Bill will brag about how
Unix-compatible WinNT has become.
>Maybe the merger of HPUX and Windows NT will be called Win UX? Naaaaah!!
As in Linux? :-)
--
Loren Petrich Happiness is a fast Macintosh
pet...@netcom.com And a fast train
My home page: http://www.webcom.com/petrich/home.html
Mirrored at: ftp://ftp.netcom.com/pub/pe/petrich/home.html
No, they'll call it MSUX.
--
thur Mail Address: LordA...@vt.edu or jmax...@vt.edu
n r
a JAMax "Satire is a sort of glass, wherein beholders do
h o w generally discover everybody's face but their own."
tan lle --Jonathan Swift
[On M$ using HP/UX as WinNT's new core so it can scale better...]
One of the interesting things here is how Unix has taken over
much of the computer industry, and how it seems poised for yet more
takeover -- all this for an OS (or at least a family of OSes) that has no
leading personality, or even organization, in control. Here's what I'm
familiar with, and what I recall about its Unix-compatibility.
OS/390 [IBM System 360/370/390 core OS] -- none? (legacy OS)
OS/400 [for AS/400s] -- ?
DEC VMS -- has POSIX layer (Unix-API-compatible) (legacy OS)
FreeBSD, NetBSD, etc. -- Unix
SunOS/Solaris -- Unix
HP/UX -- Unix
SCO -- Unix
Data General DG/UX -- Unix
DEC OSF/1 -- Unix
Xenix -- Unix
Apple A/UX -- Unix
IBM AIX -- Unix
Linux -- Unix
DOS -- none (legacy OS)
Windoze 3.x -- none (shell for DOS)
Windoze 95 -- none (cross between NT and a shell for DOS)
Windoze NT -- has POSIX layer (?)
OS/2 -- none (?)
the MacOS -- none (legacy OS)
the AmigaOS -- none (legacy OS)
NeXTStep -- Unix (OpenStep fits on top of various Unixes and also NT)
the BeOS -- has POSIX layer, bash shell, Unix-style directory syntax
GNU Hurd -- ?
Novell NetWare -- none
Several of these I have labeled "legacy OSes"; these are either
old Big Iron OSes (OS/390, OS/400, VMS) or are hangovers from small
systems (DOS, the MacOS, the AmigaOS), whose ultimate fate is likely to be
hosted by various Unixes or NT:
DOS:
NTVDM in NT
the MacOS:
the Blue Box in PowerPC NeXTStep (Rhapsody)
Fredlabs's VirtualMac in the BeOS
the AmigaOS:
the A\Box is to run a version of Unix and host the AmigaOS in some way
> [On M$ using HP/UX as WinNT's new core so it can scale better...]
OS/390 [IBM System 360/370/390 core OS] -- none? (legacy OS)
OS/400 [for AS/400s] -- ?
DEC VMS -- has POSIX layer (Unix-API-compatible) (legacy OS)
FreeBSD, NetBSD, etc. -- Unix
SunOS/Solaris -- Unix
HP/UX -- Unix
SCO -- Unix
Data General DG/UX -- Unix
DEC OSF/1 -- Unix
Xenix -- Unix
Apple A/UX -- Unix
IBM AIX -- Unix
Linux -- Unix
DOS -- none (legacy OS)
Windoze 3.x -- none (shell for DOS)
Windoze 95 -- none (cross between NT and a shell for DOS)
Windoze NT -- has POSIX layer (?)
OS/2 -- none (?)
the MacOS -- none (legacy OS)
the AmigaOS -- none (legacy OS)
NeXTStep -- Unix (OpenStep fits on top of various Unixes and also NT)
the BeOS -- has POSIX layer, bash shell, Unix-style directory syntax
GNU Hurd -- ?
Novell NetWare -- none
Here's some more:
Cray UNICOS -- Unix
SGI Irix -- Unix
Just more evidence of the Great Unix Takeover :-)
If it is true, which I doubt, it is also sad, because HP-UX is not
threaded (at least 10.20 is not) and does not scale well beyond 8
CPUs.
Regards
Andrew Harrison
Senior Consultant SunUK
>
> [On M$ using HP/UX as WinNT's new core so it can scale better...]
>
> One of the interesting things here is how Unix has taken over
> much of the computer industry, and how it seems poised for yet more
> takeover -- all this for an OS (or at least a family of OSes) that has no
> leading personality, or even organization, in control. Here's what I'm
> familiar with, and what I recall about its Unix-compatibility.
>
>
> OS/390 [IBM System 360/370/390 core OS] -- none? (legacy OS)
> OS/400 [for AS/400s] -- ?
> DEC VMS -- has POSIX layer (Unix-API-compatible) (legacy OS)
[snip]
> Several of these I have labeled "legacy OSes"; these are either
> old Big Iron OSes (OS/390, OS/400, VMS)
May I kindly suggest you leave VMS, or as it is now called, OpenVMS,
out of the group of legacy OSes. OpenVMS on Alpha is about
to get new clustering legs that Unix can't match. Digital is calling
it "Galaxies Software Architecture". It will allow their
next generation 21264 based Wildfire (32 CPU) server to contain
8 VMS nodes of 4 CPUs each. 8 copies of VMS running inside
that machine clustered over a high-speed crossbar.
Digital in December stated (at DECUS) Galaxies + Wildfire will do
over one million "tpcs". Current high-end numbers are in
the 20 to 30 thousand range. Look for Galaxies early 1998.
Search http://www.computerworld.com/ or http://www.infoworld.com/
for "wildfire" to see a smidgen of detail.
If the OS can scale that high, doesn't it make the others legacy
OSes?
Rob
> One of the interesting things here is how Unix has taken over
> much of the computer industry, and how it seems poised for yet more
> takeover -- all this for an OS (or at least a family of OSes) that has no
> leading personality, or even organization, in control.
Is it really that surprising? The core of Unix has consistently been
the best designed OS core, no matter how distasteful you may find
command lines and X. Features of modern OS kernels showed up primarily
in Unix first. Are you surprised that OS features from higher-end
workstations--which almost exclusively run variants of Unix--are what
eventually find their way onto PCs?
I suppose it just works better to take a system that was designed right
the first time than it does to hack at an old one. (And that goes for
interfaces too, which is why I like Rhapsody so much. A Unix core with
an Apple/Nextstep interface should come close to providing the best of
both worlds even if it doesn't take a giant leap forward in either
area. I imagine I'm not the only one who pondered the possibility of
putting a Mac interface on a Unix core several years ago.)
--
Eric Bennett ( er...@pobox.com ; http://www.pobox.com/~ericb )
Sixty-seven percent of the doctors surveyed preferred X to Y. (Jones
couldn't be persuaded.)
-John Allen Paulos
Well that little tidbit of information makes me BELIEVE it is true. If
HP-UX is not really "all that great as unix goes" as your post seems to
imply, then that is ALL the MORE reason for Microsoft to integrate it into
NT. Why would MS want to make something work "REALLY WELL" the FIRST
TIME??
Obviously, get the "unix" NAME to SELL the IDEA that the next NT/UNIX
combination platform will do "REALLY WELL" in the high end, but obviously
make sure it doesn't live up to expectations, because, well, as IBM found
out with OS/2, if you make something that works pretty well, then people
don't upgrade as often.
Chris J. Alumbaugh
********************************
As it is SPOKEN.. so it appears
OS/2 Warp 4.0 and Voice Dictation
********************************
* my email address in the header has been hacked*
* please shift all letters 1 letter to the right*
*to decipher correct address to reply by email*
>In comp.os.ms-windows.nt.advocacy unique cat <uni...@olg.com> wrote:
>] Here's an interesting item.
>]
>] [...] the story of a new deal between MS and Hewlett Packard in
>] which, basically, microsoft is going to port most of HPUX (HP's
>] UNIX) into windows to get Win NT to scale better.
>]
>] Maybe the merger of HPUX and Windows NT will be called Win UX?
>] Naaaaah!!
>No, they'll call it MSUX.
Argh. Why HPUX? Grrrr... Hewlett Packard should have been named
Packard Hewlett so HPUX would be called PHUX. Ok, so HP/UX isn't my
favorite when compared to SunOS or even AIX. :)
--
Wayne Hyde | System Administrator
wjh...@cise.ufl.edu | Delete 'SPAM' from my address before replying.
http://coop.wec.ufl.edu/wjh | I speak for me, not my employers
How will this compare to the Tera MTA, running a microkernel
based Unix? http://204.118.137.100/SystemCharacteristics.html
This computer will also have 32 CPUs .. or 256 .. running *one*
copy of Unix with *one* memory as opposed to 8 copies of VMS
(with 8 memories I assume). With compiler support for
fine-grained parallelism (4 + 2-per-thread instruction overhead for
creating new threads for this purpose).
From their PR it sounds like a better approach than clustering.
I don't know if the machines are at all comparable though.
Here's a quote from their web page:
"[...] or cluster computers. Regardless of the name, they all
suffer the same basic problem: a truly horrible programming
model. "
I guess they don't think clustering is worth beans... ;)
] Search http://www.computerworld.com/ or http://www.infoworld.com/
] for "wildfire" to see a smidgen of detail.
]
] If the OS can scale that high, doesn't it make the others legacy
] OSes?
Unix will never die, therefore VMS is legacy.
Are you kidding? He'll brag about how WinNT-compatible Unix has become.
--> Kent.
>In article <3344AC...@olg.com>, unique cat <uni...@olg.com> wrote:
>>Here's an interesting item.
>
> M$'s clustering software, "Wolfpack" really being "Chihuahua Pack"...
>
>>The magazine "Communications Week" beat him to the punch in their March
>>24th issue by breaking the story of a new deal between MS and Hewlett
>>Packard in which, basically, Microsoft is going to port most of HPUX
>>(HP's UNIX) into Windows to get Win NT to scale better.
>
>>So basically, the next version of Windows NT will be.....UNIX!
>
> I'd imagine that the Win32 API would be implemented by some sort
>of library or server running on top of the HP/UX kernel. The interesting
>question is how much of the "native" HP/UX API will show through. Will it
>simply be tasking and memory management? Or extra stuff like networking
>and file management?
>
>>Of course, Microsoft won't be honest enough to admit that they are
>>caving in, so there is sure to be a lot of smoke blown around to obscure
>>what they are doing.
>
> My guess is that Chairman Bill will brag about how
>Unix-compatible WinNT has become.
Bullseye! Having ridiculed Apple for moving to OpenStep, now the
M$ minions have decided the future of computing is to take a Unix
kernel and slap a cool API and GUI on top of it. Once again, Apple
leads the way (at least in the mainstream), but watch M$ take the credit.
But still, this new Windows direction leaves me 'confused'.... ;)
>>Maybe the merger of HPUX and Windows NT will be called Win UX? Naaaaah!!
>
> As in Linux? :-)
No, as in WindowsUX- which we knew all along. ;) After all, if WindowsNoT isn't
crappy enough for you, WindowsUX sure will be!
Allan
>--
>Loren Petrich Happiness is a fast Macintosh
>pet...@netcom.com And a fast train
>My home page: http://www.webcom.com/petrich/home.html
>Mirrored at: ftp://ftp.netcom.com/pub/pe/petrich/home.html
--
aguyton,comet.net
Send mail to me @ comet.net Fight the spammers!
I believe that OS/390 now supports a fairly complete Unix API
natively - it could almost be considered a version of Unix now (with
lots of legacy extensions).
For that matter, OS/2 supports a large number of Windows API
functions natively as well.
--
John Bayko (Tau).
ba...@cs.uregina.ca
http://www.cs.uregina.ca/~bayko
> I believe that OS/390 now supports a fairly complete Unix API
>natively - it could almost be considered a version of Unix now (with
>lots of legacy extensions).
So OS/390, like VMS, might qualify as a variety of Unix on the
basis of API compatibility?
There are a host of possible incompatibilities:
EBCDIC vs. ASCII
Filesystem features: to Unix, every file is an unstructured stream, while
the file structure I knew from VM/CMS in the 1980's was record-based:
records could be either fixed-length or variable-length. [VMS is also
that way; but I remember when support for Unixish streams was added]
Also, filename conventions are different. VM/CMS uses 8.8 (sort of like
DOS 8.3), for example.
Tasking/threading: I'm not sure to what extent OS/390 supports threads.
[VMS now supports "DECThreads"]
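For the first incompatibility above, a minimal sketch (assuming a POSIX iconv(3)
implementation; the charset names "IBM-037" and "US-ASCII" differ between platforms and
are placeholders) of the translation every piece of text crossing an EBCDIC/ASCII
boundary needs:

    #include <iconv.h>
    #include <stdio.h>

    int main(void)
    {
        /* Convert from an EBCDIC code page to ASCII; names vary by platform. */
        iconv_t cd = iconv_open("US-ASCII", "IBM-037");
        char ebcdic[] = { (char)0xC8, (char)0x89, 0 };   /* "Hi" in EBCDIC cp037 */
        char ascii[16];
        char *in = ebcdic, *out = ascii;
        size_t inleft = 2, outleft = sizeof ascii;

        if (cd == (iconv_t)-1) { perror("iconv_open"); return 1; }
        if (iconv(cd, &in, &inleft, &out, &outleft) == (size_t)-1)
            perror("iconv");
        *out = '\0';
        printf("%s\n", ascii);                           /* prints "Hi" */
        iconv_close(cd);
        return 0;
    }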
One copy of UNIX? Hmmmm, how fast can that ONE copy
of UNIX keep track of resources? The 8 copies of VMS will be working
*somewhat* independent of each other. Things like low-level
interrupts in a single OS will be somewhat constraining, no?
One copy of an OS can only context switch so fast, eh?
So how is Galaxies memory/file locking tracked? (Resource contention).
Via a much extended Distributed Lock Manager.
(Please note, that is a guess on my part as
to the "how is it done?" question. Public information detailing
all the ins and outs of Galaxies may be a year away.)
> From their PR it sounds like a better approach than clustering.
> I don't know if the machines are at all comparable though.
>
Yes. And it remains to be seen whether Sybase, Oracle, Informix, et al.
develop specifically to take advantage of Tera. I do know that
Sybase and Oracle are EXTREMELY excited about Galaxies given:
> ] Digital in December stated (at DECUS) Galaxies + Wildfire
> ] will do over one million "tpcs". Current high-end
> ] numbers are in the 20 to 30 thousand range. Look for
> ] Galaxies early 1998.
>
One million is a much larger number than 30,000.
> Here's a quote from their web page:
>
> "[...] or cluster computers. Regardless of the name, they all
> suffer the same basic problem: a truly horrible programming
> model. "
>
Check that quote. Maybe it is older than the Galaxies
revelation in December 1996??
> I guess they don't think clustering is worth beans... ;)
>
Yeah. Very nice posturing on their part.
The basic problem with large business applications has always
been I/O. Tera addresses the scientific/engineering problem
of launching multiple threads of "calculation". The more
creative approaches today for LARGE RDBMS are to throw tons
of memory (Gigabytes) at it. The problem with that though
is accessing the memory. As the numbers of CPUs increase,
memory latency increases. Sun with 64 CPUs has 400 to 500
nsec latency. SGI is taking a NUMA approach. SGI seems
to do better in scientific/engineering. The Sun
UltraEnterprise 10000 based on Cray cross-bar technology
does well in both. There has been a raging discussion off
and on in comp.arch. Search the dejanews archive if interested.
The Galaxies approach is described as "NUMA-like"
in the InfoWorld article. Really is the way it should work
don't you suppose? Node A sharing Node B memory. Now
remove the traditional cluster bottleneck of the highspeed
network (whether HIPPI or FDDI or Myrinet) and have the
nodes communicate over a VERY high bandwidth, extremely
LOW latency cross-bar (very similar to UE 10000 cross-bar
but NEWER research ;-) ) Of course the OS MUST be capable of
extending into such a realm. True file locking, memory sharing, etc.
Whoola! One million tpcs with a much extended VMS!
Sidenote:
For years VMS was mocked for its method of file
access through RMS (Record Management Services). True
it can be somewhat slower to go through another layer
to access files. UNIX file access has been unfettered.
Raw. Stream files. None of these goofy file types of
VMS. None of the goofy overhead of RMS. But wait,
RMS allows you to lock files or records!
Who cares? We are UNIX. We will handle the locking.
Thank you very much.
Okay. We are VMS. We have something on the horizon
that says you better stick locking in the core of
your OS as a layer. You can't catch us otherwise
as our OS will now communicate with other copies of
our OS at a MUCH improved speed. Our locking overhead
is going to go WAY down. So sorry Mr. UNIX.
> ] Search http://www.computerworld.com/ or http://www.infoworld.com/
> ] for "wildfire" to see a smidgen of detail.
> ]
> ] If the OS can scale that high, doesn't it make the others legacy
> ] OSes?
>
> Unix will never die, therefore VMS is legacy.
>
"It is better to burn out, than to fade away. My my hey hey."
Neil Young
Yeah, legacy. Rather embarrassing that an old legacy VMS
will smoke an old legacy UNIX.
Researchers will probably discover people using MS-DOS twenty
years from now. Guess it won't die either. Just shrink to
nothing ;-).
Rob
"UNIX will be preempted by NT and its new partners. UNIX doesn't know it yet
-- it won't notice until it's too late, because UNIX is the Yugoslavia of
software, at war with itself -- but it's all over. I have IS manager sources
who refer to UNIX as legacy systems." -- Paul Cubbage, Dataquest
WNT's POSIX layer is virtually useless; if your program uses it,
then it can't use the WNT API.
OS/2 has partial POSIX compatibility. I've ported several
applications developed under Linux and Solaris to OS/2 without having to
modify code.
>the AmigaOS:
>the A\Box is to run a version of Unix and host the AmigaOS in some way
And don't forget pOS, which supports the AmigaOS API natively.
(qv, www.pios.de)
Just a few nits to pick there..
--
>
> [On M$ using HP/UX as WinNT's new core so it can scale better...]
> One of the interesting things here is how Unix has taken over
> much of the computer industry, and how it seems poised for yet more
> takeover -- all this for an OS (or at least a family of OSes) that has no
> leading personality, or even organization, in control. Here's what I'm
> familiar with, and what I recall about its Unix-compatibility.
You're another of the PC mouths in these groups who do NOT know
the capabilities of current MVS, VMS or AS/400 systems.
You know - the machines that DO run the world's crucial systems!
Take it to the Linux advocacy groups - but leave it out of here, please!
Luke
[ DELETED ]
>Maybe instead of "Cairo", MS will call the newest Win NT "Boise" after
>HP's home location?
Not to nitpick, but HP's headquarters (I think this is what you mean by
"home location") is Palo Alto, CA. HPUX is based in Cupertino, CA, and
Boulder, CO.
Josef
> OS/390 [IBM System 360/370/390 core OS] -- none? (legacy OS)
> OS/400 [for AS/400s] -- ?
> DEC VMS -- has POSIX layer (Unix-API-compatible) (legacy OS)
> FreeBSD, NetBSD, etc. -- Unix
> SunOS/Solaris -- Unix
> HP/UX -- Unix
> SCO -- Unix
> Data General DG/UX -- Unix
> DEC OSF/1 -- Unix
> Xenix -- Unix
> Apple A/UX -- Unix
> IBM AIX -- Unix
> Linux -- Unix
> DOS -- none (legacy OS)
> Windoze 3.x -- none (shell for DOS)
> Windoze 95 -- none (cross between NT and a shell for DOS)
> Windoze NT -- has POSIX layer (?)
> OS/2 -- none (?)
> the MacOS -- none (legacy OS)
> the AmigaOS -- none (legacy OS)
> NeXTStep -- Unix (OpenStep fits on top of various Unixes and also NT)
> the BeOS -- has POSIX layer, bash shell, Unix-style directory syntax
> GNU Hurd -- ?
> Novell NetWare -- none
> Cray UNICOS -- Unix
> SGI Irix -- Unix
Plan 9 -- Unix 2. The Revenge.
--
:sb)
> One of the interesting things here is how Unix has taken over
>much of the computer industry, and how it seems poised for yet more
>takeover -- all this for an OS (or at least a family of OSes) that has no
>leading personality, or even organization, in control. Here's what I'm
>familiar with, and what I recall about its Unix-compatibility.
>
>OS/2 -- none (?)
True for Warp 4 as packaged and distributed, but with emx and gcc you
can port Posix apps relatively "easily" (from what I understand).
That's why a whole pile of Unix utilities (including a bunch of shells
like tcsh and bash, a whole bunch of gnu utils, a number of net clients
like slrn/trn/tin/pine/lynx/ncftp, etc., and XFree86) have been ported
to OS/2. There's also an OS/2 IFS for ext2fs for folks that need to
get at those Linux filesystems from OS/2. :-)
>the MacOS -- none (legacy OS)
A/UX and MkLinux both run on Mac hardware (whether on both 68k and PPC
or just one or the other I dunno).
--
-Rich Steiner >>>---> rste...@skypoint.com >>>---> Bloomington, MN
Written online using SLRN + FTE under OS/2 Warp 4
The Theorem Theorem: If If, Then Then
>Rob Young <you...@eisner.decus.org> wrote:
>] pet...@netcom.com (Loren Petrich) writes:
>] >
>] > [On M$ using HP/UX as WinNT's new core so it can scale better...]
>] >
>] > One of the interesting things here is how Unix has taken over
>] >
>] > DEC VMS -- has POSIX layer (Unix-API-compatible) (legacy OS)
>]
>] Mind kindly suggest you leave VMS, or as it is now called
>] OpenVMS out of the group of legacy OSes. OpenVMS on
>] Alpha is about to get new clustering legs that Unix can't
>] match. Digital is calling it "Galaxies Software
>] Architecture". It will allow their next generation 21264
>] based Wildfire (32 CPU) server to contain 8 VMS nodes of
>] 4 CPUs each. 8 copies of VMS running inside that machine
>] clustered over a high-speed crossbar.
>]
>] Digital in December stated (at DECUS) Galaxies + Wildfire
>] will do over one million "tpcs". Current high-end
>] numbers are in the 20 to 30 thousand range. Look for
>] Galaxies early 1998.
>
>How will this compare to the Tera MTA, running a microkernel
>based Unix? http://204.118.137.100/SystemCharacteristics.html
>
>This computer will also have 32 CPUs .. or 256 .. running *one*
>copy of Unix with *one* memory as opposed to 8 copies of VMS
>(with 8 memories I assume). With compiler support for
>fine-grained parallelism (4 + 2-per-thread instruction overhead for
>creating new threads for this purpose).
>
>From their PR it sounds like a better approach than clustering.
>I don't know if the machines are at all comparable though.
So what happens when that memory dies? On a clustered machine the
rest keep going.
What if you want to upgrade the OS? On a clustered machine you
upgrade them one at a time and the system stays up.
There are VMS clusters out there that haven't been rebooted in 10
years, including OS and software upgrades. Try that without
clustering.
John Wiltshire
>Here's an interesting item.
>Everyone probably knows the problems that Microsoft has had in trying to
>scale Windows NT to run on large, high-throughput systems. The task
>threading scheme of WNT starts to cause negative performance hits for
>each CPU added beyond 4 in an SMP architecture, the clustering
>technology named "wolfpack" is falling apart (it has been dubbed
>chihuahua pack by the popular press) and the WNT network directory
>scheme dies with over 200 users.
I hadn't heard that wolfpack was falling apart?
>Wellllll... Microsoft has announced that on May 20th, Bill Gates will go
>national via satellite to explain the "new" strategy for scaling Windows
>NT onto high performance systems.
>
>The magazine "Communications Week" beat him to the punch in their March
>24th issue by breaking the story of a new deal between MS and Hewlett
>Packard in which, basically, Microsoft is going to port most of HPUX
>(HP's UNIX) into Windows to get Win NT to scale better.
>
>So basically, the next version of Windows NT will be.....UNIX!
>
>Of course, Microsoft won't be honest enough to admit that they are
>caving in, so there is sure to be a lot of smoke blown around to obscure
>what they are doing.
>
>Maybe instead of "Cairo", MS will call the newest Win NT "Boise" after
>HP's home location?
>
>Maybe the merger of HPUX and Windows NT will be called Win UX? Naaaaah!!
>
>Who knows though, Win NT might just grow up to be a real operating
>system now!
Sounds like the classic Pentium mistake of '2+2 = 5.6e10'. You can
read way too much into MS leaks - most of which exist to put the fear
of god into the competition and force them to make mistakes or lose
money.
John Wiltshire
>In message <3344FA...@uk.sun.com> - Andrew Harrison
><andrew....@uk.sun.com> writes:
>:>If it is true, which I doubt, it is also sad, because HP-UX is not
>:>threaded (at least 10.20 is not) and does not scale well beyond 8
>:>CPUs.
>Well that little tidbit of information makes me BELIEVE it is true. If
>HP-UX is not really "all that great as unix goes" as your post seems to
>imply, than that is ALL the MORE reason for Microsoft to integrate it into
>NT. Why would MS want to make something work "REALLY WELL" the FIRST
>TIME??
Heh - what would actually be much easier and better would be to simply
build a HP-UX virtual machine to sit on top of the NT executive...
Many people don't know that Win32 isn't the native NT system-call interface -
the native calls are all NtXXXX() - the Win32 subsystem is just that -
a DLL that maps calls to the real system calls. So it would be quite
possible to graft HP-UX (or anything else) onto the top of NT.
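A quick way to see that layering for yourself, as a small sketch assuming a Win32 build
environment: only the documented GetModuleHandle/GetProcAddress calls are used, and the
Nt* exports are merely looked up, not called, since their signatures are undocumented.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* ntdll.dll is mapped into every Win32 process; its Nt* exports are
         * the real system-call stubs that kernel32's Win32 functions sit on. */
        HMODULE ntdll = GetModuleHandle("ntdll.dll");
        const char *names[] = { "NtCreateFile", "NtClose", "NtQuerySystemInformation" };
        int i;

        if (ntdll == NULL) {
            fprintf(stderr, "ntdll.dll not found?\n");
            return 1;
        }
        for (i = 0; i < 3; i++)
            printf("%-26s -> %p\n", names[i], (void *)GetProcAddress(ntdll, names[i]));
        return 0;
    }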
BTW, you seem to believe lots of things - sounds like one of the
sillier net.rumors I've encountered.
David LeBlanc |Why would you want to have your desktop user,
dleb...@mindspring.com |your mere mortals, messing around with a 32-bit
|minicomputer-class computing environment?
|Scott McNealy
> Plan 9 -- Unix 2. The Revenge.
Maybe if it were free. I really don't feel like shelling out $350.
-brian
>Many people don't know that Win32 isn't the native NT system calls -
>the native calls are all NtXXXX() - the Win32 subsystem is just that -
>a DLL that maps calls to the real system calls. So it would be quite
>possible to graft HP-UX (or anything else) onto the top of NT.
Do you know where I can find documentation for the native Nt*() API's?
Thanks,
Darwin Ouyang
From what I understand it is a highly parallel kernel.
If one OS is doing 8 things at a time, how is that worse than
8 OSs doing 1 thing at a time?
Each CPU can have up to 128 threads running at once, so context
switching is not a problem.
] Sidenote:
] For years VMS was mocked for its method of file
] access through RMS (Record Management Services).
] True it can be somewhat slower to go through another
] layer to access files. UNIX file access has been
] unfettered. Raw. Stream files. None of this goofy
] file types of VMS. None of the goofy overhead of
] RMS. But wait, RMS allows you to lock files or
] records!
]
] Who cares? We are UNIX. We will handle the locking.
] Thank you very much.
]
] Okay. We are VMS. We have something on the horizon
] that says you better stick locking in the core of
] your OS as a layer. You can't catch us otherwise as
] our OS will now communicate with other copies of our
] OS at a MUCH improved speed. Our locking overhead
] is going to go WAY down. So sorry Mr. UNIX.
I do not see how VMS-style locking cannot be done with Unix.
Some programs can make assumptions about locking that could greatly
increase their speed over VMS if specifically optimized for Unix.
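For instance, byte-range (effectively record-level) advisory locking already exists in
Unix via fcntl(2), with no RMS-like layer involved. A minimal sketch; whether such locks
coordinate across cluster nodes is a separate question and depends on the filesystem
(e.g. lockd over NFS):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("datafile", O_RDWR | O_CREAT, 0644);
        struct flock lk;

        if (fd < 0) { perror("open"); return 1; }

        lk.l_type   = F_WRLCK;        /* exclusive lock                 */
        lk.l_whence = SEEK_SET;
        lk.l_start  = 100 * 80;       /* "record" 100, 80 bytes each    */
        lk.l_len    = 80;             /* lock just that one record      */
        if (fcntl(fd, F_SETLKW, &lk) < 0) { perror("fcntl"); return 1; }

        /* ... update the record here ... */

        lk.l_type = F_UNLCK;          /* release it                     */
        fcntl(fd, F_SETLK, &lk);
        close(fd);
        return 0;
    }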
] Yeah, legacy. Rather embarrassing that an old legacy VMS
] will smoke an old legacy UNIX.
Possibly.. no need to place bets now though.
--
thur Mail Address: LordA...@vt.edu or jmax...@vt.edu
n r
a JAMax "A good style should show no sign of effort. What
h o w is written should seem a happy accident."
tan lle --Somerset Maugham
<snip>
Ahum, there is a leading organisation, called OSF (Open Software
Foundation) last time I checked (it's not always easy tracing the U*ix
trademark owner du jour ;-)). They decide whether your system is a Unix
system or not. F.i., Linux is NO Unix AFAIK. But .. surprise surprise,
S/390 *is* Unix branded since last year (now this is actually not good
marketing for S/390, with an MTTF of 25 years, while a U*ix certainly
doesn't have this reliability).
--
Pieter Dubois
___________________________________
Disclaimer ... which Disclaimer?
Didn't you read my name?
WARNING : E-mail address prohibited for usage in mailing lists without
strict permission of owner (me).
Unsolicited usage obliges the offender to pay me $1000 for each use.
It uses a custom controller to route memory requests, so I assume
it could be made to not use that memory while it was replaced. I
don't know if it does this or not.
That is a good point though.
] What if you want to upgrade the OS? On a clustered machine you
] upgrade them one at a time and the system stays up.
And another.
] There are VMS clusters out there that haven't been rebooted in
] 10 years, including OS and software upgrades. Try that without
] clustering.
These are good points, but I fail to see how they show that VMS
is faster.
] John Wiltshire
--
thur Mail Address: LordA...@vt.edu or jmax...@vt.edu
n r
>> One of the interesting things here is how Unix has taken over
>>much of the computer industry, ...
>>OS/2 -- none (?)
>True for Warp 4 as packaged and distributed, but with emx and gcc you
>can port Posix apps relatively "easily" (from what I understand).
>That's why a whole pile of Unix utilities ...
Is there some sort of Posix library that one can link to that
allows one to do that with a minimum of code rewriting?
>>the MacOS -- none (legacy OS)
>AU/X and MkLinux both run on Mac hardware (whether on both 68k and PPC
>or just one or the other I dunno).
A/UX runs on 68K Macs, MkLinux runs on PowerMacs
A/UX can host the MacOS as a Unix process; that is in principle possible
for MkLinux, but there does not appear to be any effort to write a MacOS
host for it (there are similar such hosts in the works for the BeOS
[fredlabs's VirtualMac] and Rhapsody [the Blue Box]).
So it appears that the MacOS's fate is to be hosted by various
versions of Unix (or at least Unix-compatible OSes). Just more evidence
of the takeover. :-)
:>>Well that little tidbit of information makes me BELIEVE it is true. If
snip
:>BTW, you seem to beleive lots of things - sounds like one of the
:>sillier net.rumors I've encountered.
Maybe you shouldn't take everything you read so literally??
>The magazine "Communications Week" beat him to the punch in their March
>24th issue by breaking the story of a new deal between MS and Hewlett
>Packard in which, basically, Microsoft is going to port most of HPUX
>(HP's UNIX) into Windows to get Win NT to scale better.
>So basically, the next version of Windows NT will be.....UNIX!
My reading of the pact suggests something quite different. There's
not a word in the announcement indicating that NT is moving toward UNIX;
rather, they recognize that NT is not Enterprise-ready and they need help
making it so. HP, on the other hand, has major strengths in the Enterprise,
and they're now ready to put those talents and resources to work strengthening NT -
and thus legitimizing NT - in the Enterprise.
HP is weakening their commitment to their own OpenMail product in
favor of Exchange, supporting SMS through its OpenView, and offering service
and support for NT that Microsoft is simply incapable of providing. And what
is Microsoft doing to support HP and its products? Essentially nothing. HP
is attempting to go Digital one better; Digital, like Intergraph before it,
is betting their future on the success of NT.
Quite tellingly, HP's work with SCO to develop a next-generation
UNIX for Intel and PA/Risc got nary a mention in the announcement of this
grand alliance.
>Of course, Microsoft won't be honest enough to admit that they are
>caving in, so there is sure to be a lot of smoke blown around to obscure
>what they are doing.
>Maybe instead of "Cairo", MS will call the newest Win NT "Boise" after
>HP's home location?
Actually, I believe HP's headquarters are in Palo Alto, California.
Boise is the home of their laser printer operations. What Microsoft *is*
doing is buying some high profile UNIX talent, and encouraging others to
move their tools and products to NT.
>Maybe the merger of HPUX and Windows NT will be called Win UX? Naaaaah!!
>Who knows though, Win NT might just grow up to be a real operating
>system now!
It probably will, with a little help from their "friends".
Well, at present UNIX is just a set of specifications - no matter how
they are implemented. It is not a particular OS. It is a minimum requirement
for an OS to be named UNIX.
> OS/390 [IBM System 360/370/390 core OS] -- none? (legacy OS)
Branded as UNIX 95.
Actually, from certain point of view OS/390 is more UNIX than,
for example, Solaris :)
...
> FreeBSD, NetBSD, etc. -- Unix
They were designed based on USL source code (a long time ago),
but they are not UNIX-branded OSes.
...
> Linux -- Unix
Strictly speaking - it is not.
AFAIK, Lasermoon was working on Linux-FT which they planned
to brand as UNIX sometime in future.
...
> Windoze NT -- has POSIX layer (?)
Compliance with POSIX does not imply that it is a UNIX system.
Softway is working on OpenNT (based on Microsoft POSIX subsystem code)
for NT, which they plan to brand as UNIX.
Alex Kompel.
shu...@sequoiap.com
It is already done. It is called WISE (Windows Interface Source
Environment).
There are a couple of products I am aware of: Wind/U from Bristol Technology
and MainWin from MainSoft.
Just wait until Monday, Jan 18, 2038, a couple of seconds after
7:14 PM PST. Any Unix machines running at that time
will go haywire as the system time word goes negative.
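A minimal demonstration of that rollover in C, assuming a 32-bit signed time word (the
overflowing addition is technically undefined behaviour, but on the two's-complement
machines in question it wraps as shown):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* 2^31 - 1 seconds after the epoch: 03:14:07 UTC, 19 Jan 2038. */
        time_t last = (time_t)2147483647;
        printf("last 32-bit second: %s", ctime(&last));

        /* What a 32-bit signed time word does one second later: */
        {
            int t32 = 2147483647;
            time_t wrapped;
            t32 = t32 + 1;               /* wraps to -2147483648 ...      */
            wrapped = (time_t)t32;       /* ... i.e. 13 December 1901 UTC */
            printf("one second later:   %s", ctime(&wrapped));
        }
        return 0;
    }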
Tim. (sho...@triumf.ca)
Correction --- any _32 bit_ UNIX machine running at that time will experience
some sort of problem. Given that 2038 is more than 40 years away, there is
hope that very few machines will still be 32 bit at the time. Think about
it --- that would be comparable to today's machines running the 1957 Univac
OS ;-)
Bernie
--
============================================================================
"It's a magical world, Hobbes ol' buddy...
...let's go exploring"
Calvin's final words, on December 31st, 1995
You have to get a source license 8-(
Some of it is documented in the DDK - they haven't been very nice
about documenting what you need to build a new subsystem, but the
folks at Softway have done it. I have heard that they are working on
buttoning those areas down enough to publish the API.
The problem is that as soon as they publish it, they are locked in -
if it is undocumented, they can change it all over the place and not
worry about it. IMHO, they'd do well to encourage people to write
more subsystems - I think I might shell out $200 or so for a Mac
subsystem, and I have this weird idea that it would be fun to make a
Linux subsystem (yes, I know this indicates I just might be crazy...).
You can easily see the calls by doing a dumpbin /exports on ntdll.dll --
there are 225 in that one alone.
Granted, a single system can have many applications running at once.
In Tera's case threading shines like no other, lowering memory
contention, etc.
However, the UNIX variant running on a Tera machine must still
contend with the mundane system tasks of swapping processes
for instance, dispatching I/O etc. Tracking all that. I don't
care how wonderfully parallel certain aspects are. Certain
system level tasks don't parallelize well at all. How many
schedulers does this Tera machine have? One.
8 OSs can be doing more things at once. 8 schedulers ;-).
> ] Sidenote:
> ] For years VMS was mocked for its method of file
> ] access through RMS (Record Management Services).
> ] True it can be somewhat slower to go through another
> ] layer to access files. UNIX file access has been
> ] unfettered. Raw. Stream files. None of this goofy
> ] file types of VMS. None of the goofy overhead of
> ] RMS. But wait, RMS allows you to lock files or
> ] records!
> ]
> ] Who cares? We are UNIX. We will handle the locking.
> ] Thank you very much.
> ]
> ] Okay. We are VMS. We have something on the horizon
> ] that says you better stick locking in the core of
> ] your OS as a layer. You can't catch us otherwise as
> ] our OS will now communciate with other copies of our
> ] OS at a MUCH improved speed. Our locking overhead
> ] is going to go WAY down. So sorry Mr. UNIX.
>
> I do not see how VMS-style locking cannot be done with Unix.
>
Oh? Give me an example. How about if I give you an example
that isn't so trivial in UNIX? In VMS, log onto node A,
log on to node B, edit a file on node A, try to copy that
same file on node B. Locking is built into the O/S. Not
a problem in UNIX, you can create something to mimic that.
Not in general. Granted, you can do a Distributed Lock
Manager in UNIX. One not nearly as robust to scale to
a Galaxies level as UNIX doesn't have the equivalent RMS
layer allowing for really fine-grained locking at a record
level. Never mind locking a file, that is trivial ;-).
> Some programs can make assumptions about locking that could greatly
> increase their speed over VMS if specifically optimized for Unix.
>
Now you are talking. Yes, use what is built in and program for
it. In UNIX, you have all the tools you need to create indexed
files if you wish. One problem, you are doing it as a work-around.
It is built into VMS and yes it causes overhead, hence sometimes
slower. But RMS is more than just overhead. With RMS I can
preallocate a 10000 block file at an application level and tell it to
be contiguous on disk and tell it to grow by 2500 blocks each
time it needs to grow and do writes in multiple blocks with multiple
buffers and many variations on that. And when Spiralog goes
production, turn on write caching and wait 30 seconds before
the writes are flushed to disk in one long stream (like writing
to an infinite log, all writes are written out with no seeks
to update file headers). You can't outwrite VMS. You can out lock
and unlock VMS, for now.
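For comparison, the nearest everyday Unix idiom to RMS-style preallocation -- a rough
sketch using lseek/write, granting the point that it guarantees neither contiguity nor a
per-extension growth size (a 512-byte "block" is assumed here purely for illustration):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define BLOCK      512L
    #define NUM_BLOCKS 10000L

    int main(void)
    {
        int fd = open("prealloc.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Seek to the last byte of the desired size and write one byte.  This
         * only reserves the file's length; on most Unix filesystems the blocks
         * in between stay unallocated (a sparse file) until actually written,
         * with no say over contiguity -- quite unlike RMS. */
        if (lseek(fd, NUM_BLOCKS * BLOCK - 1, SEEK_SET) < 0) { perror("lseek"); return 1; }
        if (write(fd, "", 1) != 1) { perror("write"); return 1; }

        close(fd);
        return 0;
    }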
Architecturally, it looked like UNIX was a win for the longest
time. Roaring from the back of the pack comes VMS though.
>
> ] Yeah, legacy. Rather embarassing that an old legacy VMS
> ] will smoke an old legacy UNIX.
>
> Possibly.. no need to place bets now though.
>
Why not? Digital doesn't blow engineering smoke. They
are currently sampling a 600 MHz 21164 at 18 SpecInt95
and 27 SpecFp95. They say the 21264 will do 40 SpecInt95
and 60 SpecFp95 at 600 MHz, you can bet on it.
If Digital says one million tpcs for Galaxies + Wildfire,
get an NDA and find out all the details. Very large VMS shops
are planning appropriately, budgets and all. Again, Oracle
and Sybase are psyched. You will know I am not pulling your
leg when you read Informix has a VMS port underway. Now, I
don't know that for *sure* but a BIG birdy strongly hinted that.
Betting Digital delivers one million tpcs would be a fairly
safe bet.
Rob
Who's to guarantee that the size of system time word has any direct
relation to the machine word size? Unix had a 32-bit time word on
16-bit machines. There are lots of commercial Unices out there with
32-bit time words on 64-bit machines. An incredible amount of software,
many file systems, and all sorts of databases have the 32-bit time
word etched into them - going to a 64-bit time word is not just a matter
of changing the header files and recompiling!!!
> Think about
>it --- that would be comparable to today's machines running the 1957 Univac
>OS ;-)
There's all sorts of machines today running a 1971 PDP-9 OS, and
stuck with all of its archaic structures and limitations. That OS is
Unix.
Tim. (sho...@triumf.ca)
The key part into which so much is being read is that HP will help Microsoft
with "clustering and high availability features" according to
Communications Week.
AFAIK, so far, it was Digital that was providing this.
-arun gupta
OS/390 is a combination of MVS plus Open Unix or SPEC 1170 APIs.
VM is capable of emulating AIX inside the mainframe for running Unix
applications.
>OS/400 [for AS/400s] -- ?
OS/400 as of release 3.1 and beyond has adopted 70% of Open Unix or SPEC
1170 calls, calls that IBM considered to be the most commonly used.
There may be plans of emulating an entire Unix personality on top of
OS/400 as part of the Workplace-like "Hydra" initiative.
Thus the Unixification of the world is actually much more extensive than
what you previously believed.
I predict that IBM will release a new server line, possibly called
AS/400 Model 600, or AS/600 or AS/6000. The Universal Server will
unite RS/6000 and AS/400 into a single line, and the server is capable
of running OS/400 or AIX by installation choice. The server will have
multiple CPUs using a new kind of 64 bit addressing PowerPC that is
descended from the PowerPC AS and the POWER3-PPC620-630 development
efforts.
>DEC VMS -- has POSIX layer (Unix-API-compatible) (legacy OS)
>FreeBSD, NetBSD, etc. -- Unix
>SunOS/Solaris -- Unix
>HP/UX -- Unix
>SCO -- Unix
>Data General DG/UX -- Unix
>DEC OSF/1 -- Unix
>Xenix -- Unix
>Apple A/UX -- Unix
>IBM AIX -- Unix
>Linux -- Unix
>DOS -- none (legacy OS)
>Windoze 3.x -- none (shell for DOS)
>Windoze 95 -- none (cross between NT and a shell for DOS)
>Windoze NT -- has POSIX layer (?)
>OS/2 -- none (?)
Has GNU-style utilities available as shareware, as well as an X server.
>the MacOS -- none (legacy OS)
>the AmigaOS -- none (legacy OS)
AmigaOS is said to have Unix-like qualities and Unix-like utilities available
as shareware, not to mention Unix variants that can run on the Amiga
hardware.
>NeXTStep -- Unix (OpenStep fits on top of various Unixes and also NT)
>the BeOS -- has POSIX layer, bash shell, Unix-style directory syntax
>GNU Hurd -- ?
>Novell NetWare -- none
>
> Several of these I have labeled "legacy OSes"; these are either
>old Big Iron OSes (OS/390, OS/400, VMS) or are hangovers from small
>systems (DOS, the MacOS, the AmigaOS), whose ultimate fate is likely to be
>hosted by various Unixes or NT:
>
>DOS:
>NTVDM in NT
>
>the MacOS:
>the Blue Box in PowerPC NeXTStep (Rhapsody)
>Fredlabs's VirtualMac in the BeOS
>
>the AmigaOS:
>the A\Box is to run a version of Unix and host the AmigaOS in some way
>--
As for Apple, regardless of who takes over, via NeXT-inspired coup, via
Oracle takeover, via Sun merger, etc., what is inevitable is the
Unixification of the Macintosh.
Note that MkLinux on the Macintosh came out unscathed in the technology
reorganization. Apple stopped their own AIX updates, but inevitably
when PPCP comes, Power Macs will be able to run AIX directly. IBM has a
lot to fear on their own RS/6000 line, if AIX gets to run on a dual
250MHz 604e Power Computing Power Tower Pro on PPCP at a much lower
price than an RS/6000 Model 43P. (Gad so many Power words and P letters
here.) Running AIX on a Power Mac or Mac clone may put many Macs into
businesses, considering the 10,000 vertical market apps AIX has.
Rgds,
Chris
"Devant le comportement irrationnel de sa machine, j'ai compris que se
poser en défenseur de Windows relève de la plus profonde bassesse. J'ai
honte" --- Eric Bernatchez, "La Presse" newspaper, "Cyberpresse"
column, March 22, 1997, Montreal, Canada.
"Confronted with his machines irrational behaviour, it dawned upon me
that taking the position of Windows advocate is of the lowest possible
ethics. I am ashamed".
***cro...@kuentos.guam.net***
Actually that would be PDP-7. And that UNIX is as much like today's
Unixes as you are like an amoeba. Or maybe a grasshopper.
>
>Tim. (sho...@triumf.ca)
--
("\''/").__..-''"`-. . Roberto Alsina
`9_ 9 ) `-. ( ).`-._.`) ral...@unl.edu.ar
(_Y_.)' ._ ) `._`. " -.-' Centro de Telematica
_..`-'_..-_/ /-'_.' Universidad Nacional del Litoral
(l)-'' ((i).' ((!.' Santa Fe - Argentina
>>OS/390 [IBM System 360/370/390 core OS] -- none? (legacy OS)
>OS/390 is a combination of MVS plus Open Unix or SPEC 1170 APIs.
>VM is capable of emulating AIX inside the mainframe for running Unix
>applications.
I presume that one has to recompile for the S/390 instruction
set, however. I wonder how long IBM will continue to support it in native
form, and not (say) as a PowerPC emulation.
>OS/400 as of release 3.1 and beyond has adopted 70% of Open Unix ...
>There may be plans of emulating an entire Unix personality on top of
>OS/400 as part of the Workplace like "Hydra" initiative.
>Thus the Unixification of the world is actually much more extensive than
>what you previously believed.
Interesting. I wonder where it's buried in IBM's website, because
I don't recall boasts of Unix-compatibility in the stuff I've read there
(mainly stuff that I did not need the search facility to find; I've used
it for some things, without much success).
[IBM someday releasing a server that's a cross between RS/6000 and
AS/400...]
Certainly possible, if internal politics does not get in the way.
>>the AmigaOS -- none (legacy OS)
>AmigaOS is said to have Unix like qualities and Unix like utilities on
>shareware, not to mention Unix variants that can run on the Amiga
>hardware.
The AmigaOS has preemptive multitasking, but only a single memory
space (I don't know if virtual memory was ever implemented for it). So
it's only halfway there :-)
>>the MacOS:
>>the Blue Box in PowerPC NeXTStep (Rhapsody)
>>Fredlabs's VirtualMac in the BeOS
>As for Apple, regardless of who takes over, via NeXT inspired coup, via
>Oracle takeover, via Sun merger, etc,. what is inevitable is the
>Unixification of the Macintosh.
At least with the "Unixness" well hidden :-)
>Note that MkLinux on the Macintosh came out unscathed in the technology
>reorganization. Apple stopped their own AIX updates, but inevitably
>when PPCP comes, Power Macs will be able to run AIX directly. ...
So the likely PPCP OSes are all Unix variants/compatibles (BeOS,
NeXT, AIX, MkLinux, possibly Solaris), with the exception of one legacy
OS (MacOS) and one different-just-to-be-different OS (WindozeNT) [for
instance, Winsock has only a few small differences from BSD Sockets,
differences that are alleged to be gratuitous.]
IBM has a
>lot to fear on their own RS/6000 line, if AIX gets to run on a dual
>250MHz 604e Power Computing Power Tower Pro on PPCP on a much lower
>price than an RS/6000 Model 43P. ...
One wonders if the RS/6000 division will drag their feet in fear
of cheap clones, or whether they will realize that high volume can
compensate for low income per unit. Or else they might merge with the
AS/400 division :-)
Is that a fact? From what I read, once a process is allocated
threads on processors, minimal scheduling is needed (the
processors switch threads automatically so this 'mundane'
scheduling is removed).
You could also say 'I don't care how wonderful the Alpha processor
is, SMP breaks down after a certain number of processors' Why
does each node only have *4* processors? Why not 64 .. if you
really want speed!
] 8 OSs can be doing more things at once. 8 schedulers ;-).
8 schedulers, working for only 4 processors each and being
bogged down with 'mundane' scheduling for thread/process switching.
] > ] Sidenote:
] > ] For years VMS was mocked for its method of file
] > ] access through RMS (Record Management Services).
] > ] True it can be somewhat slower to go through another
] > ] layer to access files. UNIX file access has been
] > ] unfettered. Raw. Stream files. None of this goofy
] > ] file types of VMS. None of the goofy overhead of
] > ] RMS. But wait, RMS allows you to lock files or
] > ] records!
] > ]
] > ] Who cares? We are UNIX. We will handle the locking.
] > ] Thank you very much.
] > ]
] > ] Okay. We are VMS. We have something on the horizon
] > ] that says you better stick locking in the core of
] > ] your OS as a layer. You can't catch us otherwise as
] > ] our OS will now communciate with other copies of our
] > ] OS at a MUCH improved speed. Our locking overhead
] > ] is going to go WAY down. So sorry Mr. UNIX.
] >
] > I do not see how VMS-style locking cannot be done with Unix.
] >
] Oh? Give me an example. How about if I give you an
] example that isn't so trivial in UNIX? In VMS, log onto
] node A, log on to node B, edit a file on node A, try to
] copy that same file on node B. Locking is built into the
That sounds like a file-system issue .. or perhaps it should be.
] O/S. Not a problem in UNIX, you can create something to
] mimic that. Not in general. Granted, you can do a
] Distributed Lock Manager in UNIX. One not nearly as
] robust to scale to a Galaxies level as UNIX doesn't have
] the equivalent RMS layer allowing for really fine-grained
] locking at a record level. Never mind locking a file,
] that is trivial ;-).
Why should it have an equivalent to the RMS layer to lock records?
Files in most Unix filesystems are not record-based. If you
wanted to put a record-based filesystem in Unix you would most
likely use ioctl(), which is a completely different demon. ;)
] > Some programs can make assumptions about locking that could
] > greatly increase their speed over VMS if specifically
] > optimized for Unix.
] >
] Now you are talking. Yes, use the built in and program
] for it. In UNIX, you have all the tools you need to
] create indexed files if you wish. One problem, you are
] doing it as a work-around. It is built into VMS and yes
] it causes overhead, hence sometimes slower. But RMS is
] more than just overhead. With RMS I can preallocate a
] 10000 block file at an application level and tell it to
] be contiguous on disk and tell it to grow by 2500 blocks
] each time it needs to grow and do writes in multiple
] blocks with multiple buffers and many variations on that.
Again, is this something different from a filesystem issue?
I know QNX will let you allocate a contiguous disk file with room
for growing (when you need to change the size it changes it into
a normal file that can fragment, but that could be different).
] And when Spiralog goes production, turn on write caching
] and wait 30 seconds before the writes are flushed to disk
] in one long stream (like writing to an infinite log, all
] writes are written out with no seeks to update file
] headers). You can't outwrite VMS. You can out lock and
] unlock VMS, for now.
I don't know what Spiralog is, but this also seems like a
criticism of UFS, not Unix in general.
] Architecturally, it looked like UNIX was a win for the longest
] time. Roaring from the back of the pack comes VMS though.
Or it could be that Digital just has a lot of really bright guys
working on VMS and are neglecting Digital Unix.
] > ] Yeah, legacy. Rather embarassing that an old legacy VMS
] > ] will smoke an old legacy UNIX.
] >
] > Possibly.. no need to place bets now though.
]
] Why not? Digital doesn't blow engineering smoke. They
] are currently sampling a 600 MHz 21164 at 18 SpecInt95
] and 27 SpecFp95. They say the 21264 will do 40 SpecInt95
] and 60 SpecFp95 at 600 MHz, you can bet on it.
I'm confused; those only run VMS?
] If Digital says one million tpcs for Galaxies + Wildfire,
] get an NDA and find out all the details. Very large VMS
] shops are planning appropriately, budgets and all.
] Again, Oracle and Sybase are psyched. You will know I am
] not pulling your leg when you read Informix has a VMS
] port underway. Now, I don't know that for *sure* but a
] BIG birdy strongly hinted that.
]
] Betting Digital delivers one million tpcs would be a
] fairly safe bet.
You say "Roaring from the back of the pack comes VMS though."
Given that everything Digital says is true, why is it not
possible for Unix to 'pull a VMS'?
--
thur Mail Address: LordA...@vt.edu or jmax...@vt.edu
n r
a JAMax ki-bosh (KIE-bahsh): (1836, origin unknown)
h o w something that serves as a check or stop
tan lle ("put the /kibosh/ on that")
>In comp.os.ms-windows.nt.advocacy John Wiltshire <j...@qits.net.au.nospam> wrote:
>] <jmax...@cslab.vt.edu> wrote in comp.os.ms-windows.nt.advocacy:
>] >Rob Young <you...@eisner.decus.org> wrote:
>] >] pet...@netcom.com (Loren Petrich) writes:
>] >] >
>] >] match. Digital is calling it "Galaxies Software
>] >] Architecture". It will allow their next generation 21264
>] >] based Wildfire (32 CPU) server to contain 8 VMS nodes of
>] >] 4 CPUs each. 8 copies of VMS running inside that machine
>] >] clustered over a high-speed crossbar.
>] >
>] >How will this compare to the Tera MTA, running a microkernel
>] >based Unix? http://204.118.137.100/SystemCharacteristics.html
>] >
>] >This computer will also have 32 CPUs .. or 256 .. running
>] >*one* copy of Unix with *one* memory as opposed to 8 copies of
>] >VMS (with 8 memories I assume). With compiler support for
>] >fine-grained parallelism (4 + 2-per-thread instruction
>] >overhead for creating new threads for this purpose).
>] >
>] >From their PR it sounds like a better approach than
>] >clustering. I don't know if the machines are at all comparable
>] >though.
>]
>] So what happens when that memory dies? On a clustered machine
>] the rest keep going.
>
>It uses a custom controller to route memory requests, so I assume
>it could be made to not use that memory while it was replaced. I
>don't know if it does this or not.
>
>That is a good point though.
>
>] What if you want to upgrade the OS? On a clustered machine you
>] upgrade them one at a time and the system stays up.
>
>And another.
>
>] There are VMS clusters out there that haven't been rebooted in
>] 10 years, including OS and software upgrades. Try that without
>] clustering.
>
>These are good points, but I fail to see how they show that VMS
>is faster.
A clustered solution probably won't be faster than a single machine.
It is the inherent reliability and scalability that makes it
attractive.
John Wiltshire
>>Correction --- any _32 bit_ UNIX machine running at that time will experience
>>some sort of problem. Given that 2038 is more than 40 years away, there is
>>hope that very few machines will still be 32 bit at the time.
>Who's to guarantee that the size of system time word has any direct
>relation to the machine word size? Unix had a 32-bit time word on
>16-bit machines.
Wouldn't be much good otherwise, would it? I mean, if it turns over every
10 hours, things are not gonna be very useful ;-)
>There are lots of commercial Unices out there with
>32-bit time words on 64-bit machines.
But those are 32 bit UNIXes. There is no 64 bit UNIX I know of that has
time_t defined as a 32 bit type (and it would be very stupid to do so).
So I should rewrite my comment to "any _32 bit UNIX_ machine ...". Which,
once again, should all be gone in 40 years.
>An incredible amount of software, many file systems, and all sorts of
>databases have the 32-bit time word etched into them - going to a
>64-bit time word is not just a matter of changing the header files
>and recompiling!!!
It should be. Anyone who uses the time() function should know that it
returns a time_t. If time_t gets changed, a recompile should be all that
is needed. Filesystems could be more of a problem, but of course 64 bit
UNIXes should also provide 64 bit for date stamps in their filesystems.
Yes, the Minix filesystem will probably be very confused in 2038, but hey ---
considering that we will probably have exabyte hard disks by then, I
sincerely hope no one is using the Minix filesystem, anyway. Nor FAT, for
that matter ;-)
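A small illustration of the difference between code that only needs that recompile and
the "etched in" cases mentioned above (the struct is hypothetical; any real on-disk or
database format with a 32-bit timestamp field has the same problem):

    #include <stdio.h>
    #include <time.h>

    /* Portable: whatever width time_t grows to, this recompiles and keeps working. */
    static void stamp_portable(void)
    {
        time_t now = time(NULL);
        printf("now: %s", ctime(&now));
    }

    /* The "etched in" case: a record layout that hard-wires a 32-bit timestamp.
     * Recompiling with a 64-bit time_t does not help -- the stored data and
     * every program reading it must change too. */
    struct on_disk_record {
        long user_id;       /* hypothetical field             */
        int  created;       /* 32 bits wide, recompile or not */
    };

    int main(void)
    {
        struct on_disk_record r;
        stamp_portable();
        printf("timestamp field: %d bytes, regardless of time_t\n",
               (int)sizeof r.created);
        return 0;
    }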
Cool. Could you point me to the documentation for these NtXXXX calls?
Sounds like an iBCS or even a (GASP) Linux subsystem would be
a very useful project for someone to undertake.
>BTW, you seem to beleive lots of things - sounds like one of the
>sillier net.rumors I've encountered.
>
>
>David LeBlanc |Why would you want to have your desktop user,
>dleb...@mindspring.com |your mere mortals, messing around with a 32-bit
> |minicomputer-class computing environment?
> |Scott McNealy
-- cary
cob...@access.digex.net
>Who's to guarantee that the size of system time word has any direct
>relation to the machine word size?
It doesn't. There's no reason we can't move 32-bit machines to a
64-bit time_t.
>An incredible amount of software,
>many file systems, and all sorts of databases have the 32-bit time
>word etched into them - going to a 64-bit time word is not just a matter
>of changing the header files and recompiling!!!
No, there's a bit of pain involved, but it doesn't look to me like
it would be all that much worse than the change from a 32-bit to
a 64-bit off_t. The only problem that wasn't as big when changing
off_t is dealing with data in permanent storage. However, the
filesystem stuff is pretty easy; FFS already deals with two inode
types (4.2 and 4.4BSD); it can be made to deal with a third.
>There's all sorts of machines today running a 1971 PDP-9 OS, and
>stuck with all of its archaic structures and limitations. That OS is
>Unix.
This from the man still running RT-11 at home! :-)
cjs
--
Curt Sampson c...@portal.ca Info at http://www.portal.ca/
Internet Portal Services, Inc. Through infinite myst, software reverberates
Vancouver, BC (604) 257-9400 In code possess'd of invisible folly.
>In article <5i772p$h2h$1...@nntp.ucs.ubc.ca>,
>Tim Shoppa <sho...@alph02.triumf.ca> wrote:
>
>>Who's to guarantee that the size of system time word has any direct
>>relation to the machine word size?
>
>It doesn't. There's no reason we can't move 32-bit machines to a
>64-bit time_t.
You only have 41 years left to change the code. Better get crackin'.
<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <jsh...@ix.netcom.com>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
>You have to get a source license 8-(
:-( That sucks.
>Some of it is documented in the DDK - they haven't been very nice
>about documenting what you need to build a new subsystem, but the
>folks at Softway have done it. I have heard that they are working on
>buttoning those areas down enough to publish the API.
That's a good development. Have you checked out the Softway Unix subsystem
yet? I haven't had a chance to play with it, but it looks like good stuff.
>The problem is that as soon as they publish it, they are locked in -
>if it is undocumented, they can change it all over the place and not
>worry about it.
Yep, though I'd think Nt*() would be fairly stable now after four major
releases.
>IMHO, they'd do well to encourage people to write
>more subsystems - I think I might shell out $200 or so for a Mac
>subsystem, and I have this weird idea that it would be fun to make a
>Linux subsystem (yes, I know this indicates I just might be crazy...).
Oh yes. :) I can just see it now... Linux under NT. :)
>You can easily see the calls by doing a dumpbin /exports on ntdll.dll.
>- there are 225 in that one alone.
Noted, thanks.
Darwin Ouyang
>Yes, the Minix filesystem will probably be very confused in 2038, but hey ---
>considering that we will probably have exabytes harddisks by then, I
>sincerely hope noone is using the Minix filesystem, anyway. Nor FAT, for
>that ;-)
Sure it will. How does FAT256 sound?
:-O <- me vomiting
I don't think that time is going to be too much of a
problem on UNIX systems. There is enough encapsulation to
allow the underlying implementation to be changed without
too much breakage.
Neil
Yes. Looking at your pointer to:
http://204.118.137.100/SystemCharacteristics.html
"A two-tier scheduler is incorporated into the Tera microkernel; it provides
better resource allocation to large tasks (those currently running on more than
a single processor) via a bin-packing scheme, and schedules the smaller tasks
using a traditional Unix approach."
So one portion of the scheduler does your thread thing. The
other portion deals with the smaller issues of life like dispatching
I/O, etc. Do you have a different view?
>You could also say 'I don't care how woderful the Alpha processor
>is, SMP breaks down after a certain number of processors' Why
>does each node only have *4* processors? Why not 64 .. if you
>really want speed!
>
Glad you asked. It gives good insight into just how powerful
the 21264 CPU is. Looking at a post in 10/96 to Usenet
(search dejanews "old" for 21264 and revised) we see:
Memory System:
System Interface 2+ GByte/sec. sustained, 64 bit separate port,
80 cycles load-to-use (with Tsunami desktop chip set
and synchronous DRAM).
16 outstanding memory references, 64 bytes each:
- 8 reads
- 8 writes
With Tsunami system chip set and SDRAMs, effective McCalpin
STREAM bandwidth is 1.6 Gbyte/sec. (uniprocessor).
-----
So for a desktop machine, STREAM is 1.6 Gigabyte/sec. Assuming
the chip-set in Wildfire will do at least that, you can expect
6.4 Gig/sec bandwidth to memory for 4 CPUs. If you placed 30 or so
there, it would no doubt require a tremendous memory system to feed
them. Translation: very expensive. I don't have the details; trying
hard to figure this out will have to wait
until all details are public to get the true whys and wherefores.
Remember the 21264 at 600 MHz Digital claims will be a
40 SpecInt95 60 SpecFp95 part.
>] 8 OSs can be doing more things at once. 8 schedulers ;-).
>
>8 schedulers, working for only 4 processors each and being
>bogged down with 'mundane' scheduling for thread/process switching.
>
But 8 schedulers can all be dispatching I/O at the same
time. I don't doubt Tera can dispatch more calculations.
Seems Tera can't dispatch disk I/O at the same rate. It really
remains to be seen if Digital's Fortran (which is highly
parallel) can crank out the same ballpark numbers as Tera
in a Wildfire. One thing I am willing to bet on is that
the Wildfire machine will be able to do much higher aggregate
I/O, as 8 separate OSes can dispatch quite a few more I/Os to
disk. Regarding your counter-point, I am not quite sure
what you are getting at. Could you please elaborate via example?
>] O/S. Not a problem in UNIX, you can create something to
>] mimic that. Not in general. Granted, you can do a
>] Distributed Lock Manager in UNIX. One not nearly as
>] robust to scale to a Galaxies level as UNIX doesn't have
>] the equivalent RMS layer allowing for really fine-grained
>] locking at a record level. Never mind locking a file,
>] that is trivial ;-).
>
>Why should it have the equivalent to RMS layer to lock records?
>Files in most Unix filesystems are not record-based. If you
>wanted to put a record-based filesystem in Unix you would most
>likely use ioctl(), which is a completely different demon. ;)
>
Record-locking at the OS level doesn't just help with
applications. It also helps to have low-level capabilities
like that for OS issues. Additionally, if something is
written to take advantage of RMS locking, logging onto
NODE A and then NODE B running the same applications, files
in use... locking is seen across the cluster.
Today UNIX uses flock, among other means, to lock down
a file. Problem is, that lock isn't seen across the cluster.
Seems UNIX is taking the approach it always has and is going
to be adding locking as an API (see www.sun.com for
Full Moon); this will work, I don't dispute that. Sun is
promising a cluster-wide filesystem by 1998 and a single
system image in 1999. Question: will current applications,
which make the bold assumption of a single node, run in a cluster
correctly? In other words, will recompiling a flock call ensure it
runs correctly in a cluster?
>
>] > Some programs can make assumptions about locking that could
>] > greatly increase their speed over VMS if specifically
>] > optimized for Unix.
>] >=20
>] Now you are talking. Yes, use the built in and program
>] for it. In UNIX, you have all the tools you need to
>] create indexed files if you wish. One problem, you are
>] doing it as a work-around. It is built into VMS and yes
>] it causes overhead, hence sometimes slower. But RMS is
>] more than just overhead. With RMS I can preallocate a
>] 10000 block file at an application level and tell it to
>] be contiguous on disk and tell it to grow by 2500 blocks
>] each time it needs to grow and do writes in multiple
>] blocks with multiple buffers and many variations on that.=20
>
>Again, is this something different from a filesystem issue?
>I know QNX will let you allocate a contiguous disk file with room
>for growing (when you need to change the size it changes it into
>a normal file that can fragment, but that could be different).
>
Yes it is different because RMS allows you to tune the
file allocation and deallocation at an API level.
How well does the QNX filesystem do in AIX? Solaris? HP/UX?
Do all 3 allow QNX to be the default filesystem? What is the
QNX API like?
>
>] And when Sprialog goes production, turn on write caching
>] and wait 30 seconds before the writes are flushed to disk
>] in one long stream (like writing to an infinite log, all
>] writes are written out with no seeks to update file
>] headers). You can't outwrite VMS. You can out lock and
>] unlock VMS, for now.
>
>I don't know what Sprialog is, but this also seems like a
>criticism of UFS, not Unix in general.
>
Not really. All Unices have a log-based filesystem now,
HP/UX being the most recent. Spiralog perhaps one-ups
them. The nice thing is the write-back option. Writes are
ordered in memory for up to 30 seconds until flushed to
disk.
For those of a technical bent, Spiralog was featured in-depth
in an October 1996 Digital Technical Journal.
Overview of the Spiralog File System
http://www.digital.com:80/info/DTJM01 (HTML)
http://www.digital.com:80/info/DTJM01/DTJM01P8.PS
http://www.digital.com:80/info/DTJM01/DTJM01PF.PDF
http://www.digital.com:80/info/DTJM01/DTJM01SC.TXT
Design of the Server for the Spiralog File System
http://www.digital.com:80/info/DTJM02 (HTML)
http://www.digital.com:80/info/DTJM02/DTJM02P8.PS
http://www.digital.com:80/info/DTJM02/DTJM02PF.PDF
http://www.digital.com:80/info/DTJM02/DTJM02SC.TXT
Designing a Fast, On-line Backup System for a Log-structured File Syst.
http://www.digital.com:80/info/DTJM03 (HTML)
http://www.digital.com:80/info/DTJM03/DTJM03P8.PS
http://www.digital.com:80/info/DTJM03/DTJM03PF.PDF
http://www.digital.com:80/info/DTJM03/DTJM03SC.TXT
Integrating the Spiralog File System into the OpenVMS OS
http://www.digital.com:80/info/DTJM04 (HTML)
http://www.digital.com:80/info/DTJM04/DTJM04P8.PS
http://www.digital.com:80/info/DTJM04/DTJM04PF.PDF
http://www.digital.com:80/info/DTJM04/DTJM04SC.TXT
Spiralog is not totally a VMS thing I was told. With that said, I
*suppose* it could some day find itself in Digital UNIX. Would be
slick to have the same filesystem on both OSs.
>
>] Architecturally, it looked like UNIX was a win for the longest
>] time. Roaring from the back of the pack comes VMS though.
>
>Or it could be that Digital just has a lot of really bright guys
>working on VMS and are neglecting Digital Unix.
>
No. In fact, Digital UNIX still holds the record for
tpmC (for now); see http://www.tpc.org/. Digital actually
has the best UNIX there is.
>] > ] Yeah, legacy. Rather embarassing that an old legacy VMS
>] > ] will smoke an old legacy UNIX.
>] > Possibly.. no need to place bets now though.
>]
>] Why not? Digital doesn't blow engineering smoke. They
>] are currently sampling a 600 MHz 21164 at 18 SpecInt95
>] and 27 SpecFp95. They say the 21264 will do 40 SpecInt95
>] and 60 SpecFp95 at 600 MHz, you can bet on it.
>
>I'm confused; those only run VMS?
>
No.
>
>] If Digital says one million tpcs for Galaxies + Wildfire,
>] get an NDA and find out all the details. Very large VMS
>] shops are planning appropriately, budgest and all.=20
>] Again, Oracle and Sybase are psyched. You will know I am
>] not pulling your leg when you read Informix has a VMS
>] port underway. Now, I don't know that for *sure* but a
>] BIG birdy strongly hinted that.
>]
>] Betting Digital delivers one million tpcs would be a
>] fairly safe bet.
>
>You say "Roaring from the back of the pack comes VMS though."
>Given that everything Digital says is true, why is it not
>possibly for Unix to 'pull a VMS'?
>
Well it will come close. Looking at Suns stated goals
for Full Moon, they expect to see a single system image
in 1999. That is very aggressive and I wouldn't put it
past them. Might be a while before they do a Galaxies
though and pull the cluster inside a single machine.
So don't be surprised if Galaxies + Wildfire rule the
high-end for about 3 years (1998 through 2000).
However, even though UNIX today can use reflective memory
techniques to ensure write atomicity, it remains to be seen
if they can continue to get around the problem of not
having an RMS-like layer. It is a totally different world
regarding write ordering and I/O in general vis-a-vis VMS and
UNIX. Having a VMS Distributed Lock Manager that has undergone
several spins in 14 years is a big help. Having cluster
load balancing, a cluster-wide filesystem and many other
features is a big head start.
Personally? Lacking a "Records Management Layer" and all that
entails is going to make it *real* tough to do. And if
a UNIX equivalent of RMS is created, existing apps are not aware of
it, so how do you run those in a cluster? Many itchy little
issues, I am sure, when a cluster-based filesystem is created
for UNIX.
And as the Tera folks criticized in November 1996:
"Regardless of the name, they (1) all suffer the same
basic problem: a truly horrible programming model.
First, they require that applications be rewritten before they
can be run in parallel. Then, to achieve mediocre levels
of performance, they require programs to be carefully
tuned to manage communications and data placement. . .
Finally, these systems all suffer from inadequate
communication bandwidth."
Sweeping generalizations. People have been developing *specifically*
with VMS clusters in mind for years. Not rewriting, targeting.
Addressing communications and data placement, this isn't true for
VMS clusters either. They load balance quite nicely.
The last point was a very legitimate criticism and still holds true
today. However, Galaxies plus the very high-speed cross-bar
in Wildfire will break the back of the communication
bandwidth issue (i.e. Lock mastering overhead). That will
send I/O bandwidth much higher than anything on the horizon, which in
turn translates into much higher RDBMS numbers.
Rob
(1) "they" refers to: scalable parallel, massively parallel, or cluster
computers.
Sounds like a very slow, high overhead system to me.
What then are Win32K or Win32 kernel calls on NT? I saw this with
regard to an issue about an NT security violation and again in a diagram
of NT's architecture.
> >BTW, you seem to beleive lots of things - sounds like one of the
> >sillier net.rumors I've encountered.
> >
> >
that is what they call WINbrain, as there is no other description for
something like that ...
> >David LeBlanc |Why would you want to have your desktop user,
> >dleb...@mindspring.com |your mere mortals, messing around with a 32-bit
> > |minicomputer-class computing environment?
> > |Scott McNealy
>
he was right, especially when it comes from M$ ... :-)
Regards,
Ernst
=======================================================================
Ernst Winter " Nec scire fas
Zum Steffelacker 11 est onmia"
D-81929 Muenchen Horaz
Germany
Ph#: 49-89-9920-9000 Fax:
E-mail: ewi...@lobo.muc.de
=======================================================================
Operating on a "Gates-Free" PC !!! :-)
True about reliability/scalability, but the cluster will be faster
if the DB running on it is aware that it is part of a cluster and is
able to take advantage of that fact (see Oracle).
|> John Wiltshire
|>
--
Archeus Free FRPG - http://www.geocities.com/Area51/3002/
Colin Smith (co...@mellifluous.europe.dg.com)
My opinions are completely my own, bought and paid for.
>On 6 Apr 1997 16:02:22 +1000, Bernd Meyer <bme...@bruce.cs.monash.edu.au> wrote:
>>Yes, the Minix filesystem will probably be very confused in 2038, but hey ---
>>considering that we will probably have exabytes harddisks by then, I
>>sincerely hope noone is using the Minix filesystem, anyway. Nor FAT, for
>>that ;-)
>Sure it will. How does FAT256 sound?
Eeeeewww!
> :-O <- me vomiting
I concur.
>I don't think that time is going to be too much of a
>problem on UNIX systems. There is enough encapsulation to
>allow the underlying implementation to be changed without
>too much breakage.
NT doesn't need any changes - NT's native system and file times were
64-bit to begin with. OTOH, I'll be surprised if _anything_ we're
running today will still be useful in 2038, except as a curiosity in a
museum. We'll probably look upon today's P6-200, 128MB RAM, 4GB HD
machines as being as woefully underpowered as an 8088 640k, 32MB HD
box is now.
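For the curious, a minimal Win32 sketch (mine, using only documented
calls) showing the native representation: FILETIME is two 32-bit halves
counting 100-nanosecond ticks since 1601, so there's no 2038-style
rollover waiting in it.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    FILETIME ft;

    GetSystemTimeAsFileTime(&ft);   /* documented Win32 call */

    /* 64 bits total: 100ns ticks since January 1, 1601. */
    printf("system time: high 0x%08lX, low 0x%08lX\n",
           (unsigned long)ft.dwHighDateTime,
           (unsigned long)ft.dwLowDateTime);
    return 0;
}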
If current trends continue (big if), we'll see machines that have on
the order of 1000 times more CPU power, and have terabytes of disk in
desktops along with gigs of RAM.
Ought to be able to do really good voice recognition with such things
- maybe even get the computer to feed the dog 8-)
>In <5i8kkk$n...@access1.digex.net>, cob...@access1.digex.net (Cary B. O'Brien) writes:
>>In article <334873b3....@news.mindspring.com>,
>>David LeBlanc <dleb...@mindspring.com> wrote:
>>>Many people don't know that Win32 isn't the native NT system calls -
>>>the native calls are all NtXXXX() - the Win32 subsystem is just that -
>>>a DLL that maps calls to the real system calls. So it would be quite
>>>possible to graft HP-UX (or anything else) onto the top of NT.
>Sounds like a very slow, high overhead system to me.
You did know we were talking about NT? <g>
It also sounds like a very flexible system, and it could contribute to
stability if the calls are validated before being passed in. However,
you are right - this is one of the reasons NT is so large.
Just wait until MS comes out with the AS/400 VM... <g>
>What then is Win32K or Win32 kernel calls on NT? I saw this with
>regards to an issue about NT security violation and another in a diagram
>about NT's architecture.
Theoretically, one could use a buffer overflow in certain video
subsystem calls to break security because anything in kernel mode is
considered trusted code. Practically, it would be _very_ difficult,
and you'd be most likely to just crash the machine. I think that's
what you're referring to.
BTW, even many of the kernel mode calls are just wrappers - just about
any of the HAL calls are wrappers (since it encapsulates hardware
differences). A little overhead, but it also means I can write a
driver on Intel, recompile and run it on an Alpha. I can also write a
single driver that runs properly on all sorts of odd Intel
architectures.
Hello Eric!
Yes, you're right.
It sprang immediately into my mind, when I tried out the first 128k Mac
in the shop. Our UNIX was 4.1bsd then!
Kind regards
--
Norbert Gruen (umlaut u, ü)
Do Bill Gate$ a favour, support alternatives to Micro$oft!!!
"reply-to"-> Text, "from"->MIME
[Discussion of the "problem" of 32 bit UNIX time turning negative in 2038]
>OTOH, I'll be surprised if _anything_ we're
>running today will still be useful in 2038, except as a curiosity in a
>museum. We'll probably look upon today's P6-200, 128MB RAM, 4GB HD
>machines as being as woefully underpowered as an 8088 640k, 32MB HD
>box is now.
You are not thinking far enough. The IBM PC was introduced only 16 years
ago, rather than 41. And PCs with 640k (an unbelievable amount of memory)
and (huge!) 32M harddisks were probably not commonly available before
late '82 or early '83, i.e. less than 15 years ago.
>If current trends continue (big if), we'll see machines that have on
>the order of 1000 times more CPU power, and have terabytes of disk in
>desktops along with gigs of RAM.
All of these assume a factor of 1000. That's way too little --- speed,
memory and harddisks double every 18 months to two years, and have been
doing so for a long long time. Let's say it doubles every two years ---
then we still have 20 doublings before 2038, meaning a factor of about
a million (and looking back to 1956, this sounds like a reasonable
number, give or take an order of magnitude). I.e. terabytes of RAM,
and exabytes (two steps up from tera, past peta :-) of "disk" (probably
without moving parts, though), and the equivalent of close to 1 quadrillion
instructions per second on fast central units. Fortunately, I will be
retired by then, no need to come up with something to do on those monsters...
>Sounds like a very slow, high overhead system to me.
True it could be very slow. There are tricks and optimizations that MS has
done to make NT run at an acceptable speed. BTW, this type of architecture
is much like how Mach, OS/2 PPC, and other microkernel-like OSs work.
>What then is Win32K or Win32 kernel calls on NT? I saw this with
>regards to an issue about NT security violation and another in a diagram
>about NT's architecture.
Part of the Win32 subsystem has been moved to kernel level. This is not
the same as "into the kernel". The Win32 subsystem is still conceptually
separate from the NT kernel, except it is now operating at the same CPU
privilege level to reduce overhead. As noted, this does expose the NT
kernel to more video driver and Win32 subsystem bugs than before.
It's a tradeoff. <shrug>
Darwin Ouyang
> In article <5i7517$4...@lehi.kuentos.guam.net>,
> <cro...@kuentos.guam.net> wrote:
>
> >>the AmigaOS -- none (legacy OS)
>
> >AmigaOS is said to have Unix like qualities and Unix like utilities on
> >shareware, not to mention Unix variants that can run on the Amiga
> >hardware.
An LGPL'd shared library is available for native AmigaOS which emulates
some 500+ **ix system calls. Almost the entire suite of GNU tools and
X11R6 have been ported successfully. See
<URL:http://www.ninemoons.com/ADE/ADE.html>.
> The AmigaOS has preemptive multitasking, but only a single memory
> space (I don't know if virtual memory was ever implemented for it). So
> it's only halfway there :-)
There are a few commercial/shareware utilities to implement virtual
memory. A generic solution, say, a mmu.library supported by the OS and
running on 68020/68851, 68030, 68040 and 68060, would most certainly
break all old applications. This has been discussed on Amiga ml's and
ng's over and over.
Open Systems Resources blew the lid off Microsoft's failure to
document real NT system calls in the Summer '96 issue of "NT Insider".[*]
It is odd to see it publicly admitted, though, because of the rather
bizarre implications to non-Microsoft developers.
Win32 calls are the only publicly documented interface to the NT kernel,
yet they go through a potentially expensive[**] extra layer of indirection
to get there. Also, there are some things you can *only* do through an
NT system call: cancel an outstanding async I/O request, for example.
What this means is that if a 3rd party develops something for NT, they
have to make a really hard decision:
1. Use the only documented interface (Win32) and have a slower product.
If the product is popular or makes money, Microsoft can use the real
system calls, making a faster version of the product. One could reasonably
 suspect that this is why IIS runs faster than Netscape's server under NT.
2. Use an undocumented interface (the real NT system calls) and have a faster
product. This means doing a lot of work to reverse-engineer the NT
system call interface. If the product is popular or makes money, Microsoft
can change the real NT system calls out from under the product at the next
release of NT, causing their own product to inherit your market. This is
like the OS/2 -> Win3.0 switcheroo they pulled on Lotus, Borland and IBM.
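To make option 2 concrete, here is a minimal sketch (mine, not from the
newsletter) of its least risky form: the load and lookup use documented
Win32 calls, only the Nt* name is the undocumented part, and the sketch
stops at checking that the export exists rather than guessing at a
prototype.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HMODULE ntdll;
    FARPROC fn;

    ntdll = LoadLibraryA("ntdll.dll");   /* documented Win32 call */
    if (ntdll == NULL)
        return 1;

    /* Any name from a dumpbin /exports listing will do here. */
    fn = GetProcAddress(ntdll, "NtQuerySystemInformation");
    printf("NtQuerySystemInformation is %s\n",
           fn != NULL ? "exported by ntdll.dll" : "not exported");

    FreeLibrary(ntdll);
    return 0;
}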
==== ==== ==== ==== ==== ==== ==== ====
* - Theoretically, you can get to the ironically named "Open Systems Resources"
at http://www.osr.com. The newsletter suggests that you can email them
at "ntin...@osr.com".
==== ==== ==== ==== ==== ==== ==== ====
** - In NT 3.51 and earlier, Win32 calls caused at least 4 extra context
switches, leading to abysmal performance. Apparently, M$ moved Win32 "into
the kernel" for NT 4.0, but failed to do it correctly.
See:
http://www.das.harvard.edu/users/faculty/Brad_Chen/ftp_docs.html
"The Measured Performance of Personal Computer Operating Systems"
ACM Transactions on Computer Systems, February 1996.
For information on performance.
See:
http://www.ntinternals.com/crashme.htm
For information on how M$ botched the move of Win32 "into the kernel".
> OS/390 [IBM System 360/370/390 core OS] -- none? (legacy OS)
OS/390 is planned to have some Unix compatibility.
At least the IBM WWW site brags about it. And I wouldn't call OS/390 legacy.
> DOS -- none (legacy OS)
DOS is CP/M kernel with some Unix compatibility slapped on:
Unix-like file API, pipes (broken by design, but still pipes :)
> Windoze NT -- has POSIX layer (?)
> OS/2 -- none (?)
EMX toolkit.
Seriously, I agree with your point: today you couldn't go far without
some kind of source-level Unix compatibility.
--
Cheers,
Fat Brother. http://www.cnit.nsu.ru/~fat/
**************************************************************************
* In 1984 mainstream users were choosing VMS over UNIX. Ten years later *
* they are choosing Windows over UNIX. What part of that message aren't *
* you getting? *
* Tom Payne *
Probably could do that now, with an X-10 contraption hooked to a
sufficiently medieval-looking contraption that looks like the "mousetrap"
game on steroids. (G).
Chris J. Alumbaugh
********************************
As it is SPOKEN.. so it appears
OS/2 Warp 4.0 and Voice Dictation
********************************
* my email address in the header has been hacked*
* please shift all letters 1 letter to the right*
*to decipher correct address to reply by email*
Is running in a single memory space any problem for it? Or does
it implement some sort of hacked virtual memory?
>> The AmigaOS has preemptive multitasking, but only a single memory
>> space (I don't know if virtual memory was ever implemented for it). So
>> it's only halfway there :-)
> There are a few commercial/shareware utilities to implement virtual
> memory. A generic solution, say, a mmu.library supported by the OS and
> running on 68020/68851, 68030, 68040 and 68060, would most certainly
> break all old applications. This has been discussed on Amiga ml's and
> ng's over and over.
How does the VM coexist with the AmigaOS? Is its presence to old
apps mostly that of some amount of RAM being used as a VM cache? Are
multiple memory spaces supported? And if so, is the core OS stuff global?
>dleb...@mindspring.com (David LeBlanc) wrote:
>>Many people don't know that Win32 isn't the native NT system calls -
>>the native calls are all NtXXXX() - the Win32 subsystem is just that -
>>a DLL that maps calls to the real system calls. So it would be quite
>>possible to graft HP-UX (or anything else) onto the top of NT.
>Open Systems Resources blew the lid off Microsoft's failure to
>document real NT system calls in the Summer '96 issue of "NT Insider".[*]
>It is odd to see it publicly admitted, though, because of the rather
>bizarre implications to non-Microsoft developers.
Huh? It took you that long to figure this out? All you have to do is
run dumpbin /exports on the system DLLs. OSR has some smart guys
(even the fat guy with a ponytail <g>), but all you have to do is do a
dumpbin on both imports and exports of a few DLLs to see who is
calling who. This is not rocket science. Now to _use_ those calls,
you need the source.
Consider:
7/24/93 5:11 264,676 ntdll.dll
3.1, build 511 release code -
[c:\temp]c:\MSDEV\bin\DUMPBIN.EXE /exports ntdll.dll | grep Nt
50 31 NtAcceptConnectPort (0000C190)
51 32 NtAccessCheck (0000C1A0)
52 33 NtAccessCheckAndAuditAlarm (0000C1B0)
53 34 NtAdjustGroupsToken (0000C1C0)
54 35 NtAdjustPrivilegesToken (0000C1D0)
There it was all this time....
Out of the 192 NtXXX calls present in the original 3.1 ntdll.dll, 182
are still there. Only 37 calls have been added (which match NtXXX).
Not too bad for nearly 4 years later.
>Win32 calls are the only publicly documented interface to the NT kernel,
>yet they go through a potentially expensive[**] extra layer of indirection
>to get there. Also, there are some things you can *only* do through an
>NT system call: cancel an outstanding asych I/O, request, for example.
Or you can use a driver to cancel an IRP.
>What this means is that if a 3rd party develops something for NT, they
>have to make a really hard decision:
>1. Use the only documented interface (Win32) and have a slower product.
> If the product is popular or makes money, Microsoft can use the real
> system calls, making a faster version of the product. One could reasonably
> suspect that this is why IIS runs faster than Netscape's erver under NT.
One can suspect anything (for some value of anything). Whether it is
reasonable or not is another matter. Or again, you can just go use
the tools you have at hand to verify these things - a dumpbin on
inetmgr.exe shows you're wrong. A dump on W3SVC.DLL also shows
nothing but ordinary Win32 calls. One could also suspect that since
Netscape writes buggy browsers that they also write buggy servers.
The IIS team isn't loony enough to try and hit the shifting calls that
aren't documented, either - they have plenty to do without worrying if
the kernel people are going to break them in the next service pack.
>** - In NT 3.51 and earlier, Win32 calls caused at least 4 extra context
>switches, leading to abysmal performance. Apparently, M$ move Win32 "into
>the kernel" for NT 4.0, but failed to do it correctly.
This was only the video calls.
There _are_ places where they do use undocumented calls - take nbtstat
and netstat, for example. You're just not naming anything where they
do this.
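If you want to check that sort of claim yourself, the same dumpbin trick
shown above works on imports too (the grep is whatever Unix-tools port you
have handy; substitute findstr, or any other binary name, as you like):

dumpbin /imports nbtstat.exe | grep Nt

Whatever that prints settles the question for that particular binary.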
The ADE (Amiga Development Environment) provides a GNU environment that
can compile many Unix apps with minimal changes (the only serious problem
is the lack of a fork() call). It also has an X server.
: >NeXTStep -- Unix (OpenStep fits on top of various Unixes and also NT)
: >the BeOS -- has POSIX layer, bash shell, Unix-style directory syntax
: >GNU Hurd -- ?
GNU Hurd is GNU's OS; basically a Unix-compatible OS built on Mach 4.
: >Novell NetWare -- none
: >
: > Several of these I have labeled "legacy OSes"; these are either
: >old Big Iron OSes (OS/390, OS/400, VMS) or are hangovers from small
: >systems (DOS, the MacOS, the AmigaOS), whose ultimate fate is likely to be
: >hosted by various Unixes or NT:
: >
: >DOS:
: >NTVDM in NT
: >
: >the MacOS:
: >the Blue Box in PowerPC NeXTStep (Rhapsody)
: >Fredlabs's VirtualMac in the BeOS
: >
: >the AmigaOS:
: >the A\Box is to run a version of Unix and host the AmigaOS in some way
: >--
:
: As for Apple, regardless of who takes over, via NeXT inspired coup, via
: Oracle takeover, via Sun merger, etc,. what is inevitable is the
: Unixification of the Macintosh.
:
: Note that MkLinux on the Macintosh came out unscathed in the technology
: reorganization. Apple stopped their own AIX updates, but inevitably
: when PPCP comes, Power Macs will be able to run AIX directly. IBM has a
: lot to fear on their own RS/6000 line, if AIX gets to run on a dual
: 250MHz 604e Power Computing Power Tower Pro on PPCP on a much lower
: price than an RS/6000 Model 43P. (Gad so many Power words and P letters
: here.) Running AIX on a Power Mac or Mac clone may put many Macs into
: businesses, considering the 10,000 vertical market apps AIX has.
--
Steve Hodge
s...@cs.waikato.ac.nz
Computer Science Dept, University of Waikato,
Hamilton, New Zealand
BTW I (a software guy) really shouldn't be arguing hardware with
a hardware guy, so feel free to jump in .. anybody? ;)
Also I would have chopped the text down some but didn't see how.
And Bob, you have convinced me that VMS is not 'legacy'. Now,
does it smoke Unix? ;)
] >] > ] > This computer will also have 32 CPUs .. or 256 .. running
] >] > ] > *one* copy of Unix with *one* memory as opposed to 8 copies
] >] > ] > of VMS (with 8 memories I assume). [...]
] >] > ]
] >] > ] One copy of UNIX? Hmmmm, how fast can that ONE copy
] >] > ] of UNIX keep track of resources? The 8 copies of VMS
] >] > ] will be working *somewhat* independent of each other.
] >] > ] Things like low-level interrupts in a single OS will
] >] > ] be somewhat constraining, no? One copy of an OS can
] >] > ] only context switch so fast, eh?
] >]
] >] > From what I understand it is a highly parallel kernel. If
] >] > one OS is doing 8 things at a time, how is that worse than
] >] > 8 OSs doing 1 thing at a time?
] >]
] >] Granted, a single system can have many applications
] >] running at once. In Tera's case threading shines like no
] >] other, lowering memory contention, etc.
] >]
] >] However, the UNIX variant running on a Tera machine must
] >] still contend with the mundane system tasks of swapping
] >] processes for instance, dispatching I/O etc. Tracking
] >] all that. I don't care how wonderfully parallel certain
] >] aspects are. Certain system level tasks don't
] >] parallelize well at all. How many schedulers does this
] >] Tera machine have? One.
]
] >Is that a fact? From what I read, once a process is allocated
] ^^^^^^^^^^^^^^
] >threads on processors, minimal scheduling is needed (the
] >processors switch threads automatically so this 'mundane'
] >scheduling is removed).
]
] Yes. Looking at your pointer to:
]
] http://204.118.137.100/SystemCharacteristics.html
]
] "A two-tier scheduler is incorporated into the Tera
] microkernel; it provides better resource allocation to large
] tasks (those currently running on more than a single processor)
] via a bin-packing scheme, and schedules the smaller tasks using
] a traditional Unix approach."
]
] So one portion of the scheduler does your thread thing.
] The other portion deals with the smaller issues of life
] like dispatching I/O, etc. Do you have a different view?
That sounds right.
But there's no convincing reason why the kernel could not use 8
schedulers/I/O dispatchers and divide the processes up between them
(sort of like the cluster, in effect).
] >You could also say 'I don't care how woderful the Alpha processor
] >is, SMP breaks down after a certain number of processors' Why
] >does each node only have *4* processors? Why not 64 .. if you
] >really want speed!
] >
]
] Glad you asked. Gives good insight to just how powerful
] the 21264 CPU is. Looking at a post in 10/96 to Usenet
] (search dejanews "old" for 21264 and revised) we see:
]
] Memory System:
]
] System Interface 2+ GByte/sec. sustained, 64 bit separate port,
] 80 cycles load-to-use (with Tsunami desktop chip set
] and synchronous DRAM).
]
] 16 outstanding memory references, 64 bytes each:
] - 8 reads
] - 8 writes
]
] With Tsunami system chip set and SDRAMs, effective McCalpin
] STREAM bandwidth is 1.6 Gbyte/sec. (uniprocessor).
I'm not a hardware guy, so I don't really know if that is good
(type of memory..). But read this from the Tera web page:
"The peak memory bandwidth is 2.67 gigabytes per second, and the
processor can sustain well over 95% of that rate. "
1.6 / (2.0+) <= 80% with one processor, not 4 or even 256.
"The networks and memory systems of the MTA are designed to
sustain the rate of one load or store per clock cycle for each
processor, regardless of the location of data. The compiler
generates parallel code that exploits this. "
So you have 256 processors working with the same memory at almost
full speed, as opposed to only 80% or worse. Do I have that
right? Like I say, I'm not a hardware specialist... (massive
disclaimer =)
] So for a desktop machine, STREAM is 1.6 Gigabyte/sec.
] Assuming the chip-set in Wildfire will do at least that,
] you can expect 6.4 Gig/sec bandwidth to memory for 4
] CPUs.
.. and for a hypothetical 4-cpu Tera (unknown memory) you get
(2.67*4)*0.95 = 10+ Gig/sec
Is that right?
] If you placed 30 or so there, it would no doubt
] require a tremendous memory system to feed them.
] Translation: very expensive. I don't have details,
] trying hard at times to figure this out will have to wait
] until all details are public to get the true whys and
] wherefors.
Isn't this exactly the flaw in that method? If you can't have 32
CPUs in each node and only have 4 to get a total of 256 processors
(Tera) you would need 64 nodes. How well does VMS do with that
many nodes?
.. and when we finally hit a limit on CPU speeds (the speed of
light is a limit), what will VMS do with 1024 clusters?
Or more.
] >] 8 OSs can be doing more things at once. 8 schedulers ;-).
] >
] >8 schedulers, working for only 4 processors each and being
] >bogged down with 'mundane' scheduling for thread/process
] >switching.
]
] But 8 schedulers can all be dispatching I/O at the same
] time. I don't doubt Tera can dispatch more calculations.
] Seems Tera can't dispatch disk I/O at the same rate. It
Up to 102 GB/s. What is Galaxies' figure again? You might be right, but
I don't see how this would inevitably lead to VMS smoking Unix.
Tera web site:
"The file system has the ability to distribute a single file
across as many disk arrays as are attached to the system."
Which can be processors/16 (or more) .. with their current setup
you can have almost 7.6 Terabytes of data. (Is that good? I
assume they could have bigger sized disk arrays if needed).
With a Galaxies cluster, each OS does its own buffering, right? Do
they use the message-passing between clusters to share these
buffers? And if so, doesn't that mean they have 1/nodes the
space to buffer? Or do they use the message-passing interface to
memory map buffers on other nodes? From my assumptions, it seems
overly complex or restrictive to me.
] really remains to be seen if Digital's Fortran (which is
] highly parallel) can crank out the same ballpark numbers
] as Tera in a Wildfire. One thing I am willing to bet on
] is that the Wildfire machine will be able to do much
] higher aggregate I/O as 8 seperate OS's can dispatch
] quite a few more I/O to disk.
Why? You can always split the processes into 8 groups and have 8
IO/scheduling units -- like Wildfire.
] Regarding your
] counter-point, I am not quite sure what you are getting
] at. Could you please elaborate via example?
Normally you give a certain amount of time (round-robin, it is
called?) to a process/thread and then switch in another. But if
a CPU can have 128 threads executing concurrently, then you
need more than 128 runnable threads before this becomes a problem.
If you only have 4, that limit is much easier to go
over. Plus, switching tasks takes time to reload registers and
possibly change the memory map, whereas on the Tera CPU (with fewer
than 128 threads per CPU) this is done automatically and with no
overhead besides startup.
] >] O/S. Not a problem in UNIX, you can create something to
] >] mimic that. Not in general. Granted, you can do a
] >] Distributed Lock Manager in UNIX. One not nearly as
] >] robust to scale to a Galaxies level as UNIX doesn't have
] >] the equivalent RMS layer allowing for really fine-grained
] >] locking at a record level. Never mind locking a file,
] >] that is trivial ;-).
] >
] >Why should it have the equivalent to RMS layer to lock
] >records? Files in most Unix filesystems are not record-based.
] >If you wanted to put a record-based filesystem in Unix you
] >would most likely use ioctl(), which is a completely different
] >demon. ;)
]
] Record-locking at the OS level doesn't just help with
] applications. It also helps to have low-level
] capabilities like that for OS issues. Additionally, if
Why are low-level semaphores and mutexes not more suitable for OS
level locking?
] something is written to take advantage of RMS locking,
] logging onto NODE A and then NODE B running the same
] applications, files in use... locking is seen across the
] cluster.
Right, and if you have a filesystem that supports record-level
locking in Unix... guess what, locking will be seen by all the
CPUs using it.
] Today UNIX uses flock among other means to lock down
] a file.. Problem is that lock isn't seen across the cluster.
So use fcntl.
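For reference, a minimal POSIX sketch (mine, not anything Sun has
published) of the fcntl byte-range locking being suggested here; whether
such a lock is actually visible on every node is entirely up to the
cluster filesystem underneath, which is the point in dispute.

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd;
    struct flock lk;

    fd = open("datafile", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    lk.l_type   = F_WRLCK;      /* exclusive lock...                 */
    lk.l_whence = SEEK_SET;
    lk.l_start  = 512;          /* ...on one 128-byte "record" only, */
    lk.l_len    = 128;          /* not the whole file as with flock  */

    if (fcntl(fd, F_SETLKW, &lk) < 0) {   /* block until granted */
        perror("fcntl");
        return 1;
    }

    /* ... read/modify/write the record here ... */

    lk.l_type = F_UNLCK;                  /* release the range */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}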
] Seems UNIX is taking the approach it always has and is going
] to be adding locking as an API (see www.sun.com for
] Full Moon) this will work, I don't dispute that. Sun is
] promising a cluster-wide filesystem by 1998 and a single
] image in 1999. Question: will current applications that
] make the bold assumption of running in a cluster correctly?
] In other words, will recompiling an flock call ensure it
] running in a cluster?
I don't see why it couldn't be made to do so.
] >
] >] > Some programs can make assumptions about locking that
] >] > could greatly increase their speed over VMS if
] >] > specifically optimized for Unix.
] >] >
] >] Now you are talking. Yes, use the built in and program
] >] for it. In UNIX, you have all the tools you need to
] >] create indexed files if you wish. One problem, you are
] >] doing it as a work-around. It is built into VMS and yes
] >] it causes overhead, hence sometimes slower. But RMS is
] >] more than just overhead. With RMS I can preallocate a
] >] 10000 block file at an application level and tell it to
] >] be contiguous on disk and tell it to grow by 2500 blocks
] >] each time it needs to grow and do writes in multiple
] >] blocks with multiple buffers and many variations on
] >] that.=20
] >
] >Again, is this something different from a filesystem issue? I
] >know QNX will let you allocate a contiguous disk file with
] >room for growing (when you need to change the size it changes
] >it into a normal file that can fragment, but that could be
] >different).
]
] Yes it is different because RMS allows you to tune the
] file allocation and deallocation at an API level. How
] well does the QNX filesystem do in AIX? Solaris? HP/UX?
] Do all 3 allow QNX to be the default filesystem? What is
] the QNX API like?
Hmm don't remember how QNX specifies the contiguous file, maybe
with ioctl.
But if you're talking API, that could be added. Is that the only
advantage of RMS over a special unix filesystem, that it has a
standard API?
] >] Architecturally, it looked like UNIX was a win for the
] >] longest time. Roaring from the back of the pack comes
] >] VMS though.
] >
] >Or it could be that Digital just has a lot of really bright
] >guys working on VMS and are neglecting Digital Unix.
]
] No. In fact, Digital UNIX still holds the record for
] tpmc (for now) see http://www.tpc.org/ Digital actually
] has the best UNIX there is.
Right, I just meant 'comparatively neglecting' ;)
[...]
] Personally? Lacking a "Records Management Layer" and all
] that entails is going to make it *real* tough to do. And
] if a UNIX equiv. RMS is created, existing apps are not
] aware of it and so how to run these in a cluster? Many
On one node, I suppose ;)
] itchy little issues I am sure when a cluster based
] filesystem is created for UNIX.
] And as the Tera folks criticized in November 1996:
]
] "Regardless of the name, they (1) all suffer the
] same basic problem: a truly horrible programming
] model. First, they require that applications be
] rewritten before they can be run in parallel.
] Then, to achieve mediocre levels of performance,
] they require programs to be carefully tuned to
] manage communications and data placement. . .
] Finally, these systems all suffer from inadequate
] communication bandwidth."
]
] Sweeping generalizations. People have been developing
] *specifically* with VMS clusters in mind for years. Not
] rewriting, targetting.
And does that deny that a cluster is a 'truly horrible programming model'?
Or that programs have to be carefully tuned? (If people are
writing specifically for VMS, and not for a general platform,
doesn't that support the specific-tuning theory?)
] Addressing communications and data placement, this isn't true for
] VMS clusters either. They load balance quite nicely.
I'll take your word for it, but I remember my brother telling me
how he was always trying to find the elusive 'fast node'. Maybe
it's changed since then.
] The last point was a very legimate criticism and still
] holds true today. However, Galaxies plus the very
] high-speed cross-bar in Wildfire will break the back of
] the communication bandwidth issue (i.e. Lock mastering
] overhead). That will send I/O bandwidth much higher than
] anything on the horizon, which in turn translates into
] much higher RDBMS numbers.
So you've got 1 or 2 out of 3. Still one more to shoot down.
] Rob
]
] (1) "they" refers to: scalable parallel, massively parallel, or cluster
] computers.
--
thur Mail Address: LordA...@vt.edu or jmax...@vt.edu
n r
a JAMax "When a true genius appears in the world, you may know him
h o w by this sign, that the dunces are all in confederacy
tan lle against him." --Jonathan Swift
>dleb...@mindspring.com (David LeBlanc) writes:
>
>[Discussion of the "problem" of 32 bit UNIX time turning negative in 2038]
>
>>OTOH, I'll be surprised if _anything_ we're
>>running today will still be useful in 2038, except as a curiosity in a
>>museum. We'll probably look upon today's P6-200, 128MB RAM, 4GB HD
>>machines as being as woefully underpowered as an 8088 640k, 32MB HD
>>box is now.
>
>You are not thinking far enough. The IBM PC was introduced only 16 years
>ago, rather than 41. And PCs with 640k (an unbelievable amount of memory)
>and (huge!) 32M harddisks were probably not commonly available before
>late '82 or early '83, i.e. less than 15 years ago.
>
>>If current trends continue (big if), we'll see machines that have on
>>the order of 1000 times more CPU power, and have terabytes of disk in
>>desktops along with gigs of RAM.
>
>All of these assum a factor of 1000. That's way too little --- speed,
>memory and harddisks double every 18 months to two years, and have been
>doing so for a long long time. Let's say it doubles every two years ---
>then we still have 20 doublings before 2038, meaning a factor of about
>a million (and looking back to 1956, this sounds like a reasonable
>number, give or take an order of magnitude). I.e. terabytes of RAM,
>and exabytes (I think that's the next one after tera :-) of "disk" (probably
>without moving parts, though), and the equivalent of close to 1 quadrillion
>instructions per second on fast central units. Fortunately, I will be
>retired by then, no need to come up with something to do on those monsters...
..and what's the bet that there are still COBOL programs running on
them?
John Wiltshire
>In article <33479472...@news.uq.edu.au>, j...@qits.net.au.nospam (John Wiltshire) writes:
>|> On 5 Apr 1997 19:32:05 GMT, Jonathan A. Maxwell
>|> <jmax...@cslab.vt.edu> wrote in comp.os.ms-windows.nt.advocacy:
>|>
>|> >In comp.os.ms-windows.nt.advocacy John Wiltshire <j...@qits.net.au.nospam> wrote:
>|> >] <jmax...@cslab.vt.edu> wrote in comp.os.ms-windows.nt.advocacy:
>|> >] >Rob Young <you...@eisner.decus.org> wrote:
>|> >] >] pet...@netcom.com (Loren Petrich) writes:
>|> >] >] >
>|> >] >] match. Digital is calling it "Galaxies Software
>|> >] >] Architecture". It will allow their next generation 21264
>|> >] >] based Wildfire (32 CPU) server to contain 8 VMS nodes of
>|> >] >] 4 CPUs each. 8 copies of VMS running inside that machine
>|> >] >] clustered over a high-speed crossbar.
>|> >] >
>|> >] >How will this compare to the Tera MTA, running a microkernel
>|> >] >based Unix? http://204.118.137.100/SystemCharacteristics.html
>|> >] >
>|> >] >This computer will also have 32 CPUs .. or 256 .. running
>|> >] >*one* copy of Unix with *one* memory as opposed to 8 copies of
My mistake - I meant a cluster with x total processors and y total RAM
would be slower than a single machine with the same statistics.
John Wiltshire
>>I.e. terabytes of RAM,
>>and exabytes (I think that's the next one after tera :-) of "disk" (probably
>>without moving parts, though), and the equivalent of close to 1 quadrillion
>>instructions per second on fast central units. Fortunately, I will be
>>retired by then, no need to come up with something to do on those monsters...
>..and what's the bet that there are still COBOL programs running on
>them?
I will bet you a magnum bottle of whatever passes as Moet & Chandon in
2038 that there won't be a single machine produced after 2028 still
running COBOL code on the day the 32 bit UNIX time would roll over.
Are you game?
>dleb...@mindspring.com (David LeBlanc) writes:
>[Discussion of the "problem" of 32 bit UNIX time turning negative in 2038]
>>OTOH, I'll be surprised if _anything_ we're
>>running today will still be useful in 2038, except as a curiosity in a
>>museum. We'll probably look upon today's P6-200, 128MB RAM, 4GB HD
>>machines as being as woefully underpowered as an 8088 640k, 32MB HD
>>box is now.
>You are not thinking far enough. The IBM PC was introduced only 16 years
>ago, rather than 41. And PCs with 640k (an unbelievable amount of memory)
>and (huge!) 32M harddisks were probably not commonly available before
>late '82 or early '83, i.e. less than 15 years ago.
There were a couple of implicit assumptions I made. I think the rate
of change will slow at some point.
>>If current trends continue (big if), we'll see machines that have on
>>the order of 1000 times more CPU power, and have terabytes of disk in
>>desktops along with gigs of RAM.
>All of these assum a factor of 1000. That's way too little --- speed,
>memory and harddisks double every 18 months to two years, and have been
>doing so for a long long time.
I'm not sure I want to assume it will keep doing so for the next 40.
>Let's say it doubles every two years ---
>then we still have 20 doublings before 2038, meaning a factor of about
>a million (and looking back to 1956, this sounds like a reasonable
>number, give or take an order of magnitude). I.e. terabytes of RAM,
>and exabytes (I think that's the next one after tera :-) of "disk" (probably
>without moving parts, though), and the equivalent of close to 1 quadrillion
>instructions per second on fast central units. Fortunately, I will be
>retired by then, no need to come up with something to do on those monsters...
I was figuring on 10 doublings, which amounts to about a factor of
1000 - figured it was a more conservative guess. Still fun to think
about. And yes, I'll almost certainly be retired by that point - I'd
turn 78 that year. OTOH, many of my ancestors lived into their 90s -
maybe I'll still have enough brain cells left running at that point to
be playing with computers.
Not with the current technology/approach. But maybe with 3d
circuits and nanoscopic quantum logic gates ;)
--JAM
> ..and what's the bet that there are still COBOL programs running on
> them?
Our town office (Greenbush, Maine USA) has a property tax package written
in COBOL on a 486. If I find a free package to do the same thing, I
will put them on a Linux system.
Paul Wade - Greenbush Technologies Corporation
http://www.greenbush.com/cds.html Linux CD's sent worldwide
http://www.wtop.com/ Now mirroring Linux Documentation Project
> DOS is CP/M kernel with some Unix compatibility slapped on:
> Unix-like file API, pipes (broken by design, but still pipes :)
subdirectories - (wrongslash)DOS(wrongslash)UTIL
device drivers with ioctl that must be rewritten for every win version
that comes along (actually replaced is more like it)
and so much more!
NT has had only two major releases - 3.0 and 4.0. There was no 1.0
or 2.0 - Microsoft just wanted people to think that the first release
was actually the third, so they'd believe it to be more reliable.
--
John Bayko (Tau).
ba...@cs.uregina.ca
http://www.cs.uregina.ca/~bayko
So now any hope that bgNT will have features beginning to resemble vmsclusters
is probably out the window. Everybody throw your perfectly good clustering
technology out the window in the mad dash to run bill gatesware! Don't forget,
bill gates is NOT holding back the industry, oh NOOO!.
Tom O'Toole - ecf_...@jhuvms.hcf.jhu.edu - tom.o...@jhu.edu
JHUVMS system programmer - http://jhuvms.hcf.jhu.edu/~ecf_stbo/
This message has been brought to you by bill gates, inventor of the internet
'The Internet'... is not a valid Win32 application, bill. Boycott bg shoveware!
Not quite. 3.1, 3.5, 3.51, and 4.0 were all "major" releases
(3.51 fixed 3.5, but also added stuff like the ability
to run Windows 95 apps).
>There was no 1.0
>or 2.0 - Microsoft just wanted people to think that the first release
>was actually the third, so they'd believe it to be more reliable.
Doubtlessly part of it. I also seem to recall a rumor at
the time that MS had some deal with Novell that
encouraged them to keep the version number of "Windows"
at 3.1 to avoid renegotiating license fees (just a rumor
based on my hazy memory).
Kris
So, if all the app developers are just using this higher level bg32 interface,
it should also be quite possible to graft bg32 onto the top of Linux/X, so
linux support for bg32 apps would only be a recompile away... If the main
true (not related to 'the salesrep gave the manager a blowjob' or other
stupid market-driven political-type selling points) selling point of bgNT is
the huge number of bg32 applications and that it's cheap, well, linux is far
cheaper AND as far as I know it runs on more hardware! The linux native
interface is also VERY well documented (heh!).
>Open Systems Resources blew the lid off Microsoft's failure to
>document real NT system calls in the Summer '96 issue of "NT Insider".[*]
>It is odd to see it publicly admitted, though, because of the rather
>bizarre implications to non-Microsoft developers.
Contrary to that, VMS has always had an extensively documented native interface,
documented via internals books, driver and I/O manuals and
of course source listings (which used to come with the OS until 4.4, and are
still relatively cheap). This seems just typical of the black-box mentality:
just trust us, you are beholden to us, for we art bill gates Inc.
>Win32 calls are the only publicly documented interface to the NT kernel,
>yet they go through a potentially expensive[**] extra layer of indirection
>to get there. Also, there are some things you can *only* do through an
>NT system call: cancel an outstanding asych I/O, request, for example.
heh!
>In article <5ibeh5$3...@teal.csn.net>, bed...@csn.net (Bruce Ediger) writes...
>>dleb...@mindspring.com (David LeBlanc) wrote:
>>>Many people don't know that Win32 isn't the native NT system calls -
>>>the native calls are all NtXXXX() - the Win32 subsystem is just that -
>>>a DLL that maps calls to the real system calls. So it would be quite
>>>possible to graft HP-UX (or anything else) onto the top of NT.
>So, if all the app developers are just using this higher level bg32 interface,
>it should also be quite possible to graft bg32 onto the top of Linux/X, so
>linux support for bg32 apps would only be a recompile away... If the main
>true (not related to 'the salesrep gave the manager a blowjob' or other
>stupid market driven political type selling points) selling point of bgNT are
>the huge amount of bg32 applications and that it's cheap, well, linux is far
>cheaper AND as far as I know it runs on more hardware! The linux native
>interface is also VERY well documented (heh!).
Man, get off the bg[whatever] stuff - you're obsessed with Gates even
more than the Gates fans. I think he's infiltrated your head.
Yes, it would be theoretically possible - you could just reimplement
all the NT Win32 calls under Linux. Only about 500 or so of them.
Let me know when you are done, and then I'll swap you a Linux on NT
subsystem 8-)
Ok, ok. Sorry.
I count NT 3.1, NT 3.5, NT 3.51, and NT 4.0.
That's four major releases (not counting service packs). I guess it's
debatable whether 3.51 is a major release, but I consider it so, as
service packs don't increment the version number. (I consider a .5
increment a major release.)
Sorry if I wasn't being clear.
Darwin Ouyang
Though not high in memory overhead, the AS/400 is high in CPU cycle
overhead, since everything is defined in object talk and messages.
These object messages are independent of the CPU they run on, and
application migration from CISC AS/400s to RISC is remarkably easy. Any AS/400
app will run on CISC or RISC at native speed, without
emulation and without needing recompiled binaries for each.
>
>>What then is Win32K or Win32 kernel calls on NT? I saw this with
>>regards to an issue about NT security violation and another in a diagram
>>about NT's architecture.
>
>Theoretically, one could use a buffer overflow in certain video
>subsystem calls to break security because anything in kernel mode is
>considered trusted code. Practically, it would be _very_ difficult,
>and you'd be most likely to just crash the machine. I think that's
>what you're referring to.
That may not be what I was referring to. It's a utility called NTKill,
which may or may not be the exact name, and what it does is probe and
bombard every Win32k call with null or overloaded values, and it crashed
on a series of them. Or perhaps it is the same thing, since Win32k may
include video calls.
>
>BTW, even many of the kernel mode calls are just wrappers - just about
>any of the HAL calls are wrappers (since it encapsulates hardware
>differences). A little overhead, but it also means I can write a
>driver on Intel, recompile and run it on an Alpha. I can also write a
>single driver that runs properly on all sorts of odd Intel
>architectures.
>
Such an approach does make it easier to go cross-platform, and I
wonder why Microsoft has not made a JavaVM that translates Java calls
into NT messages directly. It may be interesting to translate Java VM
calls directly into Mach messages as well.
Rgds,
Chris
>
>David LeBlanc |Why would you want to have your desktop user,
>dleb...@mindspring.com |your mere mortals, messing around with a 32-bit
> |minicomputer-class computing environment?
> |Scott McNealy
"Devant le comportement irrationnel de sa machine, j'ai compris que se
poser en dfenseur de Windows releve de la plus profonde bassesse. J'ai
honte" --- Eric Bernatchez, "La Presse" newspaper, "Cyberpresse"
column, March 22, 1997, Montreal, Canada.
"Confronted with his machines irrational behaviour, it dawned upon me
that taking the position of Windows advocate is of the lowest possible
ethics. I am ashamed".
***cro...@kuentos.guam.net***
>So now any hope that bgNT will have features beginning to resemble vmsclusters
>is probably out the window. Everybody throw your perfectly good clustering
>technology out the window in the mad dash to run bill gatesware! Don't forget,
>bill gates is NOT holding back the industry, oh NOOO!.
Don't worry, what Digital does for NT and what Microsoft does are two
totally different things. Just ask yourself the question, who's been
doing this for 10 years or more. Digital does not just offer what bg
and co. offer, witness Visual Fortran. www.digital.com/fortran
Dennis
>In <334ced05....@news.mindspring.com>, dleb...@mindspring.com (David LeBlanc) writes:
>Though not high in memory overhead, the AS/400 is high in CPU-cycle
>overhead, since everything is defined in terms of objects and messages.
>These object messages are independent of the CPU the system runs on, so
>application migration from the AS/400's CISC processors to RISC is
>remarkably easy. Any AS/400 app will run on CISC or RISC at native
>speed, without emulation and without needing separately compiled binaries.
That doesn't surprise me.
>>Theoretically, one could use a buffer overflow in certain video
>>subsystem calls to break security because anything in kernel mode is
>>considered trusted code. Practically, it would be _very_ difficult,
>>and you'd be most likely to just crash the machine. I think that's
>>what you're referring to.
>That may not be what I was referring to. It's a utility called NTKill -
>which may or may not be its exact name - and what it does is probe and
>bombard every Win32k call with null values or overlong arguments, and it
>crashed NT on a series of them. Or perhaps it is the same thing, since
>Win32k includes the video calls.
That's how the buffer overflows were discovered. The guys who found
them work at OSR. The bad calls were fixed in SP2. The way some of
these calls failed indicated buffer overflows.
>>BTW, even many of the kernel mode calls are just wrappers - just about
>>any of the HAL calls are wrappers (since it encapsulates hardware
>>differences). A little overhead, but it also means I can write a
>>driver on Intel, recompile and run it on an Alpha. I can also write a
>>single driver that runs properly on all sorts of odd Intel
>>architectures.
>Such an approach does make it easier to go cross-platform, and I
>wonder why Microsoft has not made a JavaVM that translates Java calls
>into NT messages directly. It might also be interesting to translate
>Java VM calls directly into Mach messages.
For the most part, they want to avoid the low-level calls as much as
the rest of us. That way, when the kernel developers change
something, you don't break things. If you stick to the documented
interfaces, you don't break across service packs.
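To make the HAL point concrete, here's the flavor of it - a fragment, not a
complete loadable driver, and the device name and port address are made up:

    /* Fragment of a hypothetical NT driver (not a complete one).
     * READ_PORT_UCHAR / WRITE_PORT_UCHAR are HAL wrappers: on x86 they
     * turn into in/out instructions, on Alpha into the appropriate
     * memory-mapped accesses, but this source recompiles unchanged.  */
    #include <ntddk.h>

    #define MYDEV_PORT ((PUCHAR)0x3F8)      /* made-up port address */

    UCHAR MyDevReadStatus(void)
    {
        return READ_PORT_UCHAR(MYDEV_PORT);
    }

    VOID MyDevWriteCommand(UCHAR Cmd)
    {
        WRITE_PORT_UCHAR(MYDEV_PORT, Cmd);
    }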
>j...@qits.net.au.nospam (John Wiltshire) writes:
>
>>>I.e. terabytes of RAM,
>>>and exabytes (I think that's the next one after tera :-) of "disk" (probably
>>>without moving parts, though), and the equivalent of close to 1 quadrillion
>>>instructions per second on fast central units. Fortunately, I will be
>>>retired by then, no need to come up with something to do on those monsters...
>
>>..and what's the bet that there are still COBOL programs running on
>>them?
>
>I will bet you a magnum bottle of whatever passes as Moet & Chandon in
>2038 that there won't be a single machine produced after 2028 still
>running COBOL code on the day the 32 bit UNIX time would roll over.
>
>Are you game?
Sure - though I may have to write a COBOL program just to prove my
point. ;-)
Seriously, COBOL has been around for about 30 years now. I think it
will be in a vast decline over the next 30 but somewhere out there
someone will have a need for that COBOL business logic they wrote 50
years ago...
Have to find you in 2038 though.
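(For the record, here's roughly when we'd have to settle up - a quick check,
assuming time_t is still a signed 32-bit count of seconds since 1970:)

    /* When does a signed 32-bit time_t roll over?  2^31 - 1 seconds
     * after 1970-01-01 00:00:00 UTC.                                 */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t doom = 2147483647;           /* 0x7FFFFFFF seconds */
        printf("32-bit time_t rolls over just after %s",
               asctime(gmtime(&doom)));
        /* prints: Tue Jan 19 03:14:07 2038 */
        return 0;
    }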
John Wiltshire
Huh? Are you entirely all right today, Dave?
You would not need to reimplement the Win32 calls, just the NT*()
calls. And since these are no doubt designed for efficiency and
use rather than selling a proprietary format, it should not be too
difficult to map these onto Linux system calls. Plus, these are
all system-type calls anyhow (or should be) so that makes it even
easier.
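In the spirit of a sketch (my own toy; since the real Nt*() prototypes are
undocumented, take the NtClose shape here as an educated guess), the
interesting bit is mostly translating status conventions:

    /* Toy sketch of mapping a native NT-style call onto a Linux system
     * call.  The MyNtClose prototype is my guess at the shape of the
     * real (undocumented) NtClose; the point is translating NT status
     * codes to/from the Unix -1/errno convention.                    */
    #include <unistd.h>

    typedef unsigned long NTSTATUS;
    typedef int HANDLE;                       /* pretend: handle == fd */

    #define STATUS_SUCCESS        ((NTSTATUS)0x00000000)
    #define STATUS_INVALID_HANDLE ((NTSTATUS)0xC0000008)

    static NTSTATUS MyNtClose(HANDLE Handle)
    {
        if (close(Handle) == 0)
            return STATUS_SUCCESS;
        return STATUS_INVALID_HANDLE;         /* close() failed (EBADF etc.) */
    }

    int main(void)
    {
        return MyNtClose(0) == STATUS_SUCCESS ? 0 : 1;   /* closes stdin */
    }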
] David LeBlanc |Why would you want to have your desktop user,
--
thur Mail Address: LordA...@vt.edu or jmax...@vt.edu
n r
a JAMax "Though it be long, the work is complete and finished
h o w in my mind. I take out of the bag of my memory what
tan lle has previously been collected into it." --Mozart
>ecf_...@jhuvms.hcf.jhu.edu (Like a tea tray in the sky...) wrote:
>
>>So now any hope that bgNT will have features beginning to resemble vmsclusters
>>is probably out the window. Everybody throw your perfectly good clustering
>>technology out the window in the mad dash to run bill gatesware! Don't forget,
>>bill gates is NOT holding back the industry, oh NOOO!.
>
>Don't worry, what Digital does for NT and what Microsoft does are two
>totally different things. Just ask yourself the question, who's been
>doing this for 10 years or more. Digital does not just offer what bg
>and co. offer, witness Visual Fortran. www.digital.com/fortran
Microsoft sold Powerstation Fortran to Digital as far as I know - or
they did some deal with them to license the Visual Studio 97 GUI.
John Wiltshire
Excuuuuse me??? Do you buy their 4.0 shit???
This is the 2nd major revision.
3.1 == 1.0
3.5 == 1.1
3.51 == 1.2 (from what NT advocates said, it's more than 1.11)
4.0 == 2.0
--
Illya Vaes (iv...@hr.ns.nl) Not speaking for anyone but myself
Holland Railconsult BV, Railtraffic Systems, Control Systems
Postbus 2855, 3500 GW Utrecht
Tel +31.30.2358586, Fax 2357202 "Do...or do not, there is no try" - Yoda
>>>What then is Win32K or Win32 kernel calls on NT? I saw this with
>>>regards to an issue about NT security violation and another in a diagram
>>>about NT's architecture.
>>Theoretically, one could use a buffer overflow in certain video
>>subsystem calls to break security because anything in kernel mode is
>>considered trusted code. Practically, it would be _very_ difficult,
>>and you'd be most likely to just crash the machine. I think that's
>>what you're referring to.
>That may not be what I was referring to. It's a utility called NTKill -
>which may or may not be its exact name - and what it does is probe and
>bombard every Win32k call with null values or overlong arguments, and it
>crashed NT on a series of them. Or perhaps it is the same thing, since
>Win32k includes the video calls.
This utility did in fact kill NT, because it sent random values into Win32
API calls. When parts of the Win32 subsystem were moved to kernel level,
parameter validation was not correctly implemented on a few API calls.
This program discovered the flaws, which were promptly fixed by MS. (A
Good Thing(tm).) NTCRASH does not work on NT4.0 post-SP2.
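The validation itself is nothing exotic: kernel-mode code probes any
user-supplied pointer under a structured exception handler before trusting
it. Roughly this pattern (a sketch of mine, not actual win32k source):

    /* Sketch of the parameter-validation pattern NT kernel code is
     * supposed to use: probe the user buffer under SEH so a bad
     * pointer becomes an error status instead of a kernel crash.
     * Not actual win32k source.                                     */
    #include <ntddk.h>

    NTSTATUS MyCopyFromUser(PVOID Dst, PVOID UserSrc, SIZE_T Len)
    {
        NTSTATUS Status = STATUS_SUCCESS;
        __try {
            ProbeForRead(UserSrc, Len, 1);    /* 1-byte alignment, i.e. none */
            RtlCopyMemory(Dst, UserSrc, Len);
        } __except (EXCEPTION_EXECUTE_HANDLER) {
            Status = GetExceptionCode();      /* e.g. STATUS_ACCESS_VIOLATION */
        }
        return Status;
    }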
>>BTW, even many of the kernel mode calls are just wrappers - just about
>>any of the HAL calls are wrappers (since it encapsulates hardware
>>differences). A little overhead, but it also means I can write a
>>driver on Intel, recompile and run it on an Alpha. I can also write a
>>single driver that runs properly on all sorts of odd Intel
>>architectures.
>Such an approach does make it easier to go cross-platform, and
I think a similar driver model is in the works for OS/2. (OS/2 PPC already
had a similar system.)
>I wonder why Microsoft has not made a JavaVM that translates Java calls
>into NT messages directly. It may be interesting to translate Java VM
>calls directly into Mach messages as well.
Asymetrix has a Java VM that compiles directly to native code. I tried it
out, and on CaffeineMark 2.5 against Netscape 4.0 PR3 it was marginally faster
in some tests and marginally slower in others. (BTW, Netscape 4.0 PR3 has
the fastest Java VM I've seen on Windows NT: about 1650 on CaffeineMark 2.5
on a P100. MSIE 3.02 gets about 1450.)
Darwin Ouyang
Yes, and I can "lib/list blah.stb" on VMS to get a list of entry points, but the
point is, the native interfaces are undocumented, which is unacceptable crap
when you compare this situation to the history of VMS and the major commercial
unices. The absurdity of the situation is thrown into sharp relief when you
have NT/OLE bigot snakeoil salesmen like Datamation's David E. Y. Sarna calling
bgNT an 'OPEN SYSTEM'. If it weren't so funny it would be sad - OK, it's sad too.
I think it's about 3 times that many, but the point is that there
are a _lot_ of Win32 calls, and Microsoft is going to keep adding
to them.
"Unauthorized Windows 95(tm)" by Andrew Schulman, IDG Books, 1994,
ISBN 1-56884-305-4, although there is apparently an updated version.
pg 20: Similarly, a Civil Investigative Demand I received from the DOJ
requested "All correspondence, including electronic mail messages,
to and from Microsoft Corporation... that discusses or relates
to competition in the development or sale of personal computer
operating systems or graphical user interfaces; the compatibility
or incompatibility of any Microsoft product or any non-Microsoft
product; or the disclosure or non-disclosure of information relating
to software interfaces." I was able to supply them with some
fascinating email from Microsoft VP Brad Silverberg. My favorite
is Brad's explanation from October 1993 of why he must keep on
expanding the Windows API: "Once Windows is frozen and no longer
moving forward, it can easily be cloned and thus reduced to a
commodity. Microsoft doesn't want to be in the BIOS business."
> Yes, it would be theoretically possible - you could just reimplement
> all the NT Win32 calls under Linux. Only about 500 or so of them.
> Let me know when you are done, and then I'll swap you a Linux on NT
> subsystem 8-)
I was under the impression that that is exactly how WINE works, although
I think they're still busy doing Win 3.11 calls; they'd definitely done
quite a bit of Win32 six months ago when I last saw it. WINE consists of a
program loader to load those weirdo Windows .EXE files and a library which
converts Win calls to X calls (etc.).
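i.e. conceptually each entry point is a little stub along these lines (my own
toy illustration, not actual WINE source - real WINE puts up an X11 dialog
rather than writing to stderr, and MyMessageBoxA is a made-up name):

    /* Toy illustration of the WINE idea: the app calls what it thinks
     * is the Win32 MessageBoxA, and the library turns it into
     * something native.  This toy just writes to stderr.             */
    #include <stdio.h>

    #define MB_OK 0x0000
    #define IDOK  1

    typedef void *HWND;

    static int MyMessageBoxA(HWND owner, const char *text,
                             const char *caption, unsigned int type)
    {
        (void)owner; (void)type;
        fprintf(stderr, "[%s] %s\n", caption ? caption : "",
                text ? text : "");
        return IDOK;                 /* pretend the user clicked OK */
    }

    int main(void)
    {
        return MyMessageBoxA(NULL, "Hello from a pretend Win32 app",
                             "wine-ish", MB_OK) == IDOK ? 0 : 1;
    }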
--
:sb)
>I will bet you a magnum bottle of whatever passes as Moet & Chandon in
>2038 that there won't be a single machine produced after 2028 still
>running COBOL code on the day the 32 bit UNIX time would roll over.
>
>Are you game?
Let me say this, and let us make no mistake about it:
I just defined a way to let our software handle DTGs (Date/Time Groups) up
to the year 2147. (Because YYYYMMDDHH fits into 32 bits until the year 2147.)
I thought we could get away with that. By that time (i.e., 150 years - that
is, 30 five-year plans - from now), we will all be using 64-bit platforms (like
the ones being offered today - should I mention Digital Linux?). We'll all
use 64-bit INTEGERs and this encoding will extend to the end of time.
Of course, this still needs a SMOP (small matter of programming :-)
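(In code the packing would look something like this - my own sketch with
made-up names; 2147123123, i.e. 2147-12-31 23h, still squeaks in under the
signed 32-bit ceiling of 2147483647:)

    /* Sketch of packing a Date/Time Group as YYYYMMDDHH in 32 bits.
     * The largest legal value, 2147123123 (2147-12-31, 23h), is still
     * below the signed 32-bit maximum of 2147483647.                 */
    #include <stdio.h>

    long pack_dtg(int y, int mo, int d, int h)
    {
        return (long)y * 1000000L + mo * 10000L + d * 100L + h;
    }

    void unpack_dtg(long dtg, int *y, int *mo, int *d, int *h)
    {
        *h  = (int)(dtg % 100);  dtg /= 100;
        *d  = (int)(dtg % 100);  dtg /= 100;
        *mo = (int)(dtg % 100);  dtg /= 100;
        *y  = (int)dtg;
    }

    int main(void)
    {
        int y, mo, d, h;
        long dtg = pack_dtg(2147, 12, 31, 23);
        unpack_dtg(dtg, &y, &mo, &d, &h);
        printf("%ld -> %04d-%02d-%02d %02dh\n", dtg, y, mo, d, h);
        return 0;
    }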
--
Toon Moene (mailto:to...@moene.indiv.nluug.nl)
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
Phone: +31 346 214290; Fax: +31 346 214286
g77 Support: mailto:for...@gnu.ai.mit.edu; NWP: http://www.knmi.nl/hirlam
> Not with the current technology/approach. But maybe with 3d
> circuits and nanoscopic quantum logic gates ;)
And Tunneling ;)
>> Yep, though I'd think Nt*() would be fairly stable now after four major
>> releases.
>Excuuuuse me??? Do you buy their 4.0 shit???
>This is the 2nd major revision.
>3.1 == 1.0
>3.5 == 1.1
>3.51 == 1.2 (from what NT advocates said, it's more than 1.11)
>4.0 == 2.0
As I said, four major *releases*. (release != revision. Revision implies
change. Release merely means availability. )
(BTW I consider OS/2 2.1 a major release, just as I consider Windows 3.1 a
major release. Now 3.51 is debatable I guess, but I counted that too -
it added the ability to run some Win95 apps.)
Darwin Ouyang
That's part of my alt.fan.bill-gates style, my gadfly persona; you should know
that by now.
>Yes, it would be theoretically possible - you could just reimplement
>all the NT Win32 calls under Linux. Only about 500 or so of them.
>Let me know when you are done, and then I'll swap you a Linux on NT
>subsystem 8-)
sooo... what's the difference between that and putting unix on top of bgnt?
Well frankly, it makes me sick that dec has gotten into bed with bill gates
and is using their customer base to subsidize the promulgation of bgNT. But
whatever even dec does for bill gates, it's clear it ain't going to come
anywhere close to a real vmscluster for a long time, given 'wolfpack' and
given the new bill gates deal with hp. So trade-rag-reading, meeting-going
morons who migrate from vms to bgNT will be throwing away a lot, but I'm
sure they will come up with some way to cover it up.
>dleb...@mindspring.com (David LeBlanc) wrote:
>>Yes, it would be theoretically possible - you could just reimplement
>>all the NT Win32 calls under Linux. Only about 500 or so of them.
>I think it's about 3 times that many, but the point is that there
>are a _lot_ of Win32 calls, and Microsoft is going to keep adding
>to them.
A good thing too. It is a sign of a growing, viable OS.
[snip quote I've seen at least 20 times]
David LeBlanc |Why would you want to have your desktop user,
I don't know what the jobs look like nowadays, but about a year ago I was
looking through computer-related job ads in the St. Paul, Minnesota area.
About half of them were for COBOL programmers, half for someone
with experience on the AS/400, and there was considerable overlap.
RPG II was another biggie.
I did a little COBOL in high school and didn't like it,
but I think it will be around for a long time.
--
"Perhaps not eating people is the first step to making friends."
- Omnipitus