I would hate this asshole, and hate working for him. In my ex-
perience, guys like this who are overbearing, hard-driving, and
obsessive invariably self-destruct, and fail.
A smooth and steady hand is a MUCH better long term influence on
the success of a project, in my opinion.
However, Cutler delivered NT, a remarkable advance in OS development.
You can't take that away from him or his team.
It's a curious dichotomy. Anybody reading this group work for this guy?
Ben Rainfeather
A lot of people don't like him at all, most especially the UNIX crowd.
He's been quite effective, though -- NT is his third OS in wide
distribution.
jim frost
ji...@world.std.com
--
"It's 1183 and we're all barbarians." - _The Lion in Winter_
I know about NT and VMS; what is the third one?
Rick Martin | "Live large and prosper."
Rick....@marcam.com | -- Fred Hickman, CNN
-------------------------------+--------------------------------------
All opinions expressed here are mine alone.
>Ben.Rainfeather writes:
>>I would hate this asshole, and hate working for him. In my ex-
>>perience, guys like this who are overbearing, hard-driving, and
>>obsessive invariably self-destruct, and fail.
>A lot of people don't like him at all, most especially the UNIX crowd.
>He's been quite effective, though -- NT is his third OS in wide
>distribution.
Keep in mind that you (at least Ben) have never met Dave, and are
presumably forming your opinion based on Zachary's book. Zachary
would have had a very boring book if his characters weren't given
three dimensions, even if that meant stretching the truth in some
cases.
The couple of times I've talked in person to Cutler, he's come across
as a nice guy.
Later,
Heath
RSX-11
Mike
I have not read "Showstopper", but I used to work for Dave Cutler when he
headed the DEC West organization. I liked him, and would work for him
again.
Phil Hays
===============================================================================
Can't go back and Ya can't stand still.
What happens if he leaves MS? Does he form his own company and make a
competing OS? DNT? Fortunately for Unix it is open. Anyone can get source
code to the darn thing, and NOBODY can use the OS to control THEIR
marketshare in, say, word processors.
--
These opinions are mine, and do not represent those of SSi or TDK.
Edward Henderson | .. I am come that they might have life,
ed.hen...@tus.ssi1.com | and that they may have it more abundantly.
| John 10:10b
I can see this quickly turning into a flame war, but I'm gonna make one quick
point.
Basically this works both for and against you. Look at how many versions of
unix there are out there. Just on the x86 platform. Just in major
derivations (ie, very little similar kernel code) there are: BSD, Linux, AIX,
and Sys V. There are a ton of offshoots of each of those.
At least with NT you know what you are talking about when you mention the
product. I think that MS wants to keep it this way. There are problems
with it, for instance their research license is so bad that very few
universities will sign the NDA to get the sources. (basically MS gets
copyrights for any changes that you make).
From what I understand that is the result of stupid lawyers though.
I run Unix (NetBSD) and Windows NT depending on what I'm doing...
alex
(oh, and i worked for microsoft business systems over the summer, so i might
be biased, but then again i work for mach during the school year, which is a
lot more unixy, and completely open, so i might be biased that way too).
> Fortunately for Unix it is open. Anyone can get source
> code to the darn thing, and NOBODY can use the OS to control THEIR
> marketshare in, say, word processors.
That's because nobody can write a word processor that runs on
all versions of UNIX. 8-)
PJDM
--
Peter Mayne | My statements, not Digital's.
Digital Equipment Corporation |
Canberra, ACT, Australia | "AXP!": Bill the Cat
# In <D04rt...@world.std.com>, ji...@world.std.com (jim frost) writes:
# >A lot of people don't like him at all, most especially the UNIX crowd.
# >He's been quite effective, though -- NT is his third OS in wide
# >distribution.
# I know about NT and VMS; what is the third one?
RSX-11M, for the PDP-11. Dave also did a lot of compiler development at
Digital after VMS was released. But even before RSX, Dave was developing
OS's for DuPont on PDP's.
Dave is the kind of guy who has forgotten more about coding OS's while
taking a dump than most of his staff will ever know, especially the Unix
folks. Unix is a minimal, bare-bones OS - there just isn't that much to
it. The VMS and NT schools of thought are to provide much more to the
buyer and make it easier to use, making for a more demanding development
effort.
An interesting comment from Bill Gates in last week's Infoworld referred to
Dave Cutler as the person hired to develop OS/2 3.0. But since IBM took over
OS/2 and owns the name, it was called NT.
--
far...@access.digex.net
Money for nothing and your chicks for free.
>>>I would hate this asshole, and hate working for him. In my ex-
???
[ditto]
> What happens if he leaves MS? Does he form his own company and make a
>competing OS? DNT? Fortunately for Unix it is open. Anyone can get source
^^^
that will be XOU probably. :-)
>code to the darn thing, and NOBODY can use the OS to control THEIR
>marketshare in, say, word processors.
>--
Greetings, and have a nice sunny day!
*Sijmen Koffeman (k...@tabcom.iaehv.nl.(UUCP)* "..Concrete skull, soft inside.."
: I would hate this asshole, and hate working for him. In my ex-
: perience, guys like this who are overbearing, hard-driving, and
: obsessive invariably self-destruct, and fail.
Well, he'd better hurry up and fail, before he reaches retirement. The
man has one of the most successful technical records in the industry.
RSX-11, VMS, one of the original VAX system architects, producer of
DEC's best compiler products... all before taking on NT.
He may be difficult to work with, but I don't recommend getting between
him and a successful product shipment. You'd be squished.
-chuck
ps: If there is a God, and He has a computer, Dave Cutler probably wrote
the OS.
pps: Even so, I'm still a dyed in the wool Unix bigot...
Don't forget VAX ELN (RSX-11 done right on a VAX)....
I've met Dave Cutler and, if there ever were anyone I'd be willing
to work for, it's Dave. Far better to have a sharp, technically
aggressive manager obsessed with quality than most of the empty
suits you find in management at most companies. That said, I confess
I'm just as happy not to work for him, either. I just plain prefer
to set my own agenda, take my decisions and my risks; it's no accident
I decided to start my own business.
But no question: while I'm sure Dave Cutler is not one to suffer
fools, I doubt that's a negative with any good engineers who might
work for him. As an engineering manager, he'd make my list as
being about as good as it gets.
Regards,
Doug Hamilton KD1UJ hami...@bix.com Ph 508-440-8307 FAX 508-440-8308
Hamilton Laboratories, 21 Shadow Oak Drive, Sudbury, MA 01776-3165, USA
I dunno. Showstopper! was rather awash with paragraphs that started with:
"XXX was a ... when he XXX. He stood XXX and had XXX hair and
an XXXX personality. One time XXX .."
Must have been 20 of them or more. Maybe one for every character he
introduced. The prologue was some of the best of the book, giving a better
picture of the situation than most of the rest did.
I enjoyed it, though. Big Blues was a much better read -- you actually
felt for some of the stars of that book.
--
"IBM's install program is just fine." [and later:] "I will say it again: The
install program is fine. But OS/2 just won't run on some pieces of hardware."
-Steve Withers, IBM employee, explaining the problems that
even faithful OS/2 users are having installing OS/2 3.0,
Ben.Rainfeather wrote:
: After reading "Showstopper", I can't quite come to a conclusion
BTW, Some of the people who worked with him on DEC's version of RSX also
worked with him again on the VMS development, and again on NT development.
I guess he can't be all that bad to work for.
Dan
Indeed. There's more than a little VAXELN philosophy in Windows NT.
--
Jerry Hudgins 101 Rowland Way, Suite 300
Z-Code Software Division Novato, CA 94945-5010 USA
Network Computing Devices, Inc. Voice 415-899-7932, FAX 415-898-8375
Sometimes, the hard-driving types will infect you with their zeal for
the task at hand. When an entire team gets this type of focus and
zeal, amazing things start to happen. After it is all over and done,
you look back and appreciate being part of a team that accomplished the
difficult task.
I have done this in the past, and when a team jells like this, there
is almost no problem that they won't overcome. No problem. How do
you think the military gets its troops to do the amazing things
that they do?
Of course, I'm likening business situations to military ones in
intensity (coming under gunfire would make that completely
different), but there needs to be dedication to the team from each of
the individuals. Or else the team is not one and will not accomplish
anything.
IMHO.
Erik.
--
Erik Ohrnberger, EDS er...@dps.cos.eds.com
800 Tower Drive Troy, MI 48007-7019 #include <std.disclaimer>
Mob rule isn't any prettier merely because the mob calls itself a government.
It ain't charity if you are using someone else's money.
That was the case back in the late 1970's, but if you still believe
that, you haven't been paying attention.
>An interesting comment from Bill Gates in last week's Infoworld referred to
>Dave Cutler as the person hired to develop OS/2 3.0. But since IBM took over
>OS/2 and owns the name, it was called NT.
I thought this was pretty well-known. That's the great thing about
Microkernels though -- if you want a different OS, just write a new
API layer.
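The "different OS on the same microkernel" idea can be sketched in a few lines. This is a toy illustration only: the class and method names below are invented for the example, not NT's actual interfaces.

```python
# Toy sketch of the "personality" idea (invented names, not NT's real
# interfaces): one small kernel exporting generic primitives, with each
# OS API implemented as a thin translation layer on top of it.

class Microkernel:
    """Minimal kernel: knows only about raw objects and handles."""
    def __init__(self):
        self._objects = {}
        self._next_handle = 1

    def create_object(self, kind, data):
        handle = self._next_handle
        self._next_handle += 1
        self._objects[handle] = (kind, data)
        return handle

    def query_object(self, handle):
        return self._objects[handle]

class Win32Personality:
    """One API layer: Win32-flavored calls mapped onto the kernel."""
    def __init__(self, kernel):
        self._kernel = kernel

    def CreateFile(self, name):
        return self._kernel.create_object("file", name)

class PosixPersonality:
    """A second API layer sharing the identical kernel underneath."""
    def __init__(self, kernel):
        self._kernel = kernel

    def open(self, path):
        return self._kernel.create_object("file", path)

# Two different "operating systems", one kernel:
kernel = Microkernel()
h1 = Win32Personality(kernel).CreateFile("C:\\CONFIG.SYS")
h2 = PosixPersonality(kernel).open("/etc/motd")
print(kernel.query_object(h1))
print(kernel.query_object(h2))
```

Swapping in an OS/2-flavored API would just mean writing another such layer; the kernel underneath never changes.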
jim frost
I'd have to agree with that :-). You might notice the radical
difference in product quality between NT and your typical Microsoft
product. NT was big and slow when first released, but it *worked* and
the first revision went a long way towards fixing the "big and slow".
Clearly someone who knew what they were doing was driving the NT
release, and Cutler's been around since before there was a Microsoft.
|> The third one was RSX, which was a major PDP-11 operating system for
|> DEC before VAX's. He also did a compiler or two while he was at DEC.
RSX was a shipping product before Cutler joined DEC. All Dave did on
RSX (according to him) was the tape device drivers.
========================================================================
Dave Rogers Internet : da...@rsd.dl.nec.com
M & R Software, Inc. or : ma...@ix.netcom.com
Plano, Texas AMPRnet : kc5...@dfwgate.ampr.org
AX.25 : kc5iye@kc5iye.#dfw.tx.usa.noam
CIS : 76672,2455
In the absence of leadership, we have decided to follow ourselves.
One thing I was wondering -- if you are changing your OS from, say a
32-bit to a 64-bit OS, can it be done without affecting the microkernel?
In other words, if MS decided to do it for NT (given that all incoming
processors except Intel are 64-bit), would it be just a case of recoding
the API?
--
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Logo on a computer lab out here: Intel Outside!
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>jim frost writes
>;I thought this was pretty well-known. That's the great thing about
>;Microkernels though -- if you want a different OS, just write a new
>;API layer.
>One thing I was wondering -- if you are changing your OS from, say a
>32-bit to a 64-bit OS, can it be done without affecting the microkernel?
>In other words, if MS decided to do it for NT (given that all incoming
>processors except Intel are 64-bit), would it be just a case of recoding
>the API?
That would depend on the architecture and the OS designers, I think.
At a minimum the OS implementors must have implemented context
switching that saves all of the register information when doing 64-bit
operations. This seems pretty likely.
If the chip has a distinction between 32-bit and 64-bit modes then the
OS designers must also recognize a distinction in order for a 64-bit
application to be run in the correct mode. This seems less likely.
To fully support 64-bit applications the OS implementors must also
have implemented a virtual memory manager that understands 64-bit
addresses. At worst they might only allow a subset of the address
space to be used (which is quite common even in 32-bit environments).
That won't be a big limitation unless you really, really need a 64-bit
address space.
If the OS designers make the kernel reasonably aware of the 64-bitness
of the chip then yes, you can probably just change the API. On the
other hand you might choose to do nothing to the API and just let the
applications call into the kernel using 32-bit values.
Que ? Windows NT 3.5 is already running on Alpha which is a 64-bit
processor.
// JTN (j...@jd.se)
---
* Thomas Nimstad * #pragma message("Standard disclaimer in use")
* Juristdata AB, SWEDEN * Professional development for Windows NT
* Phone +46-500 412150 * Fax +46-500 412848
>Suvro writes:
>>jim frost writes
>>;I thought this was pretty well-known. That's the great thing about
>>;Microkernels though -- if you want a different OS, just write a new
>>;API layer.
>>One thing I was wondering -- if you are changing your OS from, say a
>>32-bit to a 64-bit OS, can it be done without affecting the microkernel?
>>In other words, if MS decided to do it for NT (given that all incoming
>>processors except Intel are 64-bit), would it be just a case of recoding
>>the API?
>Que ? Windows NT 3.5 is already running on Alpha which is a 64-bit
>processor.
This is true but slightly misleading. The Alpha is indeed a 64-bit
processor. But NT running on it is a 32-bit system. Except perhaps
somewhere under the covers (maybe in the hardware abstraction layer),
NT doesn't use the 64-bit capabilities of the Alpha. For example,
the virtual address space under NT is still 2**32, not 2**64.
>Suvro writes:
>>jim frost writes
>>;I thought this was pretty well-known. That's the great thing about
>>;Microkernels though -- if you want a different OS, just write a new
>>;API layer.
>>One thing I was wondering -- if you are changing your OS from, say a
>>32-bit to a 64-bit OS, can it be done without affecting the microkernel?
>>In other words, if MS decided to do it for NT (given that all incoming
>>processors except Intel are 64-bit), would it be just a case of recoding
>>the API?
>That would depend on the architecture and the OS designers, I think.
>At a minimum the OS implementors must have implemented context
>switching that saves all of the register information when doing 64-bit
>operations. This seems pretty likely.
>If the chip has a distinction between 32-bit and 64-bit modes then the
>OS designers must also recognize a distinction in order for a 64-bit
>application to be run in the correct mode. This seems less likely.
Windows NT as a 32 bit operating system runs on DEC Alpha (AXP) which
is a 64 bit architecture. There are distinctions between 32 bit and
64 bit modes on the alpha but they are not based on a true MODE
setting. Instead, there are some 32 bit addressing instructions
but once the data is inside of the machine (out of memory) then the
differences between 32 bit and 64 bit disappear.
>To fully support 64-bit applications the OS implementors must also
>have implemented a virtual memory manager that understands 64-bit
>addresses. At worst they might only allow a subset of the address
>space to be used (which is quite common even in 32-bit environments).
>That won't be a big limitation unless you really, really need a 64-bit
>address space.
>If the OS designers make the kernel reasonably aware of the 64-bitness
>of the chip then yes, you can probably just change the API. On the
>other hand you might choose to do nothing to the API and just let the
>applications call into the kernel using 32-bit values.
It would be nice if DEC published some papers on how they did do the
32 bit NT on top of 64 bit alpha...it would be interesting to read
about the problems they did come across and solve.
phil
=============================================================================
p...@esca.com || 206-822-6800 || 206-889-1809 x2023 (voice mail)
=============================================================================
>In article <12-01-1994.37020@virtec>, d...@virtec.com (Dan Sullivan) writes:
>
>|> The third one was RSX, which was a major PDP-11 operating system for
>|> DEC before VAX's. He also did a compiler or two while he was at DEC.
>
>RSX was a shipping product before Cutler joined DEC. All Dave did on
>RSX (according to him) was the tape device drivers.
Really? I wish Cutler personally had written the NT tape device drivers, and
NTBackup. The mediocre tape support is my biggest gripe with NT. Otherwise
I'm very impressed. Reading _Inside Windows NT_ back before NT was
released convinced me that Cutler was pretty sharp, and I'm not disappointed
with the released OS (especially with NT 3.5).
(To be fair, most of the NT tape problems I've had to deal with were with
a Colorado Jumbo 250 floppy-type tape drive, which isn't so great to start with,
but the drive certainly works better under DOS/Windows than NT.)
---
===============================
| Chris Stanley | I used to be apathetic about my .sig
| sta...@email.ncsc.navy.mil | but now I just don't care.
| Speaking only for myself! |
===============================
Yeah, I hear they shoot deserters at Microsoft........
Keep The Faith
Lava
: It would be nice if DEC published some papers on how they did do the
: 32 bit NT on top of 64 bit alpha...it would be interesting to read
: about the problems they did come across and solve.
If not, perhaps Linus Torvalds could tell us what he does for the 64-bit
Linux/ALPHA project?
Jim Paradis, DEC, is doing the 32-bit Linux/ALPHA port.
The articles by Jim Paradis I've been sent by our in-house Linux maven
come from the comp.os.linux.development & comp.os.linux.misc news groups.
--Jerry,
Gerald (Jerry) R. Leslie
Staff Engineer
Dynamic Matrix Control Corporation (my opinions are my own)
P.O. Box 721648 9896 Bissonnet
Houston, Texas 77272 Houston, Texas, 77036
713/272-5065 713/272-5200 (fax)
gle...@isvsrv.enet.dec.com
jle...@dmccorp.com
Hmm... I was working for DEC back when the Alpha chip was under
development, and I remember reading some of their design info to the
effect that the Alpha was designed so that it could, when running a
32-bit operating system, process two 32 bit instructions simultaneously...
load one set of instructions/data into the lower 32 bits of the
registers and another in the upper 32 bits... two separate instruction
pipelines, etc. That's why DEC could claim a peak of 300 MIPS on a 150
megahertz processor.
Was the Alpha actually shipped with this capability? Or was the whole
thing just another of those pesky hallucinogenic experiences? :-)
(Come to think of it, I've got one of the AXP Architecture handbooks
at home, so I COULD actually look this up...)
John
I think you're confusing things a bit; this isn't something that is
only done with 32-bit instructions, it's done with 64-bit instructions
as well. They do it by providing multiple evaluation units that can
work in parallel, a "superscalar" architecture. Superscalar
architectures have been common in RISC designs for a number of years
and virtually all modern RISC chips, and quite a number of today's
CISC chips, are superscalar.
Even under optimal conditions a superscalar design won't multiply
performance by integral factors (eg 300 mips at 150MHz in a dual-issue
design). I remember DEC's claims being more down-to-earth, something
like 200 mips, which is still optimistic but about what you'd expect
for virtually perfect code. Most applications would be doing well to
drive it at about 150 mips.
jim frost
ji...@world.std.com
--
http://www.std.com/homepages/jimf
> In article <1994Dec6.1...@esca.com>, p...@esca.com (Phil Hystad) says:
>>Windows NT as a 32 bit operating system runs on DEC Alpha (AXP) which
>>is a 64 bit architecture. There are distinctions between 32 bit and
>>64 bit modes on the alpha but they are not based on a true MODE
>>setting. Instead, there are some 32 bit addressing instructions
>>but once the data is inside of the machine (out of memory) then the
>>differences between 32 bit and 64 bit disappear.
There are *no* 32 bit addressing instructions. Neither are there
any distinctions between 32 bit and 64 bit modes. The Alpha AXP
architecture has only one mode, and it's 64 bit.
> Hmm... I was working for DEC back when the Alpha chip was under
> development, and I remember reading some of their design info to the
> effect that the Alpha was designed so that it could, when running a
> 32-bit operating system, process two 32 bit instructions simultaneously...
> load one set of instructions/data into the lower 32 bits of the
> registers and another in the upper 32 bits... two separate instruction
> pipelines, etc. That's why DEC could claim a peak of 300 MIPS on a 150
> megahertz processor.
>
> Was the Alpha actually shipped with this capability? Or was the whole
> thing just another of those pesky hallucinogenic experiences? :-)
> (Come to think of it, I've got one of the AXP Architecture handbooks
> at home, so I COULD actually look this up...)
The 21064 is a dual-issue processor, no matter what kind of operating
system (32 or 64 bit) is running on top of it. All instructions are
32 bits long. At no time do any of the 64-bit registers (integer or
floating) get treated as 2x32-bit registers in any way, shape, or form.
Since the processor runs at 150MHz and is dual issue, peak throughput
is 150x2 = 300 peak MIPS.
FYI the 21164 is a quad issue CPU running at 300MHz, which gives
4x300=1200 peak MIPS, or 1.2 BIPS.
Note that the number of instructions that can be issued per cycle is
implementation-specific, not architectural. The handbook mentions that
up to 10 way issue is a reasonable expectation before non-CPU hardware
becomes a bottleneck.
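The peak figures above are just clock rate times issue width; a trivial restatement of that arithmetic:

```python
# Trivial restatement of the arithmetic above: peak MIPS is simply the
# clock rate (in MHz) times the number of instructions issued per cycle.

def peak_mips(clock_mhz, issue_width):
    return clock_mhz * issue_width

print(peak_mips(150, 2))  # 21064: dual issue at 150MHz -> 300 peak MIPS
print(peak_mips(300, 4))  # 21164: quad issue at 300MHz -> 1200 peak MIPS (1.2 BIPS)
```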
Oops. My apologies. Obviously, my own memory dropped a bit or two
there. You're absolutely right. (Shamed me into going back and
digging up my Alpha Architecture Handbook, you did. ;-)
So, given what the handbook says in explanation of multiple-issue
implementations, what would this mean with respect to NT? It looks as
if the multiple-issue aspect of the chip should be transparent to the OS.
Is this correct? If so, does this mean NT apps could have up to 1200
MIPS peak available processing power on a 21164?
Also, if the multi-issue IS transparent to the OS, does it mean that a
single thread could get 1200 MIPS peak?
Quick! Stop me before I jump to another conclusion!!!
John
The idea is that the multiple dispatch should be invisible to whatever
is running on the hardware, modulo the "implementation specific"
variations outlined in the architecture manual. Your application
should simply run faster.
should simply run faster.
In theory applications on an n-issue processor could achieve n-times
the performance of a single-issue processor, but in actuality they
never will. Dual-issue designs rarely show more than a 50%
improvement even when heavily optimized; I would be surprised if the
quad-issue implementation beats 200%.
Why is this so? Because the more instructions you're dispatching
simultaneously the more likely it is that there are dependencies or
branches that will stall the dispatch. I've worked with a lot of RISC
code and have seen how seldom a dual-issue design is fully utilized,
much less quad-issue or greater.
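That stall argument is easy to see with a toy simulation (my own illustration with a made-up dependency probability, not measured data): greedily fill a dual-issue slot pair and watch IPC fall short of 2.0 as dependencies rise.

```python
import random

# Toy simulation: each instruction depends on its predecessor with
# probability dep_prob. A dependent instruction can't issue in the same
# cycle as the one it depends on, so the second slot often goes empty
# and achieved IPC falls short of the issue width.

def simulate_ipc(n_instructions, dep_prob, issue_width=2, seed=0):
    rng = random.Random(seed)
    deps = [rng.random() < dep_prob for _ in range(n_instructions)]
    cycles = 0
    i = 0
    while i < n_instructions:
        issued = 1                    # the first slot always fills
        i += 1
        # fill remaining slots only with independent instructions
        while issued < issue_width and i < n_instructions and not deps[i]:
            issued += 1
            i += 1
        cycles += 1
    return n_instructions / cycles

for p in (0.0, 0.3, 0.5):
    print(f"dep_prob={p:.1f}  IPC={simulate_ipc(100_000, p):.2f}")
```

With no dependencies it reaches the full 2.0; at a 50% dependency rate it lands near 1.5, i.e. roughly the "50% improvement" figure above.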
>Also, if the multi-issue IS transparent to the OS, does it mean that a
>single thread could get 1200 MIPS peak?
Sure. But as I said I think you'd be hard-pressed to get half that.
> So, given what the handbook says in explanation of multiple-issue
> implementations, what would this mean with respect to NT. It looks as
> if the multiple-issue aspect of the chip should be transparent to the OS.
> Is this correct? If so, does this mean NT apps could have up to 1200
> MIPS peak available processing power on a 21164?
>
> Also, if the multi-issue IS transparent to the OS, does it mean that a
> single thread could get 1200 MIPS peak?
Yes, yes, and yes.
> Quick! Stop me before I jump to another conclusion!!!
>
> John
PJDM
> In theory applications on an n-issue processor could achieve n-times
> the performance of a single-issue processor, but in actuality they
> never will. Dual-issue designs rarely show more than a 50%
> improvement even when heavily optimized; I would be surprised if the
> quad-issue implementation beats 200%.
>
> Why is this so? Because the more instructions you're dispatching
> simultaneously the more likely it is that there are dependencies or
> branches that will stall the dispatch. I've worked with a lot of RISC
> code and have seen how seldom a dual-issue design is fully utilized,
> much less quad-issue or greater.
Well, HP/Intel have bet the farm against this argument with their
forthcoming VLIW CPU. If they think they can do it, why not a standard
RISC design?
PJDM
>> In theory applications on an n-issue processor could achieve n-times
>> the performance of a single-issue processor, but in actuality they
>> never will. Dual-issue designs rarely show more than a 50%
>> improvement even when heavily optimized; I would be surprised if the
>> quad-issue implementation beats 200%.
>>
>> Why is this so? Because the more instructions you're dispatching
>> simultaneously the more likely it is that there are dependencies or
>> branches that will stall the dispatch. I've worked with a lot of RISC
>> code and have seen how seldom a dual-issue design is fully utilized,
>> much less quad-issue or greater.
>Well, HP/Intel have bet the farm against this argument with their
>forthcoming VLIW CPU. If they think they can do it, why not a standard
>RISC design?
They think they can make a faster CPU that way, not necessarily one
that can issue four or more instructions per cycle. VLIW can get
improved performance over standard RISC purely by forcing the
instruction stream to fit certain predetermined parameters so that you
can disregard some mechanisms that must exist with a free-format
stream. That'll drop chip complexity and allow improved performance,
no doubt about it, but you won't always be able to fill the whole
instruction word. In fact, since VLIW probably just means two 32-bit
instructions chained together, they're not betting at all. We already
know that you can quite often group two instructions at a time,
although three or more becomes increasingly unlikely.
Somewhere I have articles by IBM detailing how well they managed to do
with POWER and POWER2, the chips used today in the higher-end
RS/6000's. From memory, POWER chips can issue one branch, one
integer, and one floating-point instruction per cycle. In practice
they usually do about 1.2 instructions per cycle (usually overlapping
branch and integer operations). I think POWER2 can do one branch, two
integer, and two floating-point instructions per cycle, but I'm a
little fuzzy on that because I haven't worked directly with those
units. I recall that IBM found they could drive POWER2 at 1.6
instructions per cycle, a clear improvement but well below the
200-300% gain in non-FP performance the chip can deliver under
optimal conditions.
The 1.2 figure is by far the most important: it means that on average
there's a branch for every five integer instructions (a little longer,
actually, because in some cases the branch cannot be done in parallel
with an integer instruction because of a dependency, but for our
purposes the figure is accurate). That means that without branch
prediction you would expect no better than 600% improvement *no matter
how many parallel units you have*. Because those six instructions
(five integer plus the branch) almost always have at least one
dependency and often have two or more you'll be lucky to hit 300% and
that would be very rare.
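The arithmetic in that paragraph can be restated as a quick back-of-envelope check (my own restatement of the figures above, not IBM's numbers):

```python
# ~1.2 instructions/cycle with branch+integer overlap implies a basic
# block of about 6 instructions (5 integer ops + 1 branch): the branch
# overlaps one integer op, so 6 instructions retire in 5 cycles.

integer_ops_per_block = 5
cycles_per_block = 5                                  # branch overlaps one op
instructions_per_block = integer_ops_per_block + 1    # + the branch itself

avg_ipc = instructions_per_block / cycles_per_block
assert abs(avg_ipc - 1.2) < 1e-9

# Without branch prediction, issue can't cross the branch, so even an
# arbitrarily wide machine retires at most one basic block per cycle:
max_speedup = instructions_per_block   # the "600% improvement" ceiling
print(f"avg IPC = {avg_ipc}, ceiling without branch prediction = {max_speedup}x")
```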
Given accurate branch prediction or the ability to follow both branch
directions (both of which are used in modern processor designs) you
can get better fill rates because you're more likely to be able to
find independent instructions, but dependencies become far more common
as the code path you can handle gets longer. The chips also get a lot
more complicated, so there's a limit to just how much parallelization
you can build into them.
# I wish Cutler personally had written the NT tape device drivers, and
# NTBackup. The mediocre tape support is my biggest gripe with NT.
[...]
# (To be fair, most of the NT tape problems I've had to deal with were with
# a Colorado Jumbo 250 floppy-type tape drive, which isn't so great to start
# with, but the drive certainly works better under DOS/Windows than NT.)
My experience is just the opposite. I've had several experiences with the
Jumbo 250 DOS program where it crashes, goes out into space, and is generally
slow as hell. NT Backup is rock-solid, and very fast, but doesn't have the
compression.
From what I can tell, Arcada wrote the NT Backup software, not Microsoft.
--
far...@access.digex.net
Money for nothing and your chicks for free.
>(To be fair, most of the NT tape problems I've had to deal with were with
>a Colorado Jumbo 250 floppy-type tape drive, which isn't so great to start with,
>but the drive certainly works better under DOS/Windows than NT.)
Ditch that slow junk and get a SCSI tape backup - you'll be glad you
did. I *hated* my Jumbo under DOS, and at 120MB/tape vs. >1GB of storage
(and the noise level), the Jumbo just stunk under NT. I'll grant you
that it won't be cheap.... NT Backup and QIC-80 don't seem to be a
good match.
--
David Charles LeBlanc
Georgia Institute of Technology, Atlanta Georgia, 30332
Internet: gt6...@acme.gatech.edu
sta...@phoebus.ncsc.navy.mil (Chris Stanley) wrote:
>In article <1994Dec5.1...@rsd.dl.nec.com>, Dave Rogers writes:
>>In article <12-01-1994.37020@virtec>, d...@virtec.com (Dan Sullivan) writes:
>>|> The third one was RSX, which was a major PDP-11 operating system for
>>|> DEC before VAX's. He also did a compiler or two while he was at DEC.
>>
>>RSX was a shipping product before Cutler joined DEC. All Dave did on
>>RSX (according to him) was the tape device drivers.
>
>Really? I wish Cutler personally had written the NT tape device drivers,
>and NTBackup. The mediocre tape support is my biggest gripe with NT.
>Otherwise I'm very impressed. Reading _Inside Windows NT_ back before NT
>was released convinced me that Cutler was pretty sharp, and I'm not
>disappointed with the released OS (especially with NT 3.5).
"My first operating system project was to build a real-time system
called RSX11M..."
From David Cutler's foreword to the above book.
>(To be fair, most of the NT tape problems I've had to deal with were with
>a Colorado Jumbo 250 floppy-type tape drive, which isn't so great to start
>with, but the drive certainly works better under DOS/Windows than NT.)
To clarify...RSX-11D was shipping several years prior to RSX-11M.
I don't know whether or not Cutler worked much on D, but the lore
I heard from an instructor at DEC school in 1975 was that Cutler
wrote the basic kernel/dispatcher for M over a weekend.
>deleted text<
Regards,
Mike
What do you think about NT clustering a la VAX/VMS?
Will NT ever equal or surpass Unix as a TCP/IP network participant?
What are Microsoft's long-term plans (off the record), not the normal MS bull?
It seems that MS is shortchanging NT in favor of Chicago (jerks, PITA), especially
since many of their apps still are not ported to NT and will not be for a while
it seems. And have you seen much advertising for Windows NT except for the
occasional magazine ad? NO.... but I'll bet you 10 to 1 that when
Chicago comes out you will see a huge advertising blitz... remember when
Win 3.1 came out?
Laurence G. Kahn
Senior Software Engineer
Dynamics Research Corp.
tim> I've been following this thread for the last week or so and I
tim> have a question. If you could ask David Cutler any question
tim> about NT, what would it be? The reason I ask this is that I have
tim> an interview scheduled with him and I would like to ask questions
tim> that are meaningful to NT users. So what do you say? Now is your
tim> chance to ask David Cutler! Please limit your questions to NT
tim> issues only.
Sure!
1) Why was NFS left out of NT?
2) Why was an SMTP agent left out of NT?
3) Why was a telnet daemon left out of NT?
4) Why was disk compression left out of NT?
5) Why was a disk defragger left out of NT?
That's all.
Riri
--
====================
Sarir (Riri) Khamsi
kha...@ll.mit.edu
w:617-981-4011
h:617-861-7440
====================
Tim Daniels
>Sure!
>
>1) Why was NFS left out of NT?
>
>2) Why was an SMTP agent left out of NT?
>
>3) Why was a telnet daemon left out of NT?
>
>4) Why was disk compression left out of NT?
>
>5) Why was a disk defragger left out of NT?
My question to you (from your first three questions) is "why do you seem to
want to make NT into some sort of Unix clone?" NFS and Telnet are both
purely Unix features - why the heck SHOULD they be in NT?
Chris
--
--------------------------------------------------------------------------
| Chris Marriott, Warrington, UK | Author of SkyMap v2 shareware |
| ch...@chrism.demon.co.uk | astronomy program for Windows. |
| For more info, see http://www.winternet.com/~jasc/skymap.html |
| Author member of Association of Shareware Professionals (ASP) |
--------------------------------------------------------------------------
>My question to you (from your first three questions) is "why do you seem to
>want to make NT into some sort of Unix clone?" NFS and Telnet are both
>purely Unix features - why the heck SHOULD they be in NT?
If you want to construct a multi-platform environment, NFS and telnet
are pretty much inevitable. TCP/IP is currently the network protocol
of choice for multi-vendor situations, as well as internet access.
Telnet is a basic part of it, and is in no way Unix-specific. The
issue with NT is primarily whether you want to have remote non-GUI
command sessions. If you do, telnetd is the obvious way to do it.
(rlogind would be the Unix-specific method.)
NFS was invented by Sun, and is certainly spec'ed from a Unix point of
view. However, it is implemented on Unix, PCs, VMS, and IBM
mainframes. A lot of people are less than thrilled by some of its
design, but it's still the most widespread multi-vendor protocol for
remote file access, and one of the few non-proprietary ones.
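For what it's worth, telnet's vendor-neutrality is easy to demonstrate: at bottom it is just a line-oriented exchange over a TCP stream, with nothing Unix-specific about it. A minimal sketch of a telnetd-style command session (modern Python, purely illustrative; the banner and command set are made up, and real telnet option negotiation is skipped):

```python
import socket
import socketserver
import threading

class CommandHandler(socketserver.StreamRequestHandler):
    """Line-oriented command session, telnetd-style: read a line,
    handle a (whitelisted) command, write the result back."""
    def handle(self):
        self.wfile.write(b"ready\r\n")
        for raw in self.rfile:
            line = raw.strip().decode("ascii", "replace")
            if line == "quit":
                break
            elif line.startswith("echo "):
                reply = line[5:]
            else:
                reply = "unknown command"
            self.wfile.write(reply.encode("ascii") + b"\r\n")

def run_session(port):
    """Connect as a client and run one scripted exchange."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        f = s.makefile("rwb")
        banner = f.readline()
        f.write(b"echo hello nt\r\nquit\r\n")
        f.flush()
        reply = f.readline()
        return banner.strip(), reply.strip()

if __name__ == "__main__":
    server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), CommandHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(run_session(port))  # (b'ready', b'hello nt')
    server.shutdown()
```

Any client on any OS that can open a TCP connection can talk to a server like this, which is the whole point of the protocol.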
SMTP and telnet etc. are not purely Unix features; any good OS has support for
both, including but not limited to VMS and dialects of IBM mainframe OS's.
If you want NT to be easy to interconnect to other systems out there
(I assume MS does, or why bother with PPP and RAS etc.), then these
things are really necessary. Now to NFS: if only someone would write
an NFS client for NT that used NIS or an accepted security mechanism, so
that a brain-dead PC-NFS server would not be necessary on the Unix side....
When connecting from one Sun to another, for instance, no such beast is necessary....
Why was fax left out of NT?
Why was defrag left out of NT?
Why was an antivirus shell left out of NT?
Why was support for DriveSpace and DoubleSpace drives left out of NT?
Why do Microsoft insist on only supplying half-useful programs like tape
backup systems with no software compression, file managers with no view or
edit options, etc.?
Is ALL desktop software (a la Office 32) going to be cross compatible
between NT and Windows 95?
That's all I can think of right now.
MD
# 1) Why was NFS left out of NT?
Perhaps because they wanted the product to ship this century. Who knows?
Maybe it will be included in a later version.
# 2) Why was an SMTP agent left out of NT?
You know, I haven't actually used this or tested it out, but my TCP/IP
manual for NT 3.5 Server explicitly describes how to configure and use
the SNMP agent. What version are you running?
# 3) Why was a telnet daemon left out of NT?
Perhaps because NT is designed for workstation/server functionality, and not
as a multiuser time-sharing interactive system.
# 4) Why was disk compression left out of NT?
Because disk compression sucks. With the price of disks rock-bottom and still
falling, who needs the headache of compression? Ever hear of DoubleSpace?
Not only was it a bug-ridden, technical fuckup of a product, but it also
resulted in a lawsuit a year ago that cost Microsoft over $110,000,000.00!
Perhaps the memories were not pleasant for them.
# 5) Why was a disk defragger left out of NT?
I don't know. Why was it left out of Unix? Why was it left out of the
Mac OS's? Why was it left out of VMS? Perhaps the IBM line of systems
come with free defraggers? Oh well.
Because another company hadn't developed it yet. The only reason DOS has
a defragger is because MS bought it from another company.
>Tim Daniels
"Please compare NT's SMP capabilities, esp. with regard to I/O handling
and interrupt latency, to OS/2's."
Heath
>>3) Why was a telnet daemon left out of NT?
>NT doesn't really have enough multiuser functionality to make it
>worthwhile. I'd be surprised if this lasted, though.
I have heard on the net that there will be a telnet daemon in the
NT resource kit which is due out in January.
>>5) Why was a disk defragger left out of NT?
>It might not actually be necessary. I'd really like to know for sure,
>though.
I would think that it might be necessary for FAT filesystems not for
NTFS. Do UNIX file systems typically require defragmentation? I had
thought not. It is a "PC" thing. %}
-charles
How about because some people appreciate them as the minimal acceptable
level of connectivity that they are?
I'd really like remotable GUI (telnet is insufficient -- if your console
app opens a GUI window, you're SOL), NFS and AFS connectivity, and bundled
support for Telnet that kills an app whenever it makes a GUI call. I'd
also like *real* Netware 4.x support.
In fact, the latter is really important. A lot of sites are finally moving
to NW4 and a lot of Netware boys who would otherwise be really impressed
by NT are concluding that "NT sucks." I've seen this first hand.
--
"IBM's install program is just fine." [and later:] "I will say it again: The
install program is fine. But OS/2 just won't run on some pieces of hardware."
-Steve Withers, IBM employee, explaining the problems that
even faithful OS/2 users are having installing OS/2 3.0,
>1) Why was NFS left out of NT?
Microsoft has their own network filesystem (which is actually superior
to NFS, but then again just about everything is superior to NFS).
>2) Why was an SMTP agent left out of NT?
Microsoft has their own mail system.
>3) Why was a telnet daemon left out of NT?
NT doesn't really have enough multiuser functionality to make it
worthwhile. I'd be surprised if this lasted, though.
>4) Why was disk compression left out of NT?
I'd guess performance. Disk compression trades hardware for speed.
NT already requires the hardware, so you might as well work on
performance. Time-to-market might have had a lot to do with it, too.
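The hardware-for-speed trade described here is easy to put numbers on: software compression buys back disk space at the cost of CPU cycles on every write. A rough sketch using zlib (the data and levels are arbitrary, purely illustrative):

```python
import time
import zlib

def measure(data, level):
    """Compress at a given zlib level; return (size ratio, seconds spent)."""
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    assert zlib.decompress(packed) == data  # the round trip is lossless
    return len(packed) / len(data), elapsed

if __name__ == "__main__":
    # Text-like data compresses well; higher levels spend more CPU
    # time chasing a slightly smaller result -- the trade-off in a nutshell.
    data = b"The quick brown fox jumps over the lazy dog. " * 2000
    for level in (1, 6, 9):
        ratio, secs = measure(data, level)
        print(f"level {level}: {ratio:.1%} of original size, {secs * 1e3:.2f} ms")
```

On an OS like NT, where every filesystem request would pay that CPU tax, skipping transparent compression is a defensible performance call.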
>5) Why was a disk defragger left out of NT?
It might not actually be necessary. I'd really like to know for sure,
though.
jim frost
ji...@world.std.com
--
http://www.std.com/homepages/jimf
Did you really believe that anyone would find your POSIX
implementation useful for anything? Do you plan to do a realistic
POSIX implementation (i.e. one that can also access the Win32 API)?
>ch...@chrism.demon.co.uk (Chris Marriott) writes:
>
>>My question to you (from your first three questions) is "why do you seem to
>>want to make NT into some sort of Unix clone?" NFS and Telnet are both
>>purely Unix features - why the heck SHOULD they be in NT?
>
>If you want to construct a multi-platform environment, NFS and telnet
>are pretty much inevitable. TCP/IP is currently the network protocol
>of choice for multi-vendor situations, as well as internet access.
And NT *has* a pretty complete TCP/IP protocol suite built-in. It's
what I'm using to send this via RAS and PPP!
>Telnet is a basic part of it, and is in no way Unix-specific. The
>issue with NT is primarily whether you want to have remote non-GUI
>command sessions. If you do, telnetd is the obvious way to do it.
>(rlogind would be the Unix-specific method.)
I'm afraid that this is where I must disagree. Telnet is *not* an
integral part of TCP/IP; Telnet is a "remote logon" tool which happens
to *use* TCP/IP. The fact is that NT isn't designed for remote interactive
use, so Telnet becomes irrelevant. Win32 is not a "distributed" GUI, never
has been, and probably never will be!
get real!
fred
>>It might not actually be necessary. I'd really like to know for sure,
>>though.
>I would think that it might be necessary for FAT filesystems not for
>NTFS. Do UNIX file systems typically require defragmentation? I had
>thought not. It is a "PC" thing. %}
Many (possibly most at this point) UNIX systems use BSD FFS or a
derivative. In general BSD FFS does not need to be defragmented;
except under unusual circumstances you won't see more than about 15%
performance degradation as a result of fragmentation (that number is
low enough that it's imperceptible to most users).
The techniques used in BSD FFS can be applied to just about anything,
though -- for all practical purposes you could simply replace the
block allocator of the FAT filesystem with a smarter one and
fragmentation problems will go away. In reality BSD FFS is pretty
much UFS (the original UNIX File System, which was plagued with
fragmentation problems) with a few things moved around and a much
smarter block allocator. I've seen several UFS implementations where
they just replaced the block allocator.
Assuming that NTFS does something about fragmentation latency problems
there's absolutely no reason why they couldn't have stuck a similar
system in to handle FAT filesystems. The fact that FAT filesystems
appear to become extremely fragmented under NT is a fairly good
indicator that they *have* done something like that -- the BSD block
allocator in particular tends to create a lot of highly localized
fragments.
I don't know for certain that they made any effort to work around the
problem, but it's possible and there are strong indicators that
*something* was done to FAT's allocation scheme.
If MS doesn't supply everything, people bash them for incompleteness.
It's hard to please religious zealots.
Brian Tarbox
--
"If the world is night, shine your life like a light"
-Indigo Girls
>My question to you (from your first three questions) is "why do you seem to
>want to make NT into some sort of Unix clone?" NFS and Telnet are both
>purely Unix features - why the heck SHOULD they be in NT?
This is simple. My main desktop OS is NT, but all of my development
work is on Unix/X-Windows which I get to using an X server on NT. I
don't want to make NT a Unix clone - I want it to work better with
it. My NT box holds backup files for my Unix box and I am doing that
with ftp right now. This involves doing a tar and compress on the Unix
box and then running a scheduled ftp from the NT box to get the
files. It would also be nice to telnet into my NT box at work from home and do
all of the things I can do w/ Unix and X (ie, distributed GUI).
No, I'm not getting into a Unix vs. NT war, it's just that we are
given part of the picture under NT w/ built in TCP/IP, telnet client
and ftp server & client stuff, why not go the rest of the way? I'm sure
Microsoft has the resources to do NFS, telnetd and SMTP for NT if they
wanted. If they couldn't do it themselves, they could easily pay
someone else, or just buy them out. :-)
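The Unix-side "tar and compress" half of a backup like this is easy to script; here is a rough sketch using Python's tarfile (paths and names are hypothetical, and the scheduled ftp transfer itself is left out):

```python
import os
import tarfile
import tempfile

def make_backup(src_dir, archive_path):
    """Bundle a directory into a gzip-compressed tar -- the
    'tar and compress' step; fetching archive_path over ftp
    would happen separately on a schedule."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    return os.path.getsize(archive_path)

if __name__ == "__main__":
    # Demo against a throwaway directory; real paths would differ.
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "project")
        os.mkdir(src)
        with open(os.path.join(src, "notes.txt"), "w") as f:
            f.write("back me up\n" * 100)
        size = make_backup(src, os.path.join(tmp, "project.tar.gz"))
        print(f"archive written, {size} bytes")
```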
What X server do you like the best right now (and how much is it)?
>> I don't want to make NT a Unix clone - I want it to work better with it <<
Great answer BTW. I get tired of all of the Tastes great/less filling arguments in the OS business. Most
people make their decisions based on vendors, and not on the products themselves. Also, very very few
people have actual in-depth experience with the OSs they classify as "inferior".
>> It would also be nice to telnet into my NT box at work from home and do
all of the things I can do w/ Unix and X (ie, distributed GUI). <<
I find this to be the single most frustrating and significant shortcoming of NT (other than being new and
under construction, meaning many bugs). Do you see any resolution of the failure to leverage a powerful
NT machine across a workgroup? It's very frustrating to have significant computing power sitting there but
only be able to use it as a "remote hard drive".
So I don't get attacked for being "against" NT, I like this OS a lot, and given a few more years work it
certainly *should* be the best option for most applications.
Harold
I am sick and tired of pointing this out, but you can do this already on NT.
You just don't know how and you haven't looked carefully enough.
Go to cica and get xwindemo.exe and Ataman Software's telnet daemon. Then
go to microlib.cc.utexas.edu and get the X11R6 NT release and you have X for
NT. Now the only thing left is to either find your favourite X app compiled
for NT, or do it yourself and port it.
People, NT supports X-protocol-compliant programs today. It just doesn't come
with the OS. There are good enough telnet daemons out there.
Muzaffer
Except that it:
(a) is unreliable: What if your "console app" opens a GUI window? PTerm does
this (the setup window)
(b) doesn't work for NT apps, and I wouldn't be running NT if I was going
to be running X apps
(c) isn't bundled with the system
I don't want X and telnet for NT. I want remotable Win32 app capability,
both console and terminal. Really, this shouldn't be *that* hard to do.
Isn't the entire Win32 subsystem supposed to work through LPCs? Just
use RPCs and figure out a way to re-direct IO. So what if the end-result is
more bandwidth intensive than X?
This is something MS has to do. It's a real shame that NT doesn't have the
*same or equal* capability as Unix+X, even if it's not using the same
*mechanism* (i.e., Telnet and X).
This is a connectivity feature. I'm stunned that this was left out of
NT3.5, and it makes Windows 95 a lot less attractive than it otherwise
would be (though admittedly, it's not going to hurt them).
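The "just re-direct IO" idea above can at least be sketched: run a console program and ship its output back over a connection. A toy version (modern Python; a socketpair stands in for a real network listener, and no claim is made that this is how Win32 LPC/RPC plumbing would actually do it):

```python
import socket
import subprocess
import sys
import threading

def serve_command(conn, argv):
    """Run a console program and ship its stdout back over the
    connection, then signal end-of-output by closing our side."""
    out = subprocess.run(argv, capture_output=True, text=True).stdout
    conn.sendall(out.encode())
    conn.shutdown(socket.SHUT_WR)
    conn.close()

def remote_run(argv):
    """'Remote' invocation over a local socketpair: one end plays
    the server running the command, the other collects the output."""
    client, server = socket.socketpair()
    worker = threading.Thread(target=serve_command, args=(server, argv))
    worker.start()
    chunks = []
    while chunk := client.recv(4096):
        chunks.append(chunk)
    worker.join()
    client.close()
    return b"".join(chunks).decode()

if __name__ == "__main__":
    # A stand-in for a console app; any command-line program would do.
    print(remote_run([sys.executable, "-c", "print('hello from afar')"]))
```

Redirecting a console app's byte stream really is this simple; the hard part the posters identify is doing the same for GUI calls.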
Sick or not, thanks for pointing out what looks like a workable, cost-free alternative. It is still somewhat
irritating that Microsoft doesn't see fit to either bundle this, or to offer a Microsoft-provided solution at extra
cost (or at no cost with NT Server), to assist with purchasing, installation, and support issues.
Also, it doesn't really count if I have to find an X app for a protocol which is not sponsored by the OS
vendor directly, as this does not really encourage the development of distributed apps for NT, which is
what I feel is important. Lastly, having to manually install this "distributed support" on every client (or X
server in this case) does not qualify as this service being provided by the OS vendor.
Thanks again, Harold
I completely agree and your points are well-stated. How about OS/2 in this regard? What are the present
and future options? It would be nice to see OS/2 get the jump on NT in a critical area such as this.
Harold
It sucks. Like Windows 3.1, which has packages that attempt to remote
applications for OTM users (none of which work well), OS/2 has at least one,
and I think two, packages that do this for PM apps.
>What are the present
>and future options? It would be nice to see OS/2
>get the jump on NT in a critical area such as this.
It would be nice, if only because MS might feel some pressure to get the
feature into systems sometime in 1996, but not across their OS line for
several years.
OK, now we seem to have found the real issue: Microsoft doesn't support and
encourage X. Yes, this is true. The windowing system they believe in is
different from that of X. But saying that NT doesn't support X is a completely
different statement. Also, NT has all kinds of ways of distributing apps (DCE
RPC support being one of the most important). If you want a distributed app,
use one of those. You know, a distributed windowing system is not the only way
to get distributed apps; it just happens to be the popular way on another OS.
As Ritchie said: "If you want X, you know where to go"
Muzaffer
Maybe missing something obvious here, but what's OTM?
>> "IBM's install program is just fine." [and later:] "I will say it again: The
install program is fine. But OS/2 just won't run on some pieces of hardware." <<
Actually, I tried to install the Warp Beta for a week, and had to eventually give it up (GW2K P5-90, ATI
Mach32, built-in ISA/IDE controller). I have installed 12 other OSs on this machine; some took quite a while
and special "tips and tricks" to install, but Warp is the only one I've ever had to abandon. It looked great
during the install but when trying to re-boot it just could never get the video display working. I tried all the
VGA, 8514, and ATI drivers (including the latest one from ATI), but no success. Is this one of those "won't
run on" machines?
Also, I think you're right about W95. When it comes to their desktop OS, Microsoft is now in the old AT&T
position (and that of most attorneys): "We don't care. We don't have to".
Thanks, Harold
I'd say that's more like the *only* way, and a virtually unsupported one by the ISVs at that. It also wouldn't
address the issue of a "terminal-only" type of user, who could just "log in" to an NT machine and execute
whatever distributed-capable apps it finds. I never said that I was looking for X on NT; I was saying that
there should be a mechanism for a user to "log in" as a "terminal" and run an application *on* the server,
not *from* the server. Witness Sun's slogan, "The network is the computer", and then picture NT's total
lack of equity in regard to this concept.
Harold
To understand some of the issues about NT's evolution, check out
a very good book - SHOW STOPPERS! by G. Pascal Zachary.
> And NT *has* a pretty complete TCP/IP protocol suite built-in. It's
> what I'm using to send this via RAS and PPP!
The RAS and TCP/IP SLIP/PPP user interface models in NT sure could
use some work! As well as some testing...but then...one always had
to expect tweaking with UNIX connections...I don't see much difference.
> I'm afraid that this is where I must disagree. Telnet is *not* an
> integral part of TCP/IP; Telnet is a "remote logon" tool which happens
> to *use* TCP/IP. The fact is that NT isn't designed for remote interactive
> use, so Telnet becomes irrelevant. Win32 is not a "distributed" GUI, never
> has been, and probably never will be!
Keyword "probably" or "not"? It seems with client/server being the
de facto model for distributed processing...NT would do well to offer
a local GUI (window manager or whatever) to a remote processor.
As for David Cutler, Friend or Fiend - probably BOTH!
(and probably damn proud of it too!)
- John Lynker
Over the modem.
: OK, now we seem to have found the real issue: Microsoft doesn't support and
: encourage X. Yes, this is true. The windowing system they believe in is
: different from that of X. But saying that NT doesn't support X is a completely
: different statement. Also
Exactly. For example:
If you sell 20 concurrent licenses for 200 users to access from a unix server
via X, you sell just that -- 20 licenses. Under the MS model, you sell 20
licenses for the 'back end' on the server, *and* 200 licenses for the 'front
end' on each users workstation even though in either case only 20 users get
to run at once.
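That licensing arithmetic is worth spelling out (the model names below are my own labels, not anyone's official terms):

```python
def licenses_needed(users, concurrent, model):
    """License count under the two charging schemes described above:
    'concurrent' bills only simultaneous back-end sessions; 'per_seat'
    adds a front-end copy for every user's workstation as well."""
    if model == "concurrent":
        return concurrent
    if model == "per_seat":
        return concurrent + users
    raise ValueError(f"unknown model: {model}")

if __name__ == "__main__":
    # The post's example: 200 users, 20 running at once.
    print(licenses_needed(200, 20, "concurrent"))  # 20
    print(licenses_needed(200, 20, "per_seat"))    # 220
```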
Gee, I wonder why Microsoft uses a model other than distributed graphics.
It may just work better too.