> Cutler had to be dragged kicking and screaming into allowing some
> paging of the kernel.
>
I wonder how much of the stick that Dave Cutler gets is completely
justified. I can see a few good reasons for not having a pageable
kernel, not the least of which is that Microsoft is writing an operating
system into which third-party device drivers are loaded. Others may not
think of this as a development concern, but it certainly is to
Microsoft. One of the things that Microsoft is, institutionally, very
well aware of is the daft things that third party programmers do. It
devotes a lot of time to accommodating daft programming practices, and
at least some thought to how to avoid creating opportunities for further
such time sinks. Making kernel mode pageable would have opened up a
whole vista of new possibilities for third party device and filesystem
driver programmers to do things wrongly. Maybe that was foreseen.
Certainly history has shown that it did indeed work out that way. If
one hangs out in the various kernel-mode programming discussion fora,
and reads the books, articles, and other documentation on the subject,
one realizes how often "No, you may not do that at DISPATCH_LEVEL or
higher." is the answer.
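The DISPATCH_LEVEL rule alluded to above can be illustrated with a toy model (plain Python, not the real kernel API; names like `ToyKernel` are purely illustrative): code running at DISPATCH_LEVEL or above must not touch pageable memory, because the page-fault handler cannot run at those levels.

```python
# Toy model of the Windows IRQL rule: code at DISPATCH_LEVEL or above
# must not touch pageable memory, because a page fault cannot be
# serviced there. All names here are illustrative, not the kernel API.

PASSIVE_LEVEL = 0
APC_LEVEL = 1
DISPATCH_LEVEL = 2

class ToyKernel:
    def __init__(self):
        self.irql = PASSIVE_LEVEL

    def raise_irql(self, level):
        old = self.irql
        self.irql = level
        return old

    def touch(self, memory_is_pageable):
        # A page fault taken at >= DISPATCH_LEVEL is fatal on real NT
        # (bug check IRQL_NOT_LESS_OR_EQUAL); model it as an exception.
        if memory_is_pageable and self.irql >= DISPATCH_LEVEL:
            raise RuntimeError("IRQL_NOT_LESS_OR_EQUAL")
        return "ok"

k = ToyKernel()
print(k.touch(memory_is_pageable=True))   # fine at PASSIVE_LEVEL
k.raise_irql(DISPATCH_LEVEL)
print(k.touch(memory_is_pageable=False))  # non-paged memory is safe
try:
    k.touch(memory_is_pageable=True)      # the classic driver bug
except RuntimeError as e:
    print("crash:", e)
```

The point of the model is that the rule is a global invariant every third-party driver must honor, which is exactly the kind of thing daft programming practices violate.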
The retrospective view is useful, too. The concern that a pageable
kernel was addressing was how much RAM PC/AT compatibles generally had
at the time. In 2004, Raymond Chen wrote an article noting that the
pageable kernel prevents Windows NT from booting and running from a USB
drive, something that people didn't want to do back in the early days of
Windows NT but certainly want to do nowadays. One commenter noted in
response that Linux, with a non-pageable kernel, can be booted and run
from such a drive, observing: "Looks like Dave's conservative
engineering design sense might have been right after all."
IMHO, a lot. He had blinders on the topic of modern memory management.
The Tenex/Tops-20 folks had it right, and they were all at DEC. Now at the
time, DEC was not really a single company, so a bit of the NIH spirit
made sense from a corporate view.
But his decisions hurt both VMS and NT, or rather his lack of experience
in the state of the art. And in some cases, the state of the art we are
talking about is 1969 art, implemented in the late 70s for VMS and late
80s for NT.
> The retrospective view is useful, too. The concern that a pageable
> kernel was addressing was how much RAM PC/AT compatibles generally had
> at the time.
When NT was in public beta test, it required 32MB of RAM, minimal
configuration. This was a problem, because in 1991 or so, most PC
motherboards could not directly address more than 16MB, due to hardware
constraints. The commodity PCs used a windowing scheme where memory was
read from the disk into the bottom 16MB and moved up. Highly
inefficient. Perhaps IBM's machines did it properly, but it was rare for
a generic PC/AT to do it well at all. Micron did it, but they were
primarily a memory vendor making PCs to sell more memory.
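A quick sanity check of the 16MB figure described above, assuming the constraint was the PC/AT (ISA) bus's 24 address lines:

```python
# The 16 MB ceiling falls directly out of the ISA bus having 24
# address lines; hardware generating 24-bit physical addresses cannot
# reach memory above that line.
ISA_ADDRESS_LINES = 24
limit_bytes = 2 ** ISA_ADDRESS_LINES
print(limit_bytes // (1024 * 1024))    # 16 (MB)

# NT's beta-era 32 MB minimum was double what such boards could
# address directly, hence the bounce-buffer ("windowing") copies.
print(32 * 1024 * 1024 > limit_bytes)  # True
```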
I bought a 486/66 Micron with SCSI everything and 32MB of ram
specifically to beta test NT.
--
Pat Farrell
http://www.pfarrell.com/
Why is it that the closing theme from Camelot runs through my mind...about
how in a halcyon time there was something fair and wonderful that is gone
forever except in memory...
> But his decisions hurt both VMS and NT, or rather his lack of experience
> in the state of the art. And in some cases, the state of the art we are
> talking about is 1969 art, implemented in the late 70s for VMS and late
> 80s for NT.
1969-1970 seems to have been a critical turning point. It saw the release
of Multics (although the project started in 1964), and the inception of
Tenex (later TOPS-20) and UNIX.
Also in that time, TOPS-10 and ITS, which were much earlier technology,
had mid-life kickers in which you can draw a clear "before/after" line.
I wonder if anyone will ever write an accurate history of that time, as
opposed to the various bogus histories that overly tout self-promoters
(Jobs, Gates, Stallman, Cutler, etc.) to the exclusion of others. Most of
these so-called histories totally ignore the PDP-10, a platform that
utterly dominated from the late 1960s until the late 1970s, and remained
important until the late 1980s. And the notion that free software and
software sharing didn't exist prior to the GNU religion is downright
offensive.
-- Mark --
http://panda.com/mrc
Democracy is two wolves and a sheep deciding what to eat for lunch.
Liberty is a well-armed sheep contesting the vote.
I don't believe it was ignorance as much as it was a deliberate choice.
I always describe Cutler as having 20/20 tunnel vision - he knew what
he considered to be most important, and focussed on getting that done.
Other stuff that was just less important didn't get any attention.
One thing people generally fail to mention is that Cutler delivered
the initial versions of VMS (and, later, NT) reasonably close to the
originally-targeted schedule; a feat which was practically unheard of
with large software projects. Adding complicated memory management
would have slowed down the schedule, so it got left off.
I never worked directly with Dave Cutler, but I was working in the
same group as him for a while (Languages & Tools; he was working on
the C project, while I was working on the VMS debugger). I found him
to be dogmatic, and convinced he was right 100% of the time - just
like most software engineers. But over all I suspect he had a better
accuracy rate than average. I also saw nothing to cause me to doubt
the anecdotes telling of his arrogance and poor interpersonal skills.
>I wonder if anyone will ever write an accurate history of that time, as
>opposed to the various bogus histories that overly tout self-promoters
>(Jobs, Gates, Stallman, Cutler, etc.) to the exclusion of others. Most of
>these so-called histories totally ignore the PDP-10, a platform that
>utterly dominated from the late 1960s until the late 1970s, and remained
>important until the late 1980s. And the notion that free software and
>software sharing didn't exist prior to the GNU religion is downright
>offensive.
You write "the PDP-10, a platform that utterly dominated from the late
1960s until the late 1970s".
I assume you are speaking with your tongue in your cheek, since clearly
the IBM 3[67]0 family and clones dominated the period in question, with the PDP-10
relegated to fourth or fifth tier after Burroughs, Sperry, Honeywell,
Bull and CDC.
scott
You are, of course, correct.
Anyone that takes the time to leaf through some
Datamation magazines of that era would be lucky
to find any reference to PDP-10's.
A lot of the reason that software was free in that era was that in the
early 1970's a developer might make $10,000 but even something like an
early PDP-11 was $100,000 with peripherals let alone the cost of a
mainframe. So in that era, especially at the hardware manufacturers it
was assumed "hardware is what costs and makes money".
But even in that era there was a lot of software that was charged for.
And there was a lot of software that started as free that then had a new
version that was charged for. And there never was the concept of a
copy-left in that era, most developers would have been astonished if
someone proposed it.
The problem was that with the advent of the micro-processor and the
increasing complexity of software this situation was changing by mid to
late 1970's. When GNU started campaigning for software to be "free" the
economics of the environment Stallman considered the "nature of the
world" were already obsolete, with software costing more than hardware
in the real world. RMS's solution to this was to tell managers that
programmers should be paid less than half the going rate; this did not
endear the GNU effort to many of us.
Don Burn (MVP, Windows DKD)
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Using Datamation as an historical reference is like using the National
Enquirer.
More to the point, since Datamation ceased to exist 12 years ago (and
became irrelevant years earlier) and the young'uns wouldn't know:
Datamation was read mostly for its humor; and its primary audience were
people who worked in administrative computing (which was indeed dominated
by IBM).
It did not in any way reflect reality in other areas of computing. As far
as Datamation was concerned, timesharing did not exist, and networking
meant RJE.
Datamation almost totally ignored minicomputers. It dismissed the
personal computer revolution and microprocessors for quite a while until
these became impossible for even the most hide-bound to ignore.
I do remember sometimes reading DATAMATION at the public library
when I was in high-school. Those years most of my programming
was on OS/360, and there really wasn't much else in many
libraries related to actual programming.
> It did not in any way reflect reality in other areas of computing. As far
> as Datamation was concerned, timesharing did not exist, and networking
> meant RJE.
Your previous comment didn't mention timesharing, though.
> Datamation almost totally ignored minicomputers. It dismissed the
> personal computer revolution and microprocessors for quite a while until
> these became impossible for even the most hide-bound to ignore.
Well, there wasn't much of the microprocessor revolution in
1974, which was when I read DATAMATION. The only one I actually
remember reading was about a PL/I compiler with a COME FROM statement.
-- glen
A circular religious argument not unexpected from
someone that believed that PDP-10's dominated the
era.
Maybe it was just hyper-caution. No one knew how big NT was going to be,
and no one knew what the effect of paging kernel code would be. Once
there was a running system people could play with it and try out things
like that.
I remember that feature turning up in an April issue of SIGPLAN
Notices in the late '70s...
--
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)
Never having been close to Dave Cutler, I can only reflect on some
properties of RSX and VMS, of which I have some experience.
Cutler seems to have been more focused on micro-kernels, which would
lead to a design where you do not want or need paging in the kernel.
Things which would bloat the kernel, and motivate having a pageable
kernel are outside of the kernel in RSX and VMS. Things like file system
code are in a separate process, for instance. Process context in VMS is
pageable, including even the page table for a process.
Now, microkernels did come into fashion later on, and are once more very
much in fashion now. You want to load/unload modules, drivers and
whatnot in a running kernel, and not configure that statically before
even booting. And the kernel wants to be small (but I haven't seen a
single one that actually is, in my mind).
Of course, how you design things is always based on what your targets are.
Cutler was/is definitely not unfamiliar with the wish and requirements
of having a small footprint of a kernel. RSX stands as a very good
testimony to that.
As far as I can tell, DC seems to be able to rub a lot of people the
wrong way. No denying that.
But giving him the stick over his technical abilities is usually more
because he did things in a different way than the PDP-10 crowd at DEC.
Something they seem to never forgive him for.
But the technical merits in the attacks seldom seem to be very high.
Sorry, but that's my point of view. And I happen to like TOPS-20 too, in
addition to RSX. And I think there are some things that are very nice
about VMS as well.
Johnny
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: b...@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol
When I was an undergrad I did a paper for a CS class, essentially
a compare and contrast of VAX/VMS paging vs. IBM OS/VS2 paging.
VAX uses pageable page tables, IBM uses segments and pages,
with pretty much the same result: two levels of virtual storage.
The IBM z/Architecture, 64 bit extension of S/370, has five
levels of virtual storage, though some are bypassed for current
memory sizes.
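The "two levels of virtual storage" both designs arrive at can be sketched as a two-step address translation. A minimal sketch follows; the field sizes are illustrative, not actual VAX or S/370 parameters:

```python
# Toy two-level translation of the kind described above: a virtual
# address is split into (upper index, lower index, offset); the upper
# index selects a second-level table, which yields the page frame.

PAGE_BITS = 12    # 4 KiB pages (illustrative)
LOWER_BITS = 10   # entries per second-level table (illustrative)

def translate(top_table, vaddr):
    offset = vaddr & ((1 << PAGE_BITS) - 1)
    lower = (vaddr >> PAGE_BITS) & ((1 << LOWER_BITS) - 1)
    upper = vaddr >> (PAGE_BITS + LOWER_BITS)
    second = top_table[upper]   # on VAX this table is itself pageable
    frame = second[lower]
    return (frame << PAGE_BITS) | offset

# One second-level table mapping virtual page 3 to physical frame 7.
second_level = {3: 7}
top = {0: second_level}
print(hex(translate(top, 0x3ABC)))   # 0x7abc
```

Making the second-level tables themselves pageable (VAX) or segmenting them (S/370) gives the same practical result: most of the translation data does not need to sit in real memory at once.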
-- glen
So enlighten us about the IBM S/360 systems used to build the Internet.
Tell us how such tools as document processing and email were invented on
S/360. Tell us about how the world's timesharing was on TSS/360, Call-OS,
and APL\360. Tell us about the seminal work on symbolic algebraic
manipulation and artificial intelligence done on S/360.
Oh, and tell us how S/360 launched the computer game industry with such
programs as adventure and zork.
Apparently, in the world of Jim Stewart, everybody used S/360 until they
switched to Windows.
Ever heard of SNA? BNA? TCP/IP may have originated on DEC equipment,
but it sure as hell wasn't a DEC idea (n.b. DECnet).
Document processing: CMS Script on CP-67/CMS (followed by UW Script on MVS)
troff for the CAT typesetter on Unix V7 and the Documenters Workbench (PDP-11).
ADSINP on Burroughs medium systems (circa 1970).
Email: Plato NOTES (CDC Cyber systems)
>S/360. Tell us about how the world's timesharing was on TSS/360, Call-OS,
There was certainly a lot of timesharing on CP67/TSO, TSS/360, CICS and Burroughs
CANDE.
>Oh, and tell us how S/360 launched the computer game industry with such
>programs as adventure and zork.
See again, PLATO on Cyber. An IBM salescritter mailed me a fortran listing
of Star Trek in 1975. I played Hammurabi on a B-5500 (UW Eau Claire) in 1973.
>Apparently, in the world of Jim Stewart, everybody used S/360 until they
>switched to Windows.
Mark, The bulk of the computer industry never used a PDP-10 during the
70's or 80's. Everything you've mentioned was first done on a IBM,
Burroughs or CDC machine in the 50's or 60's (networking as opposed to
the 'internet' which is just vendor neutral networking).
scott
As long as you consider "dominated the era" to really
mean "technically superior" and I consider it to mean
"market share", we'll never agree.
Setting that aside, and it's a big set-aside, I question
how much the PDP-10 was responsible for building the
internet. My understanding is that PDP-11's, Vaxen and
IMP's built the early internet. Granted, the PDP-10's
made major contributions to time-sharing and AI, and
they were beautiful machines, but that doesn't mean they
dominated the era.
Budweiser dominates the beer market but I prefer Spaten
Optimator.
Vaxen were much later.
Many of the early nodes on the ARPAnet/DARPAnet were aimed at
connecting PDP-10 and Tenex systems. The IMPs were minicomputers, or in
IBM speak, IO controllers.
The very first IMPs were specialized, then PDP-8s, and later PDP-11.
I don't buy this argument. NT began life as OS/2 3.0, and was designed
squarely to take on Novell NetWare. At that point, it was all about
departmental servers, all of which were multi-user.
The Tops-10 and Tops-20/Tenex systems were in common production with 1
MW or 1.5MW of memory, roughly 4MB to 8 MB depending on how you count
bytes in a 36 bit word. They clearly knew how to use, and knew why you
used paged/swappable kernel code to support users in a multi-user,
multi-tasking world with limited memory.
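The "roughly 4MB to 8 MB" figure checks out; a PDP-10 word is 36 bits, so the byte count depends on how you slice a word (here counted as standard 8-bit bytes):

```python
# Checking the "roughly 4 MB to 8 MB" figure: a PDP-10 word is 36 bits.
WORD_BITS = 36
MW = 1024 * 1024   # one megaword

def megabytes(words, bits_per_byte=8):
    return words * WORD_BITS / bits_per_byte / (1024 * 1024)

print(megabytes(MW))              # 4.5  -> 1 MW counted as 8-bit bytes
print(megabytes(int(1.5 * MW)))   # 6.75 -> 1.5 MW
```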
When it comes to Internet history, Jim Stewart is blowing farts out his
anus and claiming that they are facts.
I was there in the 1970s.
Look at
http://en.wikipedia.org/wiki/File:Arpnet-map-march-1977.png
PDP-10 dominates the picture (note variants, such as "DEC-1080",
"DEC-2050", "MAXC", etc.), especially of systems offering services.
There were many fewer PDP-11 systems than PDP-10 systems on the
ARPAnet. Most were client only systems ("users"). Only a handful of very
large PDP-11 systems (mostly running nascent UNIX) offered any services.
The same is true for many of the other machines listed. I recall only one
IBM S/360 with services, and that was accomplished via a micro acting as a
front end.
The NIC's maps and host tables were, of course, always horribly
inaccurate. They showed long-defunct machines (e.g., PDP-1) and omitted
new PDP-10 systems which continually popped up. We never bothered making
our own maps, but we did make our own host tables.
> Vaxen were much later.
VAX/VMS had a negligible presence on the Internet.
Almost all Internet VAXen ran BSD UNIX. Their advent wasn't until
1979-1980. By that time, ARPAnet had been in operation for a decade.
Most VAXen did not get onto the network until the TCP/IP transition on
January 1, 1983. Prior to that time, if they had connectivity to the
network, it was often over a local area network to a PDP-10 system on
which they could log in and then access ARPAnet. This was the day and age
of "small-i" internet where SMTP relaying was essential as messages hopped
from VAX to PDP-10 to PDP-10 to VAX.
Even after the TCP/IP transition, VAXen remained second class citizens for
some time due to the miserable PDP-11 based routers. It wasn't until
microprocessor routers, later in the 1980s, that it was reasonable to be
on a non-ARPAnet network and not resort to small-i internet. Even then,
it wasn't until NSFnet in the late 1980s that the ARPAnet finally began to
wither away.
> Many of the early nodes on the ARPAnet/DARPAnet were aimed at
> connecting PDP-10 and Tenex systems. The IMPs were minicomputers, or in
> IBM speak, IO controllers.
IMPs were Honeywell 316 and 516 16-bit computers, and served two purposes:
one as a packet switch to route packets on the ARPANet (over
point-to-point links to other IMPs), and the other being to connect other
computers to the ARPAnet. From the point of view of a computer on the
ARPAnet, the IMP was no more than a hub (although it was very much more
internally).
ISTR that BBN made a processor that was used as an IMP in the waning years
of the ARPAnet.
> The very first IMPs were specialized, then PDP-8s, and later PDP-11.
No.
PDP-8s were never used as IMPs; nor to my knowledge was a PDP-8 ever
connected to the ARPAnet.
PDP-11s were (briefly) used as Internet routers before being replaced by
microcomputer based devices (e.g., early cisco routers). They were never
IMPs.
I doubt it.
>as
> opposed to the various bogus histories that overly tout self-promoters
> (Jobs, Gates, Stallman, Cutler, etc.) to the exclusion of others. Most
> of these so-called histories totally ignore the PDP-10,
Because the PDP-10 will be ignored.
>a platform that
> utterly dominated from the late 1960s until the late 1970s, and remained
> important until the late 1980s. And the notion that free software and
> software sharing didn't exist prior to the GNU religion is downright
> offensive.
You are being kind. :-)
/BAH
Yep.
> I always describe Cutler as having 20/20 tunnel vision - he knew what
> he considered to be most important, and focussed on getting that done.
Yep.
> Other stuff that was just less important didn't get any attention.
> One thing people generally fail to mention is that Cutler delivered
> the initial versions of VMS (and, later, NT) reasonably close to the
> originally-targeted schedule; a feat which was practically unheard of
> with large software projects. Adding complicated memory management
> would have slowed down the schedule, so it got left off.
>
> I never worked directly with Dave Cutler, but I was working in the
> same group as him for a while (Languages & Tools; he was working on
> the C project, while I was working on the VMS debugger). I found him
> to be dogmatic, and convinced he was right 100% of the time - just
> like most software engineers.
You didn't meet my guys ;-).
> But over all I suspect he had a better
> accuracy rate than average. I also saw nothing to cause me to doubt
> the anecdotes telling of his arrogance and poor interpersonal skills.
>
I was one who told him no and made it stick. Not many people were
able to do that.
/BAH
/BAH
Snobbery will get you nowhere. PDP-10s were not designed
for huge data processing tasks. Datamation focused on that
which was IBM-centric.
/BAH
/BAH
ROTFLMAO. Much better answer than the one I just wrote.
/BAH
/BAH
/BAH
I consider it to mean numbers of users who had access to
computer services/minute.
>
> Setting that aside, and it's a big set-aside, I question
> how much the PDP-10 was responsible for building the
> internet.
A LOT.
> My understanding is that PDP-11's, Vaxen and
> IMP's built the early internet. Granted, the PDP-10's
> made major contributions to time-sharing and AI, and
> they were beautiful machines, but that doesn't mean they
> dominated the era.
>
Sigh! JMF's first task at DEC was to make PDP-10s, PDP-12s,
PDP-8s, PDP-11s, and IBM systems talk to each other. That was
in 1970 at a site now called ORNL.
/BAH
Yes, indeed; particularly on the KA10. IIRC, the most notable of these
PDP-8 based front ends was called X680/I using a PDP-8/i.
However, this was never how any system was connected to the ARPAnet.
Although I have little doubt that the same hackers who implemented Kermit
on the PDP-8 could figure out how to do an NCP, AFAIK nobody ever did. An
ARPAnet connection also required a special hardware interface (described
in BBN Report 1822, hence an "1822 interface") but once again AFAIK nobody
ever did that for a PDP-8.
Don Burn (MVP, Windows DKD)
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Yup.
> 1979
> saw a transition from Tops10 to Tops20,
More accurately, 1979 saw a transition from Tenex to TOPS-20.
There were never more than a handful of TOPS-10 systems on the network.
IIRC only one (Rutgers) switched from TOPS-10 to TOPS-20. CMU added a
TOPS-20 system to their farm and it stayed online for a while after the
demise of the TOPS-10 systems.
Most ARPAnet TOPS-10 systems went offline after the TCP/IP transition on
January 1, 1983. Sometime later, a few reappeared after a crash project
to implement TCP/IP for TOPS-10, but others stayed gone for good. All
TOPS-10 systems vanished by 1990; and the plug was pulled on WAITS and ITS
at about that time.
TOPS-20 has never totally vanished from the Internet, but it's an oddball
today.
> and unix came in use.
Yes. Unix was very much an oddball prior to that.
> Some odd machines like cybers, primes, etc were also on the
> net after the tcp/ip transition.
I don't recall any Cybers or Primes offering services, although there may
have been one or two oddballs. IIRC, there was a Cyber that you could
access by connecting to a Tenex system on the ARPAnet that had RJE
capability to the Cyber.
> We saw better performance from a 3-head QNX (with arcnet) 80x86 machine
> than from a VAX 785 with unix. Running usenet, conferencing systems,
ftp archives, telnet logins etc. This was early 1984.
I am not the slightest bit surprised. VAX's goose was cooked even as the
PDP-10 was killed, but the PHBs at Digital didn't realize that.
> Sun's also had a brief career as routers, using VME boards for
> E1 and T1 lines, and their built-in ethernet cards. I still have
> some of those VME boards somewhere.
Talking about the earliest routers as SUN vs. cisco is rather misleading,
as they have a common ancestor. There's a long long story about this,
which others have already told many times and is too long for this email.
Time Sharing dates from the sixties, if I'm not mistaken CTSS was
already a reality at MIT back in 1963 when I graduated from college -
on IBM 7094 systems, which predated the IBM/360 line and which I had
the privilege to work with when I was a young man: the IBM 7044, in
fact, was my first ever computer, I programmed it in Fortran under the
IBSYS monitor. Artificial intelligence dates from the late 50's and I
believe that McCarthy's LISP paper was published in 1960 - again,
running on 7090 series computers that predated even the IBM/360. And
then came Multics! If you're interested, there's a neat interview by
Professor Corbato' at http://www.cio.com.au/article/325323/cio_blast_from_past_40_years_multics_1969-2009
where he talks about the evolution of Time Sharing and of that very
serious candidate to be the greatest Operating System ever written.
And Corbato's scheduling algorithm, or some variation thereof, is
still the choice scheduler for many contemporary operating systems. I
still remember, when I was a young man and I told my boss that I
wanted to learn operating systems internals, he handed me a copy of
Corbato's original paper!
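For readers who haven't seen it, the scheduler in question is the multilevel feedback queue from Corbato's CTSS work. A minimal sketch, with illustrative parameters (queue count, quantum doubling):

```python
from collections import deque

# Minimal sketch of a multilevel feedback queue: new jobs enter the
# top queue; a job that uses up its whole quantum drops a level and
# gets a longer quantum; lower queues run only when higher ones are
# empty. Short interactive jobs thus finish ahead of long batch jobs.

class MLFQ:
    def __init__(self, levels=3, base_quantum=1):
        self.queues = [deque() for _ in range(levels)]
        self.base_quantum = base_quantum

    def add(self, name, burst):
        self.queues[0].append((name, burst))

    def run(self):
        order = []
        while any(self.queues):
            level = next(i for i, q in enumerate(self.queues) if q)
            name, remaining = self.queues[level].popleft()
            quantum = self.base_quantum * (2 ** level)  # doubles per level
            order.append(name)
            remaining -= quantum
            if remaining > 0:  # used the whole quantum: demote
                nxt = min(level + 1, len(self.queues) - 1)
                self.queues[nxt].append((name, remaining))
        return order

s = MLFQ()
s.add("editor", 1)   # short, interactive-style job: runs once, done
s.add("batch", 5)    # long job: demoted each time it exhausts a quantum
print(s.run())
```

Variations of this demote-on-quantum-expiry idea are still recognizable in contemporary schedulers.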
The modern PC-like environment and applications probably originated
at Xerox PARC. I was working in Europe back in 1980 or so and we
got a prototype of the Xerox Star, which had a mouse, a graphics
display, pulldown menus, and the whole nine yards. This was even
before the IBM PC was launched, and before the Apple Lisa saw the
light of day.
I lived through those times, I was a young professional back in the
sixties. The "wow" machine in those days wasn't the PDP-10 but the
Burroughs 5500 - vintage 1964, if I remember - and the Burroughs 6500
which came a few years after it, with its Algol based architecture and
its fabulous MCP operating system. And the GE 600 Series, of course.
Looking at it today from where I stand, well, plus ça change, plus
c'est la même chose. I don't see that much OS technology that
wasn't already available back then. But hey, I may be wrong!
Alberto.
Yeah, but I try hard not to be an arrogant
jerk.
And if you'll reread my post, you'll see that
it might be my understanding that is incorrect, not
a claim of fact.
In any case, I thought about this whole thread
long and hard last night. What really mattered
was the people, not the processor. Was the PDP-10
itself critical to the the accomplishments that
you listed, or was it clever people that had easy
access to good computing hardware?
Could the AI groups have done their work on a
pair of 360/67's? Could they move TCP/IP packets
and resolve hosts, given a proper interface?
Seems like you're willing to make the researchers
(and that would include yourself) subservient to
the computer.
IMP's were network controller cards with a thyroid problem... ;-)
--
+----------------------------------------+
| Charles and Francis Richmond |
| |
| plano dot net at aquaporin4 dot com |
+----------------------------------------+
I remember devices called "terminal concentrators" that were
attached to our IBM 370/155 at college. For our DEC-20/50, I do
*not* know how the terminals were supported. Someone posted that
the PDP-11 *inside* the DEC-20 handled terminal I/O for the -20.
There's a better argument earlier in this thread. I'll repeat it,
because it seems that people have skipped completely over it so that
they can stick with the same "Dave Cutler doesn't know anything."
mindset that they are comfortable with. There are a few good reasons
for not having a pageable kernel that I can see, not the least of which
is that Microsoft is writing an operating system into which third-party
device drivers are loaded. Others may not think of this as a
development concern, but it certainly is to Microsoft. One of the
things that Microsoft is, institutionally, very well aware of is the
daft things that third party programmers do. It devotes a lot of time
to accommodating daft programming practices, and at least some thought
to how to avoid creating opportunities for further such time sinks.
Making kernel mode pageable would have opened up a whole vista of new
possibilities for third party device and filesystem driver programmers
to do things wrongly. Maybe that was foreseen. Certainly history has
shown that it did indeed work out that way. If one hangs out in the
various kernel-mode programming discussion fora, and reads the books,
articles, and other documentation on the subject, one realizes how often
"No, you may not do that at DISPATCH_LEVEL or higher." is the answer.
Bill Clinton would love this. I guess it depends on what your
definition of "dominated" is. Certainly -10's were popular in
universities and research organizations. On the other hand, in 40 years
I encountered exactly *one* -10, at a timesharing outfit. Naturally I
loved it, but there weren't many out in the real world.
No it didn't.
We were a Beta site for NT 1.0, and I was running it on a Gateway
machine (probably a Pentium 90; possibly a 486 DX2/66) with 16MB.
By the published spec sheet, I can believe it.
But no one would call what NT beta did on 16 MB "running".
It was as impractical as the DEC 2040s that DEC sold with 96 KW of memory.
The SCSI "requirement" was real through all the NT beta, but it loosened
over time.
I'm a little bit reminded of the days when just about everybody used
ASCII except IBM -- which meant something like 90% of the computers in
the world used EBCDIC.
To the best of my recollection, I never saw an IBM computer when I was
an undergrad. DEC-10, VAX, PDP-11, DG Nova, CDC, Harris... yes. IBM,
no. It would be easy to forget how big IBM was, if I were to go from my
own university recollections.
--
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)
Now, now, SABRE was American Airlines'; Pan Am's system was PANAMAC. The
original SABER (they changed the spelling at some point) ran on 7090s
but was ported pretty quickly to 360s.
> Even before TCP/IP became fashionable, during the seventies, people
> were working on the communications architectures of the times: SNA,
> DCA, BNA, and so on, which evolved into X25 and X21.
Well, yeah. I gather they were using telex links before that.
R's,
John
Yup. IBM was very, very good at doing that kind of
computing service delivery.
> SNA only did a little part of it. They were always
> computing islands, until the PC's built some rickety bridges.
>
> The Internet built the interstates and the autobahns of
> computing from the very start. The PDP10s ruled that kingdom.
And 11s and 8s and 12s.
> It was small in the beginning, but look at the relative sizes
> now. We had a scramble after may 17th, but that setback was
> temporary.
>
> -- mrr
> Who worked for the company that said no to world wide
> exclusive rights for the entire world wide web.
>
>
<grin>
/BAH
11s replaced the 8 functionality pretty quickly. DC72s were PDP-8
based; DC76s were the PDP-11 replacement.
> Although I have little doubt that the same hackers who implemented
> Kermit on the PDP-8 could figure out how to do an NCP, AFAIK nobody ever
> did. An ARPAnet connection also required a special hardware interface
> (described in BBN Report 1822, hence an "1822 interface") but once again
> AFAIK nobody ever did that for a PDP-8.
I was documenting how comm got started. :-)
/BAH
Yes. If you reread one of my posts, I tried to
talk about that.
> Was the PDP-10
> itself critical to the the accomplishments that
> you listed, or was it clever people that had easy
> access to good computing hardware?
>
All of the above. Kiddies, who would have become
interested in something else, were exposed to a
PDP-10 early in college life. You could do
anything you wanted, including shooting your foot,
without (usually) affecting any other user on the
system. If you really had the computer biz itch,
you could get a job at the computer center and
play with the gear and software as part of your
job. Many of us never stopped playing.
> Could the AI groups have done their work on a
> pair of 360/67's? Could they move TCP/IP packets
> and resolve hosts, given a proper interface?
That would have depended on the schedules of the
computer center w.r.t. delivery of computing
services.
>
> Seems like you're willing to make the researchers
> (and that would include yourself) subservient to
> the computer.
It was a *man's* toy. There was lots of work
that had to be done once other groups in universities
discovered that computers could do a lot of the grunt
work.
/BAH
And I'm counting all the individuals who had exposure to a computer.
If they all had to drop off card decks and come back a week later
to pick up the erroneous results, most would have considered
the biz as a required PITA instead of fun.
/BAH
Schools that couldn't afford to buy^Wrent an IBM system had to buy
time on another university's IBM system. That was real money instead
of funny money; so computer time was parceled out with great care.
Only a few "users" would have access to that mainframe.
/BAH
Should have been a little clearer; this is the view from upper
rightopondia. We didn't have many ITS or Tenex installations;
tops10 was the workhorse up until ca 1979.
Most systems weren't directly visible on the arpanet either, until
the tcp/ip transition we had to do login hopping through weeeird
interfaces. Mail connectivity was established pretty early though.
>More accurately, 1979 saw a transition from Tenex to TOPS-20.
>
>There were never more than a handful of TOPS-10 systems on the network.
>IIRC only one (Rutgers) switched from TOPS-10 to TOPS-20. CMU added a
>TOPS-20 system to their farm and it stayed online for a while after the
>demise of the TOPS-10 systems.
>
>Most ARPAnet TOPS-10 systems went offline after the TCP/IP transition on
>January 1, 1983. Sometime later, a few reappeared after a crash project
>to implement TCP/IP for TOPS-10, but others stayed gone for good. All
>TOPS-10 systems vanished by 1990; and the plug was pulled on WAITS and ITS
>at about that time.
The last tops10 system around here was the Oslo University one, shut
down in 1986. The console printer is still there, though, with the
last printout still attached.
>TOPS-20 has never totally vanished from the Internet, but it's an oddball
>today.
>
>> and unix came in use.
>
>Yes. Unix was very much an oddball prior to that.
>
>> Some odd machines like cybers, primes, etc were also on the
>> net after the tcp/ip transition.
>
>I don't recall any Cybers or Primes offering services, although there may
>have been one or two oddballs. IIRC, there was a Cyber that you could
>access by connecting to a Tenex system on the ARPAnet that had RJE
>capability to the Cyber.
Again, the view from here. The national data company, ND, had a
disastrously bad TCP/IP stack, which probably contributed a lot to its demise.
>> We saw better performance from a 3-head QNX (with arcnet) 80x86 machine
>> than from a VAX 785 with unix. Running usenet, conferencing systems,
> ftp archives, telnet logins etc. This was early 1984.
>
>I am not the slightest bit surprised. VAX's goose was cooked even as the
>PDP-10 was killed, but the PHBs at Digital didn't realize that.
>
>> Sun's also had a brief career as routers, using VME boards for
>> E1 and T1 lines, and their built-in ethernet cards. I still have
>> some of those VME boards somewhere.
>
>Talking about the earliest routers as SUN vs. cisco is rather misleading,
>as they have a common ancestor. There's a long long story about this,
>which others have already told many times and is too long for this email.
I didn't mention Cisco at all. I never saw a cisco until late 1985;
until that time it was all unix-based machines, mostly Suns. By 1988
Cisco had taken over.
-- mrr
<snip>
Again, you all are forgetting about the network software which
became ANF-10.
/BAH
I don't remember what the config was for the system that JMF worked
on at ORNL. I would suspect that they had an 8/I on their KA back
then.
> Although I have little doubt that the same hackers who implemented
> Kermit on the PDP-8 could figure out how to do an NCP, AFAIK nobody ever
> did. An ARPAnet connection also required a special hardware interface
> (described in BBN Report 1822, hence an "1822 interface") but once again
> AFAIK nobody ever did that for a PDP-8.
An 8 would be a tad small for NCP. But I'll bet that the developers
accessed the system they worked on through an 8 ;-).
/BAH
I don't remember any of the ARPAnet-connected KA10s using X680/i or any of
other PDP-8 based front ends.
By the way, "NCP" in the context of ARPAnet refers to "Network Control
Protocol", a predecessor to TCP/IP that was specific to 1822-format
networks.
NCP was a very simple protocol, with a 40-bit header of which 32-bits was
the destination socket number. Transmitting sockets were always odd,
receiving sockets were always even; and a socket uniquely identified the
connection on the system (there could be only one connection to a socket).
The connection protocol (ICP) involved connecting to a well-known socket,
reading 32-bits for a new socket to actually use, closing the connection
to the well-known socket (so others could use it), then opening a pair of
connections to get a bidirectional link.
TCP's design was highly influenced by lessons learned from NCP, and
especially NCP's complex and fragile ICP.
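The odd/even socket rule and the ICP handshake described above can be sketched as a toy in-memory model (schematic only, not wire-accurate NCP; the socket numbers, class names, and listener object are invented for illustration):

```python
# Toy model of ARPAnet NCP's Initial Connection Protocol (ICP):
# connect to a well-known (even, receiving) socket, read a 32-bit
# socket number, close that connection, then open a transmit/receive
# pair. Everything here is schematic; the numbers are invented.

def is_receive_socket(s):
    return s % 2 == 0   # receiving sockets were always even

def is_transmit_socket(s):
    return s % 2 == 1   # transmitting sockets were always odd

class IcpListener:
    """Server side: hands each caller a fresh even base socket."""
    def __init__(self, well_known_socket, first_free=200):
        assert is_receive_socket(well_known_socket)
        self.well_known = well_known_socket
        self.next_free = first_free

    def accept(self):
        base = self.next_free    # the 32-bit number sent to the caller
        self.next_free += 2
        return base

def icp_connect(listener):
    """Client side of the handshake."""
    base = listener.accept()     # 1. connect to well-known socket, read base
    #                              2. well-known connection is now closed,
    #                                 freeing it for the next caller
    recv_sock = base             # 3. open a pair: even side receives,
    send_sock = base + 1         #    odd side transmits
    assert is_receive_socket(recv_sock) and is_transmit_socket(send_sock)
    return send_sock, recv_sock

listener = IcpListener(well_known_socket=40)
a = icp_connect(listener)   # (201, 200)
b = icp_connect(listener)   # (203, 202)
```

The multi-connection dance here (connect, read, close, reconnect twice) is the fragility the text refers to; TCP's three-way handshake collapses it into a single exchange.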
UW did have its own mainframe for business operations -- a Burroughs.
It was used for *nothing* else. The CDC was the academic-side big
computer.
> Mark Crispin wrote:
> > On Tue, 30 Mar 2010, Pat Farrell posted:
> >> Jim Stewart wrote:
> >>> Setting that aside, and it's a big set-aside, I question
> >>> how much the PDP-10 was responsible for building the
> >>> internet. My understanding is that PDP-11's, Vaxen and
> >>> IMP's built the early internet.
> >
> > When it comes to Internet history, Jim Stewart is blowing farts out his
> > anus and claiming that they are facts.
> >
> > I was there in the 1970s.
>
> Yeah, but I try hard not to be an arrogant
> jerk.
>
> And if you'll reread my post, you'll see that
> it might be my understanding is incorrect, not
> my claim of fact.
>
> In any case, I thought about this whole thread
> long and hard last night. What really mattered
> was the people, not the processor. Was the PDP-10
> itself critical to the accomplishments that
> you listed, or was it clever people that had easy
> access to good computing hardware?
>
> Could the AI groups have done their work on a
> pair of 360/67's?
IBMs were leased. Would IBM continue to support a computer that had
some academics' experimental hardware hooked up to it? Could new and
experimental device drivers be added to IBM's OS? These might be as
important as the machine's architecture.
-- Patrick
> Mark Crispin wrote:
> > On Tue, 30 Mar 2010, Jim Stewart posted:
> >> Anyone that takes the time to leaf through some
> >> Datamation magazines of that era would be lucky
> >> to find any reference to PDP-10's.
> >
> > Using Datamation as an historical reference is like using the National
> > Enquirer.
>
> ROTFLMAO. Much better answer than the one I just wrote.
The National Enquirer just makes stuff up for (dubious) entertainment
value. Datamation didn't. Datamation just concentrated on the
corporate data processing market, where IBM did indeed dominate. But
the academic and research markets were a lot more fun and interesting,
and they were dominated by PDP-10s.
-- Patrick
Well, post anti-trust, they had to. And I know certainly in the UK, we had
lots of weird kit hooked into various University mainframes using a wide
variety of interfaces. The oldest I remember NUNET/NUMAC (I think) used
PDP/11s acting as IBM2708/3708 concentrators. There were "Browns Boxes" for
X.25 and somehow Cambridge Ring got connected in at Leeds but I think that
was actually an Amdahl at that time...
>
> -- Patrick
I guess I wasn't clear. Suppose that IBM was willing
to supply machines with roughly the same performance
as a PDP-10, under the same terms and conditions as
DEC would. And assume the machines would be given to
the CS department where they would be available to
the students and researchers under the same conditions.
That wasn't until the 1980s IIRC.
ROTFL!
I remember quite well what IBM's "equivalent" for a DEC-20 was: the 4341.
We had one in our computer room. It made a great table. We used it to
spread out print sets, listings, etc. on it, pile manuals on it, etc. I
sometimes would sit on it.
The PHBs yelled at us for doing that, but we did so anyway. We finally
compromised on clearing off the 4341 when IBM dignitaries would show up,
and putting it all back once they left.
The 4341 was otherwise totally useless and unused. The only reason that
it was there was that IBM gave it to us to try to convince us to switch
from the DEC-20.
I still have my "IBM Virtual Machine Facility/370: Quick Guide for Users",
GX20-1926-6. It's a hoot to read. IBM at that time had no clue.
But that was in 1982, well after the work inventing Arpanet.
-- Patrick
Possibly... but an IBM that would be willing to supply computers on
that basis would be so different from the real IBM as to be
unrecognizable. Much of DEC's competitive advantage in the 1960s and
1970s was in making and supporting computers for experimental
applications. So the computers were less expensive, and field service was
more accommodating about experimental hardware being attached. The
people are important -- but that means the people working for the
vendor as well as the researchers.
IBM's competitive advantage was in highly reliable data processing.
Businesses going with IBM paid a premium for that reliability.
-- Patrick
The counter-argument is that (quite rarely) the National Enquirer actually
publishes a factual story, and that (rather more commonly) total nonsense
appeared in Datamation (and I'm not talking about the obvious humor
articles).
> Datamation just concentrated on the
> corporate data processing market, where IBM did indeed dominate.
True. But even in that market, Datamation frequently got a lot wrong. It
was quite dismissive of the personal computer revolution until it became
impossible for even them to ignore.
> But
> the academic and research markets were a lot more fun and interesting,
> and they were dominated by PDP-10s.
Again true.
As I/O devices, I believe so. There are plenty of stories about
adding I/O devices to S/360 systems. Once the device is connected,
there isn't really any driver needed. All you need is the ability
to write channel programs, and the OS to allow you to execute
one for the specified device.
That which is called device drivers on most systems is done by
access method services for OS/360. It is done in user mode
(that is, not supervisor mode) and only when the final channel
program is ready is it checked (to make sure it does only what
it should do) and executed.
-- glen
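The channel-program mechanism glen describes can be made concrete. A System/360 Format-0 channel command word (CCW) is eight bytes: a command code, a 24-bit data address, a flag byte (where the command-chain bit lets one CCW hand off to the next), an ignored byte, and a 16-bit count. A minimal sketch, packing such a program; the sample read-then-write program and its buffer addresses are invented for illustration:

```python
import struct

# Pack a System/360 Format-0 Channel Command Word (CCW): 8 bytes,
# big-endian: command code (1), 24-bit data address (3), flags (1),
# ignored (1), byte count (2). The generic read/write command codes
# and the command-chain flag follow the published S/360 Principles of
# Operation; the program built below is purely illustrative.

CCW_CC = 0x40              # command-chain flag: fetch the next CCW after this
READ, WRITE = 0x02, 0x01   # generic read / write command codes

def ccw(command, address, count, flags=0):
    if not 0 <= address < (1 << 24):
        raise ValueError("data address must fit in 24 bits")
    return struct.pack(">B3sBBH", command,
                       address.to_bytes(3, "big"), flags, 0, count)

# A two-CCW channel program: read one 80-byte card image into the
# buffer at 0x1000, then write a 132-byte print line from 0x2000.
program = ccw(READ, 0x1000, 80, flags=CCW_CC) + ccw(WRITE, 0x2000, 132)
assert len(program) == 16   # two 8-byte CCWs
```

The check glen mentions is over exactly these fields: before executing the program, the OS verifies that each address/count pair stays within storage the user is allowed to touch.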
As for SNA and DCA, they were quite cool, and they worked like a
charm. On IBM and Univac gear. On whatever communication links we
could get from the Post Offices of the time.
Alberto.
This reliability was primarily in their peripherals.
Nobody who ever dealt with any IBM OS would call it reliable.
I still cringe at the memory of an IBM 360/67 running OS/360+HASP; and
with Call-OS (shudder!), APL\360, ATS, and CourseWriter as timesharing
systems each doing (SHUDDER!!) PSW stealing.
A good day was when the system would crash only once while you were using
it.
The whole of the i386 architecture was designed with that kind of
operation in mind, but people instead were sort of blinded by RISC and
went in the direction of not using all that cool hardware.
Even today, I still believe that it would make a lot of sense to run
I/O from Ring 1, on those machines that still implement it.
Alberto.
On Mar 31, 4:17 pm, Jonathan de Boyne Pollard <J.deBoynePollard-
newsgro...@NTLWorld.COM> wrote:
> >> No one knew how big NT was going to be, and no one knew what the
> >> effect of paging kernel code would be. Once there was a running
> >> system people could play with it and try out things like that.
>
> > I don't buy this argument. NT was OS/3, and designed squarely to take
> > on Novell NetWare.
>
> ... which didn't have a pageable kernel, either. So your argument isn't
> sold, too. (-:
>
> There's a better argument earlier in this thread. I'll repeat it,
> because it seems that people have skipped completely over it so that
> they can stick with the same "Dave Cutler doesn't know anything."
> mindset that they are comfortable with. There are a few good reasons
> for not having a pageable kernel that I can see, not the least of which
> is that Microsoft is writing an operating system into which third-party
> device drivers are loaded. Others may not think of this as a
> development concern, but it certainly is to Microsoft. One of the
> things that Microsoft is, institutionally, very well aware of is the
> daft things that third party programmers do. It devotes a lot of time
> to accommodating daft programming practices, and at least some thought
> to how to avoid creating opportunities for further such time sinks.
> Making kernel mode pageable would have opened up a whole vista of new
> possibilities for third party device and filesystem driver programmers
> to do things wrongly. Maybe that was foreseen. Certainly history has
> shown that it did indeed work out that way. If one hangs out in the
> various kernel-mode programming discussion fora, and reads the books,
> articles, and other documentation on the subject, one realizes how often
> "No, you may not do that at DISPATCH_LEVEL or higher." is the answer.
Don Burn (MVP, Windows DKD)
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Your bigotry is showing. Tell me again how many commercial enterprises
ran their business on PDP-10/20?
And the idea that Academe was "dominated" by PDP-10/20 (your words)
also is silly. While a few high-profile universities had PDP-10/20
gear, the majority by far didn't (mine had IBM and PCM gear, later
supplemented with PDP-11's and VAXen).
Does anyone have the actual number of PDP-10's built and shipped
along with a breakdown on commercial vs. educational?
scott
And on the other hand, IBM has a long history of putting its boxes in
universities - so this really comes down to a question of which
university you attended. The Watson Lab at Columbia was established in
the '40s, I think, and they had a 360 back in 1968.
University of Michigan got theirs in, what, 1967? Actually, I guess
that was the Model 67 - they already had a Model 50. And so on.
A lot of universities were on BITNET, though that didn't start until
1982, which is rather late by AFC standards (and those of this
discussion specifically).
--
Michael Wojcik
Micro Focus
Rhetoric & Writing, Michigan State University
AIX 3 has a pageable kernel, so it's not like the technique was
unknown among the OSes NT was designed to compete with.
> There's a better argument earlier in this thread. I'll repeat it,
> because it seems that people have skipped completely over it so that
> they can stick with the same "Dave Cutler doesn't know anything."
> mindset that they are comfortable with. There are a few good reasons
> for not having a pageable kernel that I can see, not the least of which
> is that Microsoft is writing an operating system into which third-party
> device drivers are loaded.
That was the justification for the HAL, and it's possible it was used
to justify the non-pageable kernel too. But it's a weak justification.
Just keep third-party device drivers pinned if you're worried that
they won't handle paging well. You'd still have a mostly-pageable kernel.
It's possible to claim that any conservative aspect of NT's design was
intentional, to shield the OS from lousy programmers; just as it's
possible to claim that any incautious one is a sign that the NT team
didn't care about security or robustness. Both are worth considering,
but without stronger evidence don't make much of an argument.
It is not bigotry to observe the proven fact that IBM sucked.
> Tell me again how many commercial enterprises
> ran their business on PDP-10/20?
I personally did business in one form or another with a few hundred that
did. I no longer keep all of my contact lists from 30+ years ago. But
many of the names would surprise you. For example, they included all of
the major Wall Street banks.
A lot of these sites also used IBM gear. The one thing that IBM did not
suck at was printing paychecks and other such tasks that were
amenable to COBOL batch processing.
> And the idea that Academe was "dominated" by PDP-10/20 (your words)
> also is silly. While a few high-profile universities had PDP-10/20
> gear, the majority by far didn't (mine had IBM and PCM gear, later
> supplemented with PDP-11's and VAXen).
I see. You went to the one school that didn't have one, and generalized.
Or perhaps you were just not allowed to use a PDP-10.
That was the case at Stanford until 1976; ordinary undergraduates were
forced to use Wylbur on IBM gear. PDP-10 access was a dainty of faculty,
grad students, and the few undergrads who managed to land a job at one of
the three facilities. That situation is what finally led to a student
march on the computer center and subsequent agreement to buy a PDP-10 that
students could use.
Similar situations existed elsewhere; undergraduates in the early 1970s
weren't considered worthy of being allowed anything more than punch cards.
> Does anyone have the actual number of PDP-10's built and shipped
> along with a breakdown on commercial vs. educational?
Over 3000 PDP-10s were built. That's a lot of mainframes by any account.
Barb would know better, but from DECUS attendees it was probably about 60%
commercial and 40% educational. Some of the commercial installations
(e.g., CompuServe) were huge.
Columbia was also a big DEC-20 shop starting in the mid 1970s.
Clearly the IBM gear did not address all their computing needs.
The school I went to had a PDP-10 (later replaced by VAX) for
student use. (That is, free student accounts.) The 370/158
required real money, so was mostly used by faculty for research
projects, though also by students for class work.
I was used to OS/360 (and WYLBUR) before then, but not at all
to the PDP-10.
> Similar situations existed elsewhere; undergraduates in the early 1970s
> weren't considered worthy of being allowed anything more than punch cards.
There were two choices for 370 batch. One was cards, the other was
job submission from the PDP-10. (There was no path back to the 10.)
When the line printer of the PDP-10 died, the spooler was converted
to spool through to the 370.
-- glen
How can a university with a business school *not* have an IBM 370
or clone back in the 1970's??? That is the computer that the COBOL
programmers would be *most* likely to use out in the business
world. The biggest part of computing at my university was done on
an IBM 370. That included the business dept., math dept.,
engineering dept., physics dept., and chemistry dept.
--
+----------------------------------------+
| Charles and Francis Richmond |
| |
| plano dot net at aquaporin4 dot com |
+----------------------------------------+
Amen, BAH! But to be fair, our IBM 370 did run APL/360 and had
*some* interactive access. Plus there was terminal usage through
WYLBUR, an interactive editing and job submission utility. WYLBUR
had a little 4GL called EXEC that one could use for housekeeping
tasks.
I liked using the DEC-20 and it was all the better because *most*
computing was done on the IBM. Less competition for access.
At the college I attended, the IBM 370 used EBCDIC of course. But
*all* the terminals were ASCII terminals and the "terminal
concentrator" had to translate back and forth. We had quite a few
DECWriters (LA-36's I think) and I *loved* the keyboard on
those!!! Just the right spring tension under each key.
No. I visited a large number of schools in the 70's in the midwest
and none of them had PDP-10's. Considering that there were over
3000 non-profit colleges and universities at that point in history,
and given your 40% number, at most 1200 of those machines went to
educational sites (40% of 3000 is 1200). Assuming that many of the
PDP-10 sites had more than one system further reduces the absolute
number of colleges and universities that had PDP-10 systems to
less than 1000.
>
>Or perhaps you were just not allowed to use a PDP-10.
Actually, since I worked for the comp center as an undergrad and postgrad,
I was allowed to use pretty much anything they had.
>
>That was the case at Stanford until 1976; ordinary undergraduates were
>forced to use Wylbur on IBM gear. PDP-10 access was a dainty of faculty,
>grad students, and the few undergrads who managed to land a job at one of
>the three facilities. That situation is what finally led to a student
>march on the computer center and subsequent agreement to buy a PDP-10 that
>students could use.
>
>Similar situations existed elsewhere; undergraduates in the early 1970s
>weren't considered worthy of being allowed anything more than punch cards.
>
>> Does anyone have the actual number of PDP-10's built and shipped
>> along with a breakdown on commercial vs. educational?
>
>Over 3000 PDP-10s were built. That's a lot of mainframes by any account.
There were more Burroughs medium systems (B2500/B3500/B4700) built than
that, and Burroughs was definitely second tier. That's not counting
the B3800/B4800/B2900/B3900/B4900/V300/V500's that followed through
the rest of the 70s, 80s and 90s.
They were far easier to use than IBM iron, but not timesharing powerhouses.
scott
The second doesn't follow from the first.
scott
> Joe Pfeiffer wrote:
>>
>> To the best of my recollection, I never saw an IBM computer when I was
>> an undergrad. DEC-10, VAX, PDP-11, DG Nova, CDC, Harris... yes. IBM,
>> no. It would be easy to forget how big IBM was, if I were to go from my
>> own university recollections.
>
> How can a university with a business school *not* have an IBM 370 or
> clone back in the 1970's??? That is the computer that the COBOL
> programmers would be *most* likely to use out in the business
> world. The biggest part of computing at my university was done on an
> IBM 370. That included the business dept., math dept., engineering
> dept., physics dept., and chemistry dept.
I wasn't in Business. They did have an HP 3000 running a BASIC
interpreter (my Data Structures professor got us accounts on it, so we
could be forced to implement a stack by hand to simulate recursion);
they may well have had an IBM, but I didn't hear about it.
>> IBMs were leased. Would IBM continue to support a computer that had
>> some academics' experimental hardware hooked up to it? Could new and
>> experimental device drivers be added to IBM's OS? These might be as
>> important as the machine's architecture.
>
>Well, post anti-trust, they had to. And I know certainly in the UK, we had
>lots of weird kit hooked into various University mainframes using a wide
>variety of interfaces. The oldest I remember NUNET/NUMAC (I think) used
>PDP/11s acting as IBM2708/3708 concentrators. There were "Browns Boxes" for
>X.25 and somehow Cambridge Ring got connected in at Leeds but I think that
>was actually an Amdahl at that time...
This was how mainframe networks were built. A cobbled-together mess
of protocols, none from IBM. Philips had some gear that was very
popular in banking/insurance networks in the mid 1970's, using
datex (circuit switched 1200 bps) to access a remote network, and
local end stations with local intelligence and some spooling.
Similar stuff was used for travel agencies; but they used multidrop
bisync; but still with some local intelligence. And the transaction
screens became a nightmare of cryptic commands to get it all into
one transaction.
None of this had a chance to scale beyond the corporate "network"
it was deployed in. I saw a merger between two banks up close, where
they really tried to integrate. No such luck. They had to replace one
so they only had one network.
Stats from internet exchange points still notice the encapsulated
bisync, x25, frame relay etc. in the protocol fields. These legacy
bits used 3% of the ix bandwidth as late as 2005.
-- mrr
OK, so Uncle Bob's Pig Farm and College of Swineology couldn't afford
anything else after the lease payments for a 360/25 to RJE to some 360/90
in a real school.
That in no way changes the fact that PDP-10s dominated computing at real
colleges and universities in the 1970s.
Only to the witless.
If the pre-existing IBM gear addressed their computing needs, then they
wouldn't have subsequently needed to buy (multiple) DEC-20 systems.
Hey!!! The DEC-20 at the college I attended *had* a card reader!!!
No one used it, but the -20 *had* one. ;-) It also had a drum
line printer (150 lpm IIRC), but it was in the computer center and
few sent their printouts there. If one *really* had to have a
printout, you could transfer the listing to the IBM 370 and print
it on a nearby printer.
Didn't Xerox Data Systems (nee Scientific Data Systems) have any
sort of impact on academic and research markets??? ISTM that Xerox
fumbled this market too, but there were a lot of systems out there
before Xerox gave up.
Well, in a tautological way, it does.
P: Columbia bought computers other than IBM
Q: IBM Gear did not address all of Columbia's computing needs
That IBM gear COULD not have addressed their needs cannot be inferred.
Another thing that can't be inferred is WHY did IBM gear not address all
of their needs.
One possible interpretation, perhaps that favored by MRC is "IBM Gear
was so bletcherous and cretinous that it could not have possibly met
their needs"
Another interpretation is "Columbia's computing needs include exposure
to non-IBM kit"
Yet another is: "The PDP-10 Architecture was so clearly and obviously
winning that not having it around was inconceivable", or, less spun
"There were things afoot in the PDP-10 community that Columbia had to be
a part of"
A combination of the second and third seem (in my arrogant opinion) the
most salient.
Your bigotry is shining brightly today.
FWIW, the school I chose _invented_ the digital computer. It also has a very
well respected Vet Med college.
Have a nice day (Barb often uses a less polite response).
scott
I can think of a dozen reasons, right off hand, that would lead
to buying a DEC-20. First being a better deal from DEC than the
competition.
scott
A fourth is 'DEC gave us a good deal' and 'IBM didn't'.
scott
> Patrick Scheible wrote:
> > jmfbahciv <jmfbahciv@aol> writes:
> >
> >> Mark Crispin wrote:
> >>> On Tue, 30 Mar 2010, Jim Stewart posted:
> >>>> Anyone that takes the time to leaf through some
> >>>> Datamation magazines of that era would be lucky
> >>>> to find any reference to PDP-10's.
> >>> Using Datamation as an historical reference is like using the National
> >>> Enquirer.
> >> ROTFLMAO. Much better answer than the one I just wrote.
> >
> > The National Enquirer just makes stuff up for (dubious) entertainment
> > value. Datamation didn't. Datamation just concentrated on the
> > corporate data processing market, where IBM did indeed dominate. But
> > the academic and research markets were a lot more fun and interesting,
> > and they were dominated by PDP-10s.
> >
> > -- Patrick
>
> Didn't Xerox Data Systems (nee Scientific Data Systems) have any
> sort of impact on academic and research markets??? ISTM that Xerox
> fumbled this market too, but there were a lot of systems out there
> before Xerox gave up.
I don't have firm numbers. There were a few on the Arpanet map
previously referenced, but only a few.
-- Patrick
Aren't affordable computers among Columbia's needs?
I suspect the truth is more like, Columbia needed a system that was
good at timesharing, user-friendly for undergrads and novice computer
users, and DEC-20s were the obvious choice.
-- Patrick
I'm baffled by this statement. Folks getting a Business degree are not
likely to be programming in any language. Sure, lots of business used
Cobol, that is one of the reasons Cobol was designed.
I see no connection between business degrees and the details of how the
business programs were implemented. That was left to the geeks while the
business majors became a "Master of The Universe"
--
Pat Farrell
http://www.pfarrell.com/
[snip]
>I still cringe at the memory of an IBM 360/67 running OS/360+HASP; and
>with Call-OS (shudder!), APL\360, ATS, and CourseWriter as timesharing
>systems each doing (SHUDDER!!) PSW stealing.
^^^^^^^^^^^^^
Please define this term.
[snip]
Sincerely,
Gene Wirchenko
Ah, when I first got my "Drivers License" at the UW Teaching Lab (1978?)
there was still a Sigma 5 there. The local pronunciation of "!" was
"Bang".
Moments after learning that a job on the machine started with !JOB, a
very innocent-looking young lady in the class with me asked if the
command to initiate time-sharing was !GANG.
The various degrees dealing with either programming or administering
business computers are typically in the College of Business, not
whereever the CS department is. And, frankly, both the CS department
and the COB are normally happier with this arrangement.
/BAH
I was thinking of the first time anybody typed MAKE FOO.MAC.
>
> By the way, "NCP" in the context of ARPAnet refers to "Network Control
> Protocol", a predecessor to TCP/IP that was specific to 1822-format
> networks.
>
That's how I read NCP. Sounds a tad simpler than DECnet.
I don't know what ANF-10's looked like.
> NCP was a very simple protocol, with a 40-bit header of which 32-bits
> was the destination socket number. Transmitting sockets were always
> odd, receiving sockets were always even; and a socket uniquely
> identified the connection on the system (there could be only one
> connection to a socket). The connection protocol (ICP) involved
> connecting to a well-known socket, reading 32-bits for a new socket to
> actually use, closing the connection to the well-known socket (so others
> could use it), then opening a pair of connections to get a bidirectional
> link.
>
> TCP's design was highly influenced by lessons learned from NCP, and
> especially NCP's complex and fragile ICP.
Are those lessons being relearned these days?
/BAH
I think a lot of schools had those kinds of systems. One for the
data processing that schools had to do w.r.t. scheduling kids and classes
and other bookkeeping things.
do with whatever the computer center could provide. In my school,
the systems were separated by miles and in separate buildings.
Even DEC used a Burroughs for its business processing for a long
time until somebody (probably KO) noticed.
/BAH