
Reading Gordon Bell's VAX strategy document


John Dallman

24 Sept 2023, 10:10:45
Gordon Bell, who was Vice-President of Engineering at DEC 1972-83, is
still alive and documenting much of his life on the web. There's DEC
stuff at https://gordonbell.azurewebsites.net/Digital/DECMuseum.htm

Something particularly interesting is this document on DEC strategy as of
1979:

https://gordonbell.azurewebsites.net/Digital/VAX%20Strategy%20c1979.pdf

At the time, DEC's other active product ranges were PDP-8, DEC-10/DEC-20
and PDP-11. They had decided in 1975 to create an architecture that built
upwards from the PDP-11, rather than building lower-cost DEC-10 machines.
The reasons for doing that were the large installed base of PDP-11s and
the convenience of 8-bit bytes for data communications, especially with
IBM mainframes.

As of 1978/79 they had achieved this and were deciding what to do next.
The strategy expressed in this document is to continue to sell the other
ranges, but concentrate development efforts on the VAX family, and that's
what basically happened. Using a single architecture is seen as a
competitive advantage against IBM's proliferation of incompatible
architectures, which is pretty reasonable, since IBM saw the same problem.


Bell regards competition from "zero cost" microprocessors such as the
8086 and 68000 as likely more significant than other minicomputer
companies, but fails to make a plan to deal with them. DEC was eventually
defeated by 80386 and later PCs and RISC workstations, and that failure
seems to start here. He assumes that DEC can dominate the market for
terminals for its minis by using PDP-11 and VAX microprocessors, but
doesn't seem to realise that compatible terminals can be built at much
lower cost using third-party microprocessors. In any case, the
replacement of minis by PCs and workstations meant that the terminal
market basically vanished.

The idea of running VMS on a terminal with a total of 64KB of RAM and ROM
in 1982 seems implausible now, but it seems to have been the reason for
512-byte pages. Bell praises the extremely compact VAX instruction set
and its elaborate function calls, without appreciating the ways they will
come to inhibit pipelining and out-of-order execution, and thus doom the
architecture to uncompetitive performance.

John

Johnny Billquist

24 Sept 2023, 10:29:05
It's always easy to see mistakes after the fact.
When the VAX was designed, as well as around 1980, memory was still
rather expensive. An instruction set that led to smaller binaries was a
big win at that point in time.

What you could possibly argue is that DEC didn't sufficiently anticipate
the drop in the price of memory, which would lead to totally different
constraints and optimal points.

VMS was never expected to run on something with 64K. You couldn't even
run a reasonable PDP-11 on that little memory at that point. (I said
reasonable, for anyone dragging out a minimal RT-11 system.)

But VAX was most definitely designed for getting programs more memory
efficient. More addressing modes, more things done in microcode to deal
with things in a single instruction. Very variable length
instructions... All was about memory cost. Which made a lot of sense
between 1970 and 1985. After that, memory was becoming so cheap there
was no reason for the optimization angle the VAX had taken. And you had
the rise of RISC.

Johnny

Dave Froble

24 Sept 2023, 12:07:38
Well, you're right about "after the fact" ...

I cannot remember exactly when the first C-VAX came out, but when it did, DEC
then made the fatal mistake. I'm not sure they actually had any options. The
company was rather "top heavy" with many employees to support.

If DEC had gone after the low-end market with the C-VAX, I really feel that DEC
would still be with us today. Since PC users don't have large budgets for
support and such, DEC would have had to downsize the labor force, and that was
something they would have a hard time with.

Doesn't matter, cause that was and is the direction the market was heading, even
back then. It could not be resisted.

But, figure a low cost VAX vs the PCs of the day. With good marketing, and
pricing, Intel would not have become what they are today.

It was going to happen anyway; DEC's resisting just killed the company.

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

John Dallman

24 Sept 2023, 12:10:04
In article <ueph3e$lj1$1...@news.misty.com>, b...@softjar.se (Johnny
Billquist) wrote:

> What you could possibly argue was that DEC didn't enough see or
> anticipate the drop in price of memory, which would lead to totally
> different constraints and optimal points.

Indeed. Nor did they look at the history of the art of making computers
faster. The VAX architecture was implemented readily enough at first, but
made pipelining, out-of-order and other ideas that had been invented in
the 1950s and 1960s hard to add.

> VMS was never expected to run on something with 64K. You couldn't
> even run a reasonable PDP-11 on that little memory at that point.
> (I said responable, for anyone dragging out a minimal RT-11 system.)

I agree it seems crazy, but that's what the paper says, on page 14:

. . . a range of 64 Kbytes of RAM and ROM for VMS in the terminal
to as much as 32 Mbytes in the large configuration . . .

John

gah4

24 Sept 2023, 12:20:17
On Sunday, September 24, 2023 at 7:10:45 AM UTC-7, John Dallman wrote:

(snip)

> As of 1978/79 they had achieved this and were deciding what to do next.
> The strategy expressed in this document is to continue to sell the other
> ranges, but concentrate development efforts on the VAX family, and that's
> what basically happened. Using a single architecture is seen as a
> competitive advantage against IBM's proliferation of incompatible
> architectures, which is pretty reasonable, since IBM saw the same problem.

(snip)

> The idea of running VMS on a terminal with a total of 64KB of RAM and ROM
> in 1982 seems implausible now, but it seems to have been the reason for
> 512-byte pages. Bell praises the extremely compact VAX instruction set
> and its elaborate function calls, without appreciating the ways they will
> come to inhibit pipelining and out-of-order execution, and thus doom the
> architecture to uncompetitive performance.

I was thinking not so long ago, and wrote it into some thread, about how
bad VAX design is for pipelining and OoO execution.

A tiny change would have made a big difference, if someone had thought
about it earlier.

The way VAX addressing modes work is that each operand has a mode byte
followed by any offsets that go with it, then the next mode byte and its
offsets. If instead all the mode bytes were together, followed by the
offsets, it would have been much easier to do OoO processing.

The most important thing you want, when you start reading an instruction,
is to know where the next one is. For IBM S/360, you always know from the
first byte where the next instruction is. Even with the above change, it isn't
easy for VAX, but would be close.

You could put the first 8 or so bytes in a register, and then enough logic
to decode the address mode bytes, and know where the next one was.

VAX is very well designed for serial, microprogrammed processing,
where you read a mode byte, process it along with its offset, then go on
to the next one. There are probably even better ways to change it, but
that one would have been (if done early) so easy, and yet wasn't.
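The difference can be sketched with a toy encoding. The mode nibbles and displacement sizes below are hypothetical, not the real VAX specifier set; the point is only the dependency structure. With interleaved specifiers, each mode byte's position depends on having decoded the previous one; with the mode bytes grouped up front, they sit at known positions and the total length falls out in one look:

```python
# Toy 3-operand ISA loosely inspired by VAX operand specifiers.
# High nibble of a mode byte selects the mode; extra bytes follow it.
# (These mode codes and sizes are made up for illustration.)
EXTRA = {0x0: 0,   # register: no extra bytes
         0x1: 1,   # byte displacement
         0x2: 2,   # word displacement
         0x3: 4}   # longword displacement

def length_interleaved(code, nops):
    """VAX-style layout: mode byte, its displacement, then the next
    mode byte.  Each specifier must be decoded before the position of
    the next mode byte is even known -- a serial dependency chain."""
    i = 1                          # skip the opcode byte
    for _ in range(nops):
        i += 1 + EXTRA[code[i] >> 4]
    return i                       # offset of the next instruction

def length_grouped(code, nops):
    """Proposed layout: all mode bytes first, then all displacements.
    The mode bytes are at known positions, so total length is computed
    from one parallel look at nops bytes."""
    return 1 + nops + sum(EXTRA[b >> 4] for b in code[1:1 + nops])

# Same three operands (byte-disp, register, longword-disp), two layouts:
enc_i = bytes([0xC1, 0x10, 0xAA, 0x00, 0x30, 1, 2, 3, 4])
enc_g = bytes([0xC1, 0x10, 0x00, 0x30, 0xAA, 1, 2, 3, 4])
```

Both encodings are 9 bytes, but only the grouped form reveals that length without walking the specifiers one by one.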

VAX is also nicely designed to make it easy for assembly programmers,
just at the time that just about everyone was moving away from assembly
programming. And the 512 byte page size was too small.



Chris Townley

24 Sept 2023, 13:43:33
My first VAX (after a PDP-11/44) was an 8530 with a massive 32MB and 3 off
428MB disks.

That handled 60 to 80 production users, 2 test systems (not my idea) and
a backup for our Central Ingres set up!

It was still way faster than the old PDP

--
Chris

John Dallman

24 Sept 2023, 14:04:53
In article <uepms6$1ds91$1...@dont-email.me>, da...@tsoft-inc.com (Dave
Froble) wrote:

> Well, you're right about "after the fact" ...

Yup.

> I cannot remember exactly when the first C-VAX came out, but when
> it did, DEC then made the fatal mistake. I'm not sure they
> actually had any options. The company was rather "top heavy" with
> many employees to support.

CVAX systems were available in 1987, according to Wikipedia. But it's a
multi-chip set, more expensive to manufacture and build boards for than
an Intel 80386.

> If DEC had went after the low end market with the C-VAX, I really
> feel that DEC would still be with us today.

Maybe. The MS-DOS hardware and software industry was already very well
established, and competition had driven hardware prices down a lot. DEC
would have had to pick some niches to target and win several of them.

> Since PC users don't have large budgets for support and such, DEC
> would have had to downsize the labor force, and that was something
> they would have a hard time with.

They'd also have to make system administration easy for first-timers, and
condense the documentation a lot. It's a hard thing to do while you're
cutting staff.

John

bill

24 Sept 2023, 18:39:40
On 9/24/2023 1:43 PM, Chris Townley wrote:
>
>
> My first Vax (after PDP 11/44) was 8530 with a massive 32Mb, and 3 off
> 428Mb disks
>
> That handled 60 to 80 production users, 2 test systems (not my idea) and
> a backup for our Central Ingres set up!
>
> It was still way faster than the old PDP
>

That's like comparing an IBM 1401 and 4331. The first VAX I worked
with was an 11/750 and compared to my 11/44's it was a real dog.

bill

Arne Vajhøj

24 Sept 2023, 18:48:56
On 9/24/2023 10:29 AM, Johnny Billquist wrote:
> On 2023-09-24 16:10, John Dallman wrote:
>> The idea of running VMS on a terminal with a total of 64KB of RAM and ROM
>> in 1982 seems implausible now, but it seems to have been the reason for
>> 512-byte pages.

> VMS was never expected to run on something with 64K. You couldn't even
> run a reasonable PDP-11 on that little memory at that point. (I said
> responable, for anyone dragging out a minimal RT-11 system.)

What was the smallest VAX memory wise?

I think I have heard about 256 KB VAX 780's. Can anyone confirm?

Arne


abrsvc

24 Sept 2023, 23:06:09
The 780 had hex height boards that were 256KB each. Ours was one of the largest of those at the universities at the time with 1-1/4 MB (5 boards) when it first arrived. Supported 50+ terminals with it too!!

Dan

John Dallman

25 Sept 2023, 02:48:43
In article <6c594b48-68c2-4400...@googlegroups.com>,
ga...@u.washington.edu (gah4) wrote:

> The most important thing you want, when you start reading an
> instruction, is to know where the next one is. For IBM S/360,
> you always know from the first byte where the next instruction
> is. Even with the above change, it isn't easy for VAX, but
> would be close.

I wonder if that was deliberate for S/360? The original paper on the
architecture does not mention anything about the encoding; a really old
copy of the "Principles of Operation" would be interesting.

John

gah4

25 Sept 2023, 03:14:20
On Sunday, September 24, 2023 at 11:48:43 PM UTC-7, John Dallman wrote:
> In article <6c594b48-68c2-4400...@googlegroups.com>,

(I wrote)

> > The most important thing you want, when you start reading an
> > instruction, is to know where the next one is. For IBM S/360,
> > you always know from the first byte where the next instruction
> > is. Even with the above change, it isn't easy for VAX, but
> > would be close.

> I wonder if that was deliberate for S/360? The original paper on the
> architecture does not mention anything about the encoding; a really old
> copy of the "Principles of Operation" would be interesting.

Some parts were amazingly lucky in that virtual machines work well.

One interesting one is that when S/360 machines have nothing else to
do, they stop executing instructions. There is no idle loop.

The reason for that is that rented machines were charged based on
actual usage. There is a meter that measures how often the machine is
not in a wait state.

But S/360 came not so long after Stretch, designed to be fast and where
many pipelined processor ideas started. Early on, there was the 360/92,
later replaced by the model 91, using pipelining techniques.

One not talked about much, but that I have known for a long time, is
that hexadecimal floating point is more convenient for pipelining.

S/360 has three instruction lengths, which you know from the first
two bits of the opcode. Even if you are not designing a fancy processor,
it is nice to know where the next instruction is. While S/360 is not RISC,
compared to VAX it looks very RISCy.
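That length rule is simple enough to write down directly; a minimal sketch of the S/360 decode from the two high-order opcode bits:

```python
def s360_length(opcode):
    """S/360 instruction length in bytes, from the two high-order bits
    of the opcode: 00 -> RR (2 bytes), 01 -> RX (4), 10 -> RS/SI (4),
    11 -> SS (6).  One byte tells you where the next instruction starts."""
    return (2, 4, 4, 6)[opcode >> 6]

# Real S/360 opcodes: AR (0x1A) is RR, A (0x5A) is RX, MVC (0xD2) is SS.
print(s360_length(0x1A), s360_length(0x5A), s360_length(0xD2))
```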

The book by Blaauw and Brooks, "Computer Architecture" describes
many architectures, but includes some details that actually went
into S/360, because they designed it.

The one I am remembering now is the big endian choice, which they
believe is the right choice. The VMS DUMP program shows why.

It might be in there.





Paul Hardy

25 Sept 2023, 09:37:32
Arne Vajhøj <ar...@vajhoej.dk> wrote:
> What was the smallest VAX memory wise?
> I think I have heard about 256 KB VAX 780's. Can anyone confirm?

I system managed VAX 11/780 serial 000047 in 1979. The original order was
for 256K memory, but we upped it to 768K (3/4 MB) before delivery. It ran
the complete computing of the company, including six programmers, and we
sold time on it to at least four other high tech Cambridge companies -
Shape Data, GDS, Nine Tiles, and whatever Dick Newell’s company was called
at the time (CIS?).

--
Paul at the paulhardy.net domain

Single Stage to Orbit

25 Sept 2023, 11:01:21
I seem to remember Microsoft also used VAX machines to build Windows in
the early days. Was that true?
--
Tactical Nuclear Kittens

John Dallman

25 Sept 2023, 14:29:39
In article <cb83004a-46f6-424f...@googlegroups.com>,
ga...@u.washington.edu (gah4) wrote:

> One not talked about much, but that I have known for a long time, is
> that hexadecimal floating point is more convenient for pipelining.

Do you have a citation for that? I've been updating Wikipedia's page on
hexFP, simply because I'd dug into the idea a bit, and started to realise
why it lost more precision than the architects had expected.
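One source of that precision loss is easy to put a number on: a normalized hex fraction only guarantees a nonzero leading hex *digit*, so up to three leading bits of the fraction can be zero (the "wobble"). A back-of-envelope sketch for the S/360 short format's 24-bit fraction:

```python
def short_hexfp_sig_bits(leading_digit):
    """Significant bits actually carried by an S/360 short hex float
    whose normalized 24-bit fraction begins with the given hex digit.
    Normalization guarantees only that the leading digit is nonzero,
    so the digit itself may start with up to three zero bits."""
    assert 1 <= leading_digit <= 15
    return 24 - (4 - leading_digit.bit_length())

# A fraction starting 0x1 carries 21 significant bits; 0x8..0xF carry
# the full 24.  Averaged over values, the wobble costs a bit or two
# versus a binary format with the same fraction width.
```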

> The book by Blaauw and Brooks, "Computer Architecture" describes
> many architectures, but includes some details that actually went
> into S/360, because they designed it.

Ordered a used volume 1.

John

Arne Vajhøj

25 Sept 2023, 16:16:03
On 9/25/2023 10:14 AM, Single Stage to Orbit wrote:
> On Mon, 2023-09-25 at 14:37 +0100, Paul Hardy wrote:
>> Arne Vajhøj <ar...@vajhoej.dk> wrote:
>>> What was the smallest VAX memory wise?
>>> I think I have heard about 256 KB VAX 780's. Can anyone confirm?
>>
>> I system managed VAX 11/780 serial 000047 in 1979. The original order
>> was for 256K memory, but we upped it to 768K (3/4 MB) before
>> delivery. It ran the complete computing of the company, including six
>> programmers, and we sold time on it to at least four other high tech
>> Cambridge companies
>
> I seem to remember Microsoft also used VAX machines to build Windows in
> the early days. Was that true?

Create cross-compiler/cross-assembler for VMS VAX and run
on a VAX 8000 series or 6000 series?

I have never heard about it. And I am skeptical about it - it seems
like lot of extra development and more upload/download for a
modest speed advantage.

Arne

gah4

25 Sept 2023, 16:21:49
On Monday, September 25, 2023 at 11:29:39 AM UTC-7, John Dallman wrote:
> In article <cb83004a-46f6-424f...@googlegroups.com>,
> ga...@u.washington.edu (gah4) wrote:

> > One not talked about much, but that I have known for a long time, is
> > that hexadecimal floating point is more convenient for pipelining.

> Do you have a citation for that? I've been updating Wikipedia's page on
> hexFP, simply because I'd dug into the idea a bit, and started to realise
> why it lost more precision than the architects had expected.

I don't, though there is a little in the Blaauw and Brooks book.

Some time ago, I was thinking about how to do floating point,
and especially fast floating point, on an FPGA.
(Not much need for slow floating point.)

The biggest part of an adder is the barrel shifter for pre,
and post-normalization. That is, logic to shift N digits
in one operation. (No clocked shift register.) It is much easier
(and smaller) in a higher radix.

In an FPGA with LUT4 logic, that takes N levels for
pre-normalization, one level to do the add/subtract, and
N levels for post normalization.
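The radix effect on shifter depth can be sketched. Assuming one 2-way mux level per power-of-two shift amount (a common way to build a barrel shifter from LUT-style logic), the level count grows with log2 of the number of digit positions, so a coarser digit needs fewer levels:

```python
import math

def barrel_shifter_levels(frac_bits, digit_bits):
    """Mux levels in a barrel shifter that can shift a fraction by any
    whole number of digits, assuming one 2-way mux level per
    power-of-two shift amount."""
    digits = frac_bits // digit_bits
    return math.ceil(math.log2(digits))

# For the 56-bit S/360 double fraction: binary normalization needs
# 6 levels (shifts of 1..32 bits), hex only 4 (shifts of 1..8 digits).
print(barrel_shifter_levels(56, 1), barrel_shifter_levels(56, 4))
```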

As with the previous question, if they were planning the 360/91
(well, 92) from the beginning, that might have been considered.

The 91 does floating point addition in two clock cycles, multiply in three,
single precision divide in 12, and double precision in 18.
The divide algorithm is based on using the parallel multiplier
used for multiply. Normalization isn't the biggest part of
those, so it isn't quite as obvious as you might want.

The other advantage of hexadecimal floating point, is that it
is a lot easier to read in hex dumps.

In a microprogrammed machine, you can shift in hex digits.

gah4

25 Sept 2023, 16:27:03
On Monday, September 25, 2023 at 8:01:21 AM UTC-7, Single Stage to Orbit wrote:

(snip)

> I seem to remember Microsoft also used VAX machines to build Windows in
> the early days. Was that true?

The early days of MS were on PDP-10s. The Living Computer Museum
has the first KS-10 that MS used. Paul Allen really liked the PDP-10,
and that was the first computer Paul and Bill did much of the work on.

I suspect that as DEC went to VAX, MS would have done that, too,
but I don't know that one at all.

I suspect that if they had a choice, it would have been PDP-10 forever.


John Dallman

25 Sept 2023, 17:24:44
In article <01509bf320b2688f715edda...@munted.eu>,
alex....@munted.eu (Single Stage to Orbit) wrote:

> I seem to remember Microsoft also used VAX machines to build
> Windows in the early days. Was that true?

Using big machines would have been more plausible in the earlier days of
Microsoft, when they produced BASIC interpreters and other languages for
8- and 16-bit micros, and did a lot of CP/M implementations. By the time
they were developing Windows in the mid- to late 1980s, the machines that
it ran on were more adequate for development than in the early days.

John

Dave Froble

25 Sept 2023, 19:52:24
Alpha Ultimate workstations (AlphaServer 1200) were used for the initial 64 bit
WEENDOZE.

I seem to remember Gates and co using PDP-10 systems for some early stuff. The
early Basic for sure.

comp.os.vms

25 Sept 2023, 20:15:06 (via the comp.os.vms to email gateway)

For a very interesting read about VAX architecture history

<https://ipfs.io/ipfs/QmdA5WkDNALetBn4iFeSepHjdLGJdxPBwZyY47ir1bZGAK/comp/vax.html>


Regards,

Kerry Main
Kerry dot main at starkgaming dot com




Dan Cross

25 Sept 2023, 20:15:50
In article <01509bf320b2688f715edda...@munted.eu>,
Single Stage to Orbit <alex....@munted.eu> wrote:
>On Mon, 2023-09-25 at 14:37 +0100, Paul Hardy wrote:
>> Arne Vajhøj <ar...@vajhoej.dk> wrote:
>> > What was the smallest VAX memory wise?
>> > I think I have heard about 256 KB VAX 780's. Can anyone confirm?
>>
>> I system managed VAX 11/780 serial 000047 in 1979. The original order
>> was for 256K memory, but we upped it to 768K (3/4 MB) before
>> delivery. It ran the complete computing of the company, including six
>> programmers, and we sold time on it to at least four other high tech
>> Cambridge companies
>
>I seem to remember Microsoft also used VAX machines to build Windows in
>the early days. Was that true?

The early days of Windows NT are well-documented in the book,
"Show Stopper!" by G. Pascal Zachary. In short, they used OS/2
and 386 machines; NT was self-hosting within a couple of years.

- Dan C.

Chris Townley

25 Sept 2023, 20:37:22
I still miss OS/2 - it was great to use, but a bugger to program the
presentation layer, or whatever it was called

--
Chris

gah4

26 Sept 2023, 01:15:21
On Sunday, September 24, 2023 at 7:29:05 AM UTC-7, Johnny Billquist wrote:

(snip)

> VMS was never expected to run on something with 64K. You couldn't even
> run a reasonable PDP-11 on that little memory at that point. (I said
> responable, for anyone dragging out a minimal RT-11 system.)

> But VAX was most definitely designed for getting programs more memory
> efficient. More addressing modes, more things done in microcode to deal
> with things in a single instruction. Very variable length
> instructions... All was about memory cost. Which made a lot of sense
> between 1970 and 1985. After that, memory was becoming so cheap there
> was no reason for the optimization angle the VAX had taken. And you had
> the rise of RISC.

IBM S/360 was designed around about 1963.

There are machines down to 8K bytes, all magnetic core memory.

The first use of semiconductor RAM in a commercial computer is
the memory protection keys in the 360/91, four bits for every 2K
of core. That is built from 16 bit bipolar RAM chips.



gah4

26 Sept 2023, 01:22:29
On Monday, September 25, 2023 at 5:37:22 PM UTC-7, Chris Townley wrote:

(snip)

> I still miss OS/2 - it was great to use, but a bugger to program the
> presentation layer, or whatever it was called

I did character mode programming on OS/2 back to version 1.0.

One of the early ones that I did was to port GNU utilities including diff
and grep. That wasn't quite as nice as a full Unix system, but not so bad
for development and debugging.

Much of that was working on programs that would run under DOS for
others, but I ran mine on OS/2.

At some point, I would allocate segments directly from OS/2 instead
of using malloc(). That allowed for full memory protection, read or write,
outside of any array.


Single Stage to Orbit

26 Sept 2023, 04:01:22
On Tue, 2023-09-26 at 00:15 +0000, Dan Cross wrote:
> > I seem to remember Microsoft also used VAX machines to build
> > Windows in
> > the early days. Was that true?
>
> The early days of Windows NT are well-documented in the book,
> "Show Stopper!" by G. Pascal Zachary.  In short, they used OS/2
> and 386 machines; NT was self-hosting within a couple of years.

Yes thanks, looks like they did use PDPs for the early 8 bit stuff.
--
Tactical Nuclear Kittens

Single Stage to Orbit

26 Sept 2023, 04:01:22
On Mon, 2023-09-25 at 13:27 -0700, gah4 wrote:
> > I seem to remember Microsoft also used VAX machines to build
> > Windows in
> > the early days. Was that true?
>
> The early days of MS were on PDP-10s. The Living Computer Museum
> has the first KS-10 that MS used.  Paul Allen really liked the PDP-
> 10,and that was the first computer Paul and Bill did much of the work
> on.
>
> I suspect that as DEC went to VAX, MS would have done that, too,
> but I don't know that one at all.
>
> I suspect that if they had a choice, it would have been PDP-10
> forever.

Yes, I misremembered for sure. Yes I think they did all their early 8
bit stuff on the PDP. Thanks.
--
Tactical Nuclear Kittens

Neil Rieck

26 Sept 2023, 08:26:49
On Sunday, September 24, 2023 at 10:10:45 AM UTC-4, John Dallman wrote:
> Gordon Bell, who was Vice-President of Engineering at DEC 1972-83 is
> still alive and documenting much of his life on the web. There's DEC
> stuff at https://gordonbell.azurewebsites.net/Digital/DECMuseum.htm
>
> Something particularly interesting is this document on DEC strategy as of
> 1979:
>
> https://gordonbell.azurewebsites.net/Digital/VAX%20Strategy%20c1979.pdf
>
> At the time, DEC's other active product ranges were PDP-8, DEC-10/DEC-20
> and PDP-11. They had decided in 1975 to create an architecture that built
> upwards from the PDP-11, rather than building lower-cost DEC-10 machines.
> The reasons for doing that were the large installed base of PDP-11s and
> the convenience of 8-bit bytes for data communications, especially with
> IBM mainframes.
>
> As of 1978/79 they had achieved this and were deciding what to do next.
> The strategy expressed in this document is to continue to sell the other
> ranges, but concentrate development efforts on the VAX family, and that's
> what basically happened. Using a single architecture is seen as a
> competitive advantage against IBM's proliferation of incompatible
> architectures, which is pretty reasonable, since IBM saw the same problem.
>
>
> Bell regards competition from "zero cost" microprocessors such as the
> 8086 and 68000 as likely more significant than other minicomputer
> companies, but fails to make a plan to deal with them. DEC was eventually
> defeated by 80386 and later PCs and RISC workstations, and that failure
> seems to start here. He assumes that DEC can dominate the market for
> terminals for its minis by using PDP-11 and VAX microprocessors, but
> doesn't seem to realise that compatible terminals can be built at much
> lower cost using third-party microprocessors. In any case, the
> replacement of minis by PCs and workstations meant that the terminal
> market basically vanished.
>
> The idea of running VMS on a terminal with a total of 64KB of RAM and ROM
> in 1982 seems implausible now, but it seems to have been the reason for
> 512-byte pages. Bell praises the extremely compact VAX instruction set
> and its elaborate function calls, without appreciating the ways they will
> come to inhibit pipelining and out-of-order execution, and thus doom the
> architecture to uncompetitive performance.
>
> John

For people needing more information on this topic, purchase a copy of "DEC Is Dead, Long Live DEC" (2003-2004) by Edgar H. Schein. Appendix E was written by Gordon Bell. The book was commissioned by Ken Olsen as a post-mortem warning to other American companies.

https://neilrieck.net/docs/recommended_books_technology.html#dec

The Coles Notes version of the story centers on missed opportunities and mistakes by DEC as the industry shifted from CISC to RISC. Then, on bad advice, the top decided to bet the farm on CISC in the form of the water-cooled Aquarius (VAX-9000). A huge amount of money was also wasted when DEC decided to manufacture their own chips (Alpha) at Hudson, Mass.

I work for a Canadian telecom so we spread our purchases across many companies. I still recall working on a VAX-8550 dual-host cluster (1987-1988) in Toronto when people down the hall had just purchased a 32-bit SPARC server from SUN, which was much smaller than our VAX cluster but was much faster. Over the next 15 years my employer bought a lot of SUN hardware which grew larger but was always faster. I work in Canada where there are two official languages (English + French) and it goes without saying that UNIX always did a better job supporting international character sets, at a time when many American companies refused to move beyond ASCII.

But for me, DEC's hatred for C, UNIX and TCPIP was just plain stupid since 16-bit PDP and 32-bit VAX were responsible for creating ARPAnet.
https://neilrieck.net/links/cool_computer.html#internet
Working on a VAX, once Bill Joy had rewritten all the new libraries in C, they raced from university to university.

Back in 1992, I was working on a VAX-6000 when my employer asked me to install a TCP/IP stack. We were instructed to buy the software from Process Software because DEC's product was still considered experimental. Once on TCPware, we stuck with that product on VAX and Alpha. We would have stayed with it for Itanium but since TCPware didn't support IPv6 we migrated to MultiNet.

Neil Rieck
Waterloo, Ontario, Canada.
http://neilrieck.net

Dan Cross

26 Sept 2023, 10:14:22
In article <944e6c54-4d47-4bf8...@googlegroups.com>,
Neil Rieck <n.r...@bell.net> wrote:
>[snip]
>But for me, DEC's hatred for C, UNIX and TCPIP was just plain stupid since 16-bit PDP and 32-bit VAX were responsible for creating ARPAnet.
>https://neilrieck.net/links/cool_computer.html#internet
>Working on a VAX, once Bill Joy had rewritten all the new libraries in C, they raced from university to university.

This is surprising to me. My sense was always that there was
more done on the ARPANET with the PDP-10 than the -11, though
there were certainly a lot of PDP-11 hosts in the early days.
Still, I'd put the PDP-10 as more responsible for ARPANET than
the -11.

Certainly, once 4.1c BSD got TCP/IP and that escaped to
universities, Unix on VAX (and then whatever BSD was ported to;
Sun for instance got TCP/IP from Berkeley) became dominant on
the Internet.

>Back in 1992, I was working on a VAX-6000 when my employer asked me to install a TCP/IP stack. We were instructed to buy the software from
>Process Software because DEC's product was still considered experimental. Once on TCPware, we stuck with that product on VAX and Alpha. We
>would have stayed with it for Itanium but since TCPware didn't support IPv6 we migrated to MultiNet.

In some respects, I think that DEC's vision for the world was in
fact too early: highly networked, workstations, terminals and
hosts all interconnected. It was quite compelling, but just a
tad too early to pick up TCP/IP etc.

- Dan C.

Rich Alderson

26 Sept 2023, 18:09:01
Neil Rieck <n.r...@bell.net> writes:

> But for me, DEC's hatred for C, UNIX and TCPIP was just plain stupid since
> 16-bit PDP and 32-bit VAX were responsible for creating ARPAnet.

> https://neilrieck.net/links/cool_computer.html#internet

> Working on a VAX, once Bill Joy had rewritten all the new libraries in C,
> they raced from university to university.

A great deal of the work on the ARPANET and early Internet was done on PDP-10
family computers running BBN's TENEX, DEC's TOPS-20 (a TENEX derivative),
Stanford AI Lab's WAITS (a Tops-10 derivative), and MIT AI Lab's ITS.

The 16 bit computers used as routers on the ARPANET were Honeywell products,
not DEC.

Unix(TM) did not get TCP/IP until the 1980s, a dozen years after the ARPANET
began, and several years after TCP/IP was defined. The standards were hosted
on a PDP-10 at SRI, and model implementations were generally done on PDP-10s.

--
Rich Alderson ne...@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen

Lars Brinkhoff

27 Sept 2023, 02:46:23
Rich Alderson wrote:
> Neil Rieck wrote:
>> But for me, DEC's hatred for C, UNIX and TCPIP was just plain stupid
>> since 16-bit PDP and 32-bit VAX were responsible for creating
>> ARPAnet.
>
> A great deal of the work on the ARPANET and early Internet was done on
> PDP-10 family computers
>
> Unix(TM) did not get TCP/IP until the 1980s, a dozen years after the
> ARPANET began

To this I'd like to add, first there were a few PDP-11s running Unix
with the NCP protocol on the pre-TCP ARPANET. E.g. University of
Illinois and RAND. Second, VAX computers weren't even around when
ARPANET got started. In fact, were there any at all on the ARPANET
before the 1983 flag day?

Dan Cross

27 Sept 2023, 09:09:08
In article <7w5y3wj...@junk.nocrew.org>,
Almost certainly. The initial TCP/IP implementation work for
Unix was being done at BBN at the time, and I imagine that meant
VAXen connected to ARPANET.

- Dan C.

Johnny Billquist

unread,
27 Sept 2023, 17:37:42
to
Well, before flag day, ARPANET wasn't speaking TCP/IP...

Johnny

Lars Brinkhoff

unread,
28 Sept 2023, 02:09:42
to
Johnny Billquist wrote:
> Dan Cross wrote:
>>> In fact, were there any [VAXen] at all on the ARPANET before the
>>> 1983 flag day?
>> Almost certainly. The initial TCP/IP implementation work for Unix
>> was being done at BBN at the time, and I imagine that meant VAXen
>> connected to ARPANET.

I meant VAX machines talking the NCP protocol.

> Well, before flag day, ARPANET wasn't speaking TCP/IP...

Yet, there were experiments with TCP long before the flag day so it's
not a 100% either/or situation. I get the feeling (but I have no
evidence handy) some subset of nodes got started using TCP before NCP
was shut down. On several occasions in 1982 (and maybe earlier?) BBN
arranged for NCP "brown-outs" to encourage speedy development.

Johnny Billquist

unread,
28 Sept 2023, 06:58:08
to
There was of course development and testing done between machines and
so on. But that was not "ARPANET". ARPANET was running NCP until flag
day, when it officially switched to IP. And at some point after that,
all of ARPANET became just the 10.* addresses on the Internet, and then
ARPANET was turned off, and it was decided that 10.* should not exist on
the Internet anymore...

Johnny

Single Stage to Orbit

unread,
28 Sept 2023, 09:01:23
to
On Thu, 2023-09-28 at 12:58 +0200, Johnny Billquist wrote:
> There were of course development, and testing done between machines
> and so on. But that was not "ARPANET". ARPANET was running NCP until
> flag day, when it officially switched to IP. And at some point after
> that, all of ARPANET because just the 10.* addresses on the Internet,
> and then ARPANET was turned off, and it was decided that 10.* should
> not exist on the Internet anymore...

That 10.* address range still lives on in private networks to this very
day. Hadn't realised until now that ARPAnet actually had that address
range.
--
Tactical Nuclear Kittens

Johnny Billquist

unread,
28 Sept 2023, 10:20:55
to
Yes. It's a private range for exactly the reason that when ARPANET was
decommissioned/turned off, it was decided that its address range would
not be reused. Which made it available for private use, as it is today.


One bit of ARPANET still exists today. The weird reverse DNS lookups on
IP addresses are done within the arpa.net domain. :-)

Gromit:/Users/johnny.billquist> nslookup -query=ptr 8.8.8.8.in-addr.arpa.net
Server: 195.186.1.111
Address: 195.186.1.111#53

Non-authoritative answer:
8.8.8.8.in-addr.arpa.net name = localhost.

Authoritative answers can be found from:


Johnny

gah4

unread,
28 Sept 2023, 12:57:05
to
On Thursday, September 28, 2023 at 7:20:55 AM UTC-7, Johnny Billquist wrote:

(snip)

> One bit of ARPANET still exists today. The weird reverse DNS lookups on
> IP addresses are done within the arpa.net domain. :-)
>
> Gromit:/Users/johnny.billquist> nslookup -query=ptr 8.8.8.8.in-addr.arpa.net
> Server: 195.186.1.111
> Address: 195.186.1.111#53
>
> Non-authoritative answer:
> 8.8.8.8.in-addr.arpa.net name = localhost.

All the ones I have are in-addr.arpa. No .net on them.

Single Stage to Orbit

unread,
28 Sept 2023, 14:01:22
to
On Thu, 2023-09-28 at 09:57 -0700, gah4 wrote:
> > One bit of ARPANET still exists today. The weird reverse DNS
> > lookups on IP addresses are done within the arpa.net domain. :-)
> >
> > Gromit:/Users/johnny.billquist> nslookup -query=ptr 8.8.8.8.in-
> > addr.arpa.net
> > Server: 195.186.1.111
> > Address: 195.186.1.111#53
> >
> > Non-authoritative answer:
> > 8.8.8.8.in-addr.arpa.net name = localhost.
>
> All the ones I have are in-addr.arpa.   No .net on them.

$ nslookup -query=ptr 8.8.8.8.in-addr.arpa.net
net.c:537: probing sendmsg() with IPV6_TCLASS=b8 failed: Network is
unreachable
Server: 192.168.2.254
Address: 192.168.2.254#53

Non-authoritative answer:
8.8.8.8.in-addr.arpa.net name = localhost.

Not here it is :)
--
Tactical Nuclear Kittens

gah4

unread,
28 Sept 2023, 14:25:50
to
On Thursday, September 28, 2023 at 11:01:22 AM UTC-7, Single Stage to Orbit wrote:
> On Thu, 2023-09-28 at 09:57 -0700, gah4 wrote:

(snip)

> > All the ones I have are in-addr.arpa. No .net on them.

> $ nslookup -query=ptr 8.8.8.8.in-addr.arpa.net
> net.c:537: probing sendmsg() with IPV6_TCLASS=b8 failed: Network is
> unreachable
> Server: 192.168.2.254
> Address: 192.168.2.254#53
> Non-authoritative answer:
> 8.8.8.8.in-addr.arpa.net name = localhost.
> Not here it is :)

But it gives the wrong answer!

> 8.8.8.8
Server: 127.0.0.1
Address: 127.0.0.1#53

Non-authoritative answer:
8.8.8.8.in-addr.arpa name = dns.google.


In my few tries, the in-addr.arpa.net always returns localhost.

in-addr.arpa, the one it has been for years, decades, returns
the right answer.





Dan Cross

unread,
28 Sept 2023, 15:43:39
to
In article <uf3m7s$lv$4...@news.misty.com>,
Johnny Billquist <b...@softjar.se> wrote:
>On 2023-09-28 08:09, Lars Brinkhoff wrote:
>> Johnny Billquist wrote:
>>> Dan Cross wrote:
>>>>> In fact, were there any [VAXen] at all on the ARPANET before the
>>>>> 1983 flag day?
>>>> Almost certainly. The initial TCP/IP implementation work for Unix
>>>> was being done at BBN at the time, and I imagine that meant VAXen
>>>> connected to ARPANET.
>>
>> I meant VAX machines talking the NCP protocol.
>>
>>> Well, before flag day, ARPANET wasn't speaking TCP/IP...
>>
>> Yet, there were experiments with TCP long before the flag day so it's
>> not a 100% either/or situation. I get the feeling (but I have no
>> evidence handy) some subset of nodes got started using TCP before NCP
>> was shut down. On several occasions in 1982 (and maybe earlier?) BBN
>> arranged for NCP "brown-outs" to encourage speedy development.
>
>There were of course development, and testing done between machines and
>so on. But that was not "ARPANET". ARPANET was running NCP until flag
>day, when it officially switched to IP.

...and for a while after that, too!

>And at some point after that,
>all of ARPANET because just the 10.* addresses on the Internet, and then
>ARPANET was turned off, and it was decided that 10.* should not exist on
>the Internet anymore...

What you wrote above is certainly true, but given that the
ARPANET was the original Internet backbone and that the initial
TCP work for VAX Unix was being done at BBN, it's not
unreasonable to believe that they did an NCP implementation for
the VAX before TCP/IP proper. Of course, that's speculation (I
have no evidence) but it's not unreasonable.

- Dan C.

Dan Cross

unread,
28 Sept 2023, 15:47:03
to
In article <e1bcd5f3-807f-46bc...@googlegroups.com>,
The correct domain for reverse DNS lookup is in-addr.arpa. It
looks like `arpa.net` is just a normal domain (if a
surprising one).

- Dan C.
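[For readers who want to check this themselves: the PTR query name is built by reversing the octets and appending in-addr.arpa, per RFC 1035. A minimal Python sketch of that construction (the function name is my own, for illustration):]

```python
def reverse_name(ipv4: str) -> str:
    """Build the PTR query name for an IPv4 address: octets reversed,
    suffixed with in-addr.arpa (per RFC 1035) -- no .net on the end."""
    octets = ipv4.split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255 for o in octets):
        raise ValueError(f"not a dotted-quad IPv4 address: {ipv4!r}")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_name("192.0.2.1"))  # 1.2.0.192.in-addr.arpa
print(reverse_name("8.8.8.8"))    # 8.8.8.8.in-addr.arpa (a palindrome, as it happens)
```

[This is the name nslookup builds internally when you query a PTR record for an address; the 8.8.8.8.in-addr.arpa.net queries earlier in the thread hit an unrelated arpa.net domain instead.]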

Lars Brinkhoff

unread,
28 Sept 2023, 15:49:37
to
Johnny Billquist wrote:
> There were of course development, and testing done between machines
> and so on. But that was not "ARPANET". ARPANET was running NCP until
> flag day, when it officially switched to IP.

NCP and TCP operated in parallel on the ARPANET for a while. The
Internet Protocol Transition Workbook from November 1981 encouraged new
hosts to only implement TCP, not NCP, and says at that point there were
TCP-only hosts. On several occasions during 1982, NCP was temporarily
blocked, but TCP was allowed. What happened on flag day was that NCP
was permanently blocked.

So what I was wondering was: were there any VAXen talking NCP, or did
they jump straight to TCP? I'd like to see evidence, not handwaving.

Dan Cross

unread,
28 Sept 2023, 16:00:00
to
In article <7wh6neh...@junk.nocrew.org>,
This came up on the TUHS list back in 2021 (you were on the
thread, Lars). That pointed to this:

https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/history

Which strongly implies that there was some "NCP" in VAX Unix
sometime in 1980. Whether that was the Network Control Protocol
or just an affectation for "networking code" (as implied by Noel
Chiappa in the TUHS thread) is unknown.

- Dan C.

Johnny Billquist

unread,
28 Sept 2023, 17:39:06
to
You are right. I don't know where I got the ".net" from, and I'm
surprised it worked...

Johnny

Johnny Billquist

unread,
28 Sept 2023, 17:41:46
to
Very definitely possible. And it's probably even possible to find
concrete information if you dig through the old RFCs. I know there are
ones which are just listing machines and OSes that are capable to
interoperate on the ARPANET. But I've read through them so many times I
don't care to do it again right now. :-)

Johnny

Johnny Billquist

unread,
28 Sept 2023, 17:43:28
to
How would they interoperate? TCP and NCP are not exactly compatible in
any way.

You would basically have to have two different parallel networks, and
then you might have some machines that would act as gateways between the
two networks.

Johnny

Johnny Billquist

unread,
28 Sept 2023, 17:44:59
to
I have obviously no idea. But one also has to be careful that DECnet
doesn't get mixed in here, since there is also an NCP there. And Ultrix
talked DECnet on VAXen. Not sure when that came about, though...

Johnny

gah4

unread,
28 Sept 2023, 18:19:41
to
On Thursday, September 28, 2023 at 2:43:28 PM UTC-7, Johnny Billquist wrote:

(snip)

> You would basically have to have two different parallel networks, and
> them you might have some machines that would act as gateway between the
> two networks.

I don't remember back to NCP, but I do remember exactly that
for DECnet and TCP/IP. There were gateways that would transfer
mail between the two. I believe also some that would allow
remote login between the two.


Lars Brinkhoff

unread,
29 Sept 2023, 01:07:42
to
Johnny Billquist wrote:
>> NCP and TCP operated in parallel on the ARPANET for a while.
> How would they interoperate? TCP and NCP are not exactly compatible in
> any way.

Details are found in the "Internet Protocol Transition Workbook".

> you might have some machines that would act as gateway between the two
> networks.

That's exactly what they did.

Lars Brinkhoff

unread,
29 Sept 2023, 02:22:38
to
Dan Cross writes:
> This came up on the TUHS list back in 2021 (you were on the
> thread, Lars). That pointed to this:
>
> https://www.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/history
>
> Which stronly implies that there was some "NCP" in VAX Unix
> sometime in 1980. Whether that was the Network Control Protocol
> or just an affectation for "networking code" (as implied by Noel
> Chiappa in the TUHS thread) is unknown.

Thanks! Yes, that would be unknown. That is almost an optimal point in
time for ambiguity as to what NCP means. DECnet, Chaosnet, and IBM also
picked up the term as roughly equivalent to "network stack". So this
BBN VAX NCP could be either the old Arpanet NCP, or a new TCP stack.

As per the 1981 Transition book, there were already TCP-only hosts on
the Arpanet, implying that some sites developed and deployed TCP well
before that. I get the sense that everyone was aware the switch was
going to happen and new development was towards TCP. But I don't know
the exact timeline.

Neil Rieck

unread,
29 Sept 2023, 08:24:53
to
Most people reading this thread will already know many of the following facts:
1) ARPANET research begins in 1966 (ARPA becomes DARPA in 1972)
2) A lot of people were writing their own client/server modifications before 1982, and much of it was in assembler
3) The various network modifications were not compatible, so DARPA wanted to develop a newer technology which would allow the various networks to interconnect (this is where the second name, internet, comes from)
4) Many people today do not know that UDP was developed 5-6 years 'after' TCP (many think it was the other way around; UDP was primarily developed to aid in packet routing but today it has many other uses (SIP springs to mind))
5) DARPA needed standardized protocols and code; this would best come from one team. Not sure of all the politics, but much of this work eventually came from a gifted programmer at UC Berkeley by the name of Bill Joy. He did a lot of his work on a VAX running BSD UNIX.
6) I'm not certain who moved all the assembler code into C, but once that was done, it was distributed amongst all the universities who were running UNIX systems.
7) I knew a lot of people who were running third party stacks on their Windows and Macintosh systems between 1994 and 1998. At that time network communication interfaces all cost a lot of money, so most newbies were asking questions like "how can I be using this internet stack for free?" My answer was always "anything developed by the US tax payer is usually placed into the public domain".

Neil



Dan Cross

unread,
29 Sept 2023, 10:54:31
to
In article <7w7co9i...@junk.nocrew.org>,
I, too, find it hard to imagine that a lot of effort would have
been put into a VAX NCP implementation since it was clear that
TCP/IP was coming, but as a stopgap or some special purpose? I
could see it.

I found a copy of the hosts table from 1983:
https://github.com/ttkzw/hosts.txt/blob/master/pub/hosts/19830119/HOSTS.TXT

This lists a number of VAX systems that appear to have been
assigned "ARPANET host numbers", but they could also be "RCCnet"
numbers (I don't think I've ever heard of RCCnet).

This:
https://github.com/ttkzw/hosts.txt/blob/master/pub/hosts/19820615/SYSHST%3B%20HOSTS%20PRETTY
appears to show several VAXen on ARPANET (BBNF running VMS at
01/05 and UCLA-SECURITY running Unix at 2/01, among others).

Again, it's not entirely clear if these are NCP-hosts, possibly
running TCP/IP, or what. I do feel comfortable assuming they're
not running DECnet (or at least, that's irrelevant to them being
listed in these host tables).

- Dan C.

Dan Cross

unread,
29 Sept 2023, 10:57:52
to
In article <uf4s1t$cle$6...@news.misty.com>,
Well, the statement was that they existed in parallel, not that
they necessarily inter-operated. But communication between them
was documented in the transition guide, and there were gateways
for a while. I even see some evidence in early sendmail
configurations that there were provisions for sending mail to
NCP gateways (e.g., `usr.lib/sendmail/cf/ncphosts.m4` in 4.1c
BSD; UDEL was the NCP gateway and the file has a comment at the
top that says, "When NCP goes away, so should this file").

- Dan C.

gah4

unread,
29 Sept 2023, 15:03:27
to
On Friday, September 29, 2023 at 5:24:53 AM UTC-7, Neil Rieck wrote:

(snip)

> 7) I knew a lot of people who were running third party stacks on their Windows and Macintosh systems between 1994 and 1998. At that time network communication interfaces all cost a lot of money, so most newbies were asking questions like "how can I be using this internet stack for free?" My answer was always "anything developed by the US tax payer is usually placed into the public domain".

It was just about then, that NIC prices came down to really affordable prices.

I was about then working on school networking projects, where we really could put Ethernet into a school.

But yes, both MS-DOS and MacOS had little support for Ethernet. There was NCSA Telnet, which connected directly to the Ethernet card, with no OS support. (That was free, government funded. There were some non-free versions around.)

After not so long, MacOS had some support, and we ran a different NCSA Telnet. But also about then, Netscape 2.0, which was small enough to run on smaller Macintosh systems.

I do remember buying networking parts on eBay, and often could buy 10 for less than the price of 1.
People who wanted one, didn't bid on 10. (Less likely now, but maybe it still works.)

Arne Vajhøj

unread,
29 Sept 2023, 15:28:46
to
On 9/29/2023 3:03 PM, gah4 wrote:
> On Friday, September 29, 2023 at 5:24:53 AM UTC-7, Neil Rieck wrote:
>> 7) I knew a lot of people who were running third party stacks on
>> their Windows and Macintosh systems between 1994 and 1998. At that
>> time network communication interfaces all cost a lot of money, so
>> most newbies were asking questions like "how can I be using this
>> internet stack for free?" My answer was always "anything developed
>> by the US tax payer is usually placed into the public domain".
>
> It was just about then, that NIC prices came down to really
> affordable prices.
>
> I was about then working on school networking projects, where we
> really could put Ethernet into a school.
>
> But yes, both MS-DOS and MacOS had little support for Ethernet.
> There was NCSA Telnet, which connected directly to the Ethernet card,
> with no OS support. (That was free, government funded. There were
> some non-free versions around.)
>
> After not so long, MacOS had some support, and we ran a different
> NCSA Telnet. But also about then, Netscape 2.0, which was small
> enough to run on smaller Macintosh systems.

I would assume a lot of the people here ran PathWorks on DOS PC's.

Arne


gah4

unread,
29 Sept 2023, 15:46:17
to
On Friday, September 29, 2023 at 12:28:46 PM UTC-7, Arne Vajhøj wrote:

(snip)

> > But yes, both MS-DOS and MacOS had little support for Ethernet.
> > There was NCSA Telnet, which connected directly to the Ethernet card,
> > with no OS support. (That was free, government funded. There were
> > some non-free versions around.)

> > After not so long, MacOS had some support, and we ran a different
> > NCSA Telnet. But also about then, Netscape 2.0, which was small
> > enough to run on smaller Macintosh systems.

> I would assume a lot of the people here ran PathWorks on DOS PC's.

I do remember that one. And like NCSA Telnet, it had no OS support.

But as noted previously, unless I forgot, you had to pay for that one.
Otherwise, yes, it allowed for DECnet connections.

I do remember having an account on an across the country MicroVAX
reachable by DECnet but not TCP/IP. I had to use the numeric address,
as the PC didn't know it by name.



Arne Vajhøj

unread,
29 Sept 2023, 15:51:48
to
On 9/29/2023 3:46 PM, gah4 wrote:
> On Friday, September 29, 2023 at 12:28:46 PM UTC-7, Arne Vajhøj wrote:
>>> But yes, both MS-DOS and MacOS had little support for Ethernet.
>>> There was NCSA Telnet, which connected directly to the Ethernet card,
>>> with no OS support. (That was free, government funded. There were
>>> some non-free versions around.)
>
>>> After not so long, MacOS had some support, and we ran a different
>>> NCSA Telnet. But also about then, Netscape 2.0, which was small
>>> enough to run on smaller Macintosh systems.
>
>> I would assume a lot of the people here ran PathWorks on DOS PC's.
>
> I do remember that one. And like NCSA Telnet, had no OS support.

MS did not provide anything. Drivers from the NIC vendor. The rest from
DEC.

> But as noted previously, unless I forgot, you had to pay for that one.

Yes. If I remember correctly, one bought an N-user license for the
server on VMS and then one could use it on N PC's.

> Otherwise, yes, it allowed for DECnet connections.
>
> I do remember having an account on an across the country MicroVAX
> reachable by DECnet but not TCP/IP. I had to use the numeric address,
> as the PC didn't know it by name.

Same as IP - it is either name or number. Not 256 x 256 x 256 x 256 but
just 64 x 1024.

Arne
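[For readers who haven't done the conversion in a while, a small Python sketch of that arithmetic, assuming the DECnet Phase IV convention: a 6-bit area (1-63) and a 10-bit node (1-1023) packed into a 16-bit address. Function names are my own, for illustration:]

```python
def decnet_to_number(area: int, node: int) -> int:
    """Pack a DECnet Phase IV area.node pair into its 16-bit numeric form:
    area in the high 6 bits, node in the low 10 (i.e. area * 1024 + node)."""
    if not (1 <= area <= 63):
        raise ValueError("area must be 1-63")
    if not (1 <= node <= 1023):
        raise ValueError("node must be 1-1023")
    return area * 1024 + node

def number_to_decnet(n: int) -> str:
    """Unpack the numeric form back into the familiar area.node notation."""
    return f"{n // 1024}.{n % 1024}"

print(decnet_to_number(12, 345))  # 12633 -- the kind of five-digit number gah4 recalls
print(number_to_decnet(12633))    # 12.345
print(decnet_to_number(63, 1023)) # 65535, the top of the 64 x 1024 space
```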


gah4

unread,
29 Sept 2023, 18:08:52
to
On Friday, September 29, 2023 at 12:51:48 PM UTC-7, Arne Vajhøj wrote:

(snip, I wrote)

> > I do remember having an account on an across the country MicroVAX
> > reachable by DECnet but not TCP/IP. I had to use the numeric address,
> > as the PC didn't know it by name.

> Same as IP - it is either name or number. Not 256 x 256 x 256 x 256 but
> just 64 x 1024.

Yes, a five digit number.

And if I remember right, it was through a 56 kbit/s link.

Slow, but fast enough for remote login. Sometimes there was
a delay, though.

That was about 1987, when a fast cross country network used
a T1 line, but this wasn't one. Actually, some of the links might
have been T1, but just not the one I was on.



Johnny Billquist

unread,
2 Oct 2023, 06:16:30
to
There was definitely software that forwarded mail between the two, yes.
Forwarding terminal traffic would also be doable, but it's a bit harder
as there is no conventional way to inform the intermediate hop what end
destination you would like to connect to. So usually, what people would
do, is connect and log in to the host that talked both protocols, and
then use the other protocol to establish the connection to the next host.

Now, you could argue that this means it was possible to remote login
between the two networks, but I think that is sort of stretching the
definition of a gateway between protocols.

(Heck, I've written a mail package that talks both SMTP and MAIL-11, and
which can forward mail between the two. So that is something that still
happens to this day...)

But that doesn't mean that DECnet is suddenly a part of TCP/IP.

Same with NCP vs. TCP/IP. You could definitely have gateways, and I'm
sure there were. But ARPANET was talking NCP until flag day, at which
point it switched over to talk TCP/IP.

If you talked TCP/IP before flag day you would have needed some kind of
gateway in order to interact in any way with ARPANET.
After flag day it was the reverse. (If anyone stayed on NCP.)

But I guess, in a sense, you could say that this becomes a question of
the definition of what is/was ARPANET.

Johnny

Johnny Billquist

unread,
2 Oct 2023, 06:18:44
to
That's what I expect. So we have ARPANET, which is talking NCP, and you
have hosts that talk TCP/IP that can communicate with hosts on the
ARPANET via a gateway. Does that mean the TCP/IP hosts are on ARPANET? I
would say not. Just as hosts on my hobbyist DECnet are not necessarily on
the internet, but they can communicate with hosts on the internet when
there is some gateway in between that can forward stuff for them.

Johnny

Lars Brinkhoff

unread,
2 Oct 2023, 06:45:37
to
Johnny Billquist wrote:
> But ARPANET was talking NCP until flag day, at which point it switched
> over to talk TCP/IP. [...] But I guess, in a sense, you could say
> that this becomes a question of the definition of what is/was ARPANET.

Already in 1981 it was encouraged that new ARPANET hosts only implement
TCP, leap-frogging NCP. It doesn't seem plausible to me it was thought
that those hosts were operating outside ARPANET.

Johnny Billquist

unread,
2 Oct 2023, 06:53:10
to
I would consider it to be a case of:
We know we are going to switch to TCP/IP soon, so it makes no sense that
you implement NCP. Until we switch, you can get partial participation
via gateways. And you can of course talk directly with others who are
also running TCP/IP.

Johnny

gah4

unread,
2 Oct 2023, 07:39:35
to
On Monday, October 2, 2023 at 3:16:30 AM UTC-7, Johnny Billquist wrote:
> On 2023-09-29 00:19, gah4 wrote:

(snip)

> > I don't remember back to NCP, but I do remember exactly that
> > for DECnet and TCP/IP. There were gateways that would transfer
> > mail between the two. I believe also some that would allow
> > remote login between the two.

> There were definitely software that forwarded mail between the two, yes.
> Forwarding terminal traffic would also be doable, but it's a bit harder
> as there is no conventional way to inform the intermediate hop what end
> destination you would like to connect to. So usually, what people would
> do, is connect and log in to the host that talked both protocols, and
> then use the other protocol to establish the connection to the next host.

> Now, you could argue that this means it was possible to remote login
> between the two netwowrks, but I think that is sortof stretching the
> definition gateway between protocols.

It is some years now, so I don't remember the details, but I am pretty
sure that there was one that worked even if you didn't have an account.

It might have been logically the same as logging in, but one didn't
actually log in.

It might be that you supplied the user@host on the LOGIN prompt,
which then did the connection. Or, the other way, you put the
HOST::USER in the LOGIN: prompt.

It is pretty many years now, and not so many hosts did it.


gah4

unread,
2 Oct 2023, 08:00:42
to
On Monday, October 2, 2023 at 3:16:30 AM UTC-7, Johnny Billquist wrote:

(snip)

> There were definitely software that forwarded mail between the two, yes.
> Forwarding terminal traffic would also be doable, but it's a bit harder
> as there is no conventional way to inform the intermediate hop what end
> destination you would like to connect to. So usually, what people would
> do, is connect and log in to the host that talked both protocols, and
> then use the other protocol to establish the connection to the next host.

It is described here:

http://www.bitsavers.org/pdf/dec/vax/ultrix-32/DECnet_Ultrix_4.0/AA-JQ71C-TE_DECnet_Ultrix_4.0_DECnet-Internet_Gateway_Use_and_Management_May1990.pdf

You connect to the gateway with either SET HOST or telnet, then put

host::

or

host!

into the login: prompt on the Ultrix system.



Dan Cross

unread,
2 Oct 2023, 08:56:48
to
In article <ufe5e0$nbu$7...@news.misty.com>,
The layering is not quite right here. NCP was essentially a
transport protocol, and the IMPs provided lower-level network
protocol services; in this sense, NCP is closer to TCP than to
IP. The IMPs, in turn, used a protocol that was commonly called
"1822" (from a BBN technical report) to communicate with ARPANET
hosts; the initial TCP/IP implementations hosted on ARPANET fed
IP datagrams directly to IMPs using 1822.

See, e.g., IEN 28, sec 1.4 ["Interfaces"]. To quote:
|In the ARPANET case, for example, the Internet module would
|call on a local net module which would add the 1822 leader [6]
|to the internet segment creating an ARPANET message to transmit
|to the IMP.
(From: https://www.rfc-editor.org/ien/ien28.pdf)

The ARPANET was the first backbone for internetworking using IP,
but TCP/IP and NCP sort of existed in quasi-parallel at the
time. So TCP/IP hosts were very much "on the ARPANET", in the
sense that they used the packet network of IMPs for
communication in the same way that NCP-only hosts did.

- Dan C.

Johnny Billquist

unread,
2 Oct 2023, 11:20:25
to
That would possibly have been how I would do it.

But that is basically just the same as logging in to the intermediate
machine and starting a new session from there. There isn't really a
protocol translation directly between the two sides as such.

> It is pretty many years now, and not so many hosts did it.

I'm trying to remember if I saw/heard of something like that. I might
have, but I might just also be making that up as I write this. Too long
ago...

Johnny

Johnny Billquist

unread,
2 Oct 2023, 11:31:24
to
It again goes into what we mean when we say "ARPANET".
Just because you had other protocols using the same underlying
infrastructure, does it mean they are part of the same network?
I would say not.
Just as with VPNs. They are all on the same physical network, but you
can't speak directly between them without a gateway that forwards the
traffic between the two.
But the "problem" with ARPANET is that it was speaking another protocol
that isn't interoperable with TCP before flag day. So, ARPANET was NCP.
The fact that other protocols also operated over the same infrastructure
doesn't mean they were ARPANET.

Johnny

Dan Cross

unread,
2 Oct 2023, 11:40:3702/10/2023
to
In article <ufeno9$g48$2...@news.misty.com>,
The historical record shows that the players at the time meant
the network of IMPs and the hosts that connected to them. It
seems pretty clear that they didn't _just_ mean NCP.

>Just because you had other protocols using the same underlying
>infrastructure, does it mean they are part of the same network?
>I would say not.

This is arguing semantics to an extent, but to answer this
question, I would describe such an arrangement as different
applications of the underlying network.

>Just as with VPNs. They are all on the same physical network, but you
>can't speak directly between them without a gateway that forwards the
>traffic between the two.
>But the "problem" with ARPANET is that it was speaking another protocol
>that isn't interoperable with TCP before flag day.

Yes, it was speaking 1822. :-) It spoke 1822 after flag day,
too.

>So, ARPANET was NCP.

In this case, the historical record is clear: ARPANET was the
physical set of IMPs and their interconnecting lines. It was
not just NCP.

>The fact that other protocols also operated over the same infrastructure
>don't mean they were ARPANET.

Well, in this case, NCP came after 1822; initially, hosts used
1822 directly for host-to-host communication in the context of
ARPANET, but that proved unsatisfactory, so NCP was designed and
implemented. So given that "ARPANET" predated NCP, it seems
unfair to redefine the former to mean the latter, particularly
when it's pretty clear that that was not how the people working
on it at the time thought of it (as can be seen from the
above-referenced IENs, for example).

- Dan C.

Scott Dorsey

unread,
2 Oct 2023, 19:58:49
to
gah4 <ga...@u.washington.edu> wrote:
>It is some years now, so I don't remember the details, but I am pretty
>sure that there was one that worked even if you didn't have an account.

Decnet to arpa? Sure, there were lots of them and none that I know
required an account. It was just a polite service people provided.
The best one was at Columbia which had really good connectivity (and also
bitnet connectivity) so you could do "fredbox::fr...@columbia.edu" as I
recall.
--scott


--
"C'est un Nagra. C'est suisse, et tres, tres precis."

gah4

unread,
2 Oct 2023, 23:44:40
to
On Monday, October 2, 2023 at 4:58:49 PM UTC-7, Scott Dorsey wrote:
> gah4 <ga...@u.washington.edu> wrote:
> >It is some years now, so I don't remember the details, but I am pretty
> >sure that there was one that worked even if you didn't have an account.

> Decnet to arpa? Sure, there were lots of them and none that I know
> required an account. It was just a polite service people provided.
> The best one was at Columbia which had really good connectivity (and also
> bitnet connectivity) so you could do "fredbox::fr...@columbia.edu" as I
> recall.

Seems to be a feature of Ultrix.

I presume it can be turned on and off.


jimc...@gmail.com

unread,
3 Oct 2023, 00:49:38
to
On Monday, September 25, 2023 at 5:15:50 PM UTC-7, Dan Cross wrote:
> The early days of Windows NT are well-documented in the book,
> "Show Stopper!" by G. Pascal Zachary. In short, they used OS/2
> and 386 machines; NT was self-hosting within a couple of years.

Zachary wasn't very technical and made a number of mistakes in that book, although overall his story is well-researched and compelling. NT was originally brought up on single-board Intel i860 hardware, followed by MIPS DECstations and then i386 hardware; Cutler insisted that the team not focus on i386 because he wanted to keep NT from becoming wedded to the x86 architecture.

jimc...@gmail.com

unread,
3 Oct 2023, 01:33:03
to
On Sunday, September 24, 2023 at 7:10:45 AM UTC-7, John Dallman wrote:

> Bell regards competition from "zero cost" microprocessors such as the
> 8086 and 68000 as likely more significant than other minicomputer
> companies, but fails to make a plan to deal with them. DEC was eventually
> defeated by 80386 and later PCs and RISC workstations, and that failure
> seems to start here.

I think that's reading a lot into a summary strategic proposal written by an executive -- if he hadn't left due to his health, Bell may have been able to steer the company around a number of the missteps that came later.

> He assumes that DEC can dominate the market for
> terminals for its minis by using PDP-11 and VAX microprocessors, but
> doesn't seem to realise that compatible terminals can be built at much
> lower cost using third-party microprocessors.

Given that DEC used Zilog and Intel low-cost silicon in some of their own terminal and PC products, I think it's safe to say they were aware -- but that Bell was placing a bet that the capabilities of the PDP-11 and VAX could be brought into lower-cost, higher-volume microprocessors and make it possible to scale from terminals all the way up -- and retain vertical integration. That doc calls out that OEMs and silicon vendors were already clamoring for those investments. Bell founded DEC's VLSI business with all these factors in mind.

It's quite possible Gordon would have prevented a number of missteps like the heavy bet on ECL with the VAX 9000, and Olsen's decision to kill the OEM market for MicroVAX -- and perhaps bought the company time to pivot away from VAX to a more RISC-able architecture?

Bob Supnik's articles on LSI-11, MicroVAX, and beyond are illustrative of the opportunities the company had as well as the ways Ken Olsen and others squandered them. Bob's writings about DEC's semiconductor investments: http://simh.trailing-edge.com/dsarchive.html

Bob also gave a great interview to the Computer History Museum which talks about some of these topics: https://youtu.be/T3tcCBHRIfU?feature=shared



> The idea of running VMS on a terminal with a total of 64KB of RAM and ROM
> in 1982 seems implausible now, but it seems to have been the reason for
> 512-byte pages.

By 1979, Bell already had very senior engineers like Cutler and Hustvedt working on what became VAXELN and MicroVMS with that bet in mind.

> Bell praises the extremely compact VAX instruction set
> and its elaborate function calls, without appreciating the ways they will
> come to inhibit pipelining and out-of-order execution, and thus doom the
> architecture to uncompetitive performance.

At that stage I'm not sure anyone realized just how hard it would be for VAX to be micro-optimised.

jimc...@gmail.com
3 Oct 2023, 01:37:49
On Sunday, September 24, 2023 at 9:10:04 AM UTC-7, John Dallman wrote:
> Nor did they look at the history of the art of making computers
> faster. The VAX architecture was implemented readily enough at first, but
> made pipelining, out-of-order and other ideas that had been invented in
> the 1950s and 1960s hard to add.

I don't think it's reasonable to say they "didn't look at the history..." so much as made a set of architectural choices that later cornered them. The VAX roadmap had pipelining planned for the 8X series before 11/780 even shipped; I think it's more accurate to say they made a set of serious architectural mistakes because they were optimizing for tradeoffs that seemed reasonable at the time.

jimc...@gmail.com
3 Oct 2023, 01:40:17
On Sunday, September 24, 2023 at 11:04:53 AM UTC-7, John Dallman wrote:

> > If DEC had went after the low end market with the C-VAX, I really
> > feel that DEC would still be with us today.
> Maybe. The MS-DOS hardware and software industry was already very well
> established, and competition had driven hardware prices down a lot. DEC
> would have had to pick some niches to target and win several of them.

Agreed. By the time CVAX launched, the damage had already been done -- CVAX was not price-competitive with x86. Even Alpha wasn't really price-competitive with Pentium.

The opportunity (if it ever existed) was squandered much earlier, before the first MicroVAX parts shipped, when Olsen decided to kill the strategy to sell them to OEMs and drive volume. John Mashey describes a number of ways where even that strategy might have failed, but there was zero chance of pulling it off by 1987.

jimc...@gmail.com
3 Oct 2023, 01:43:19
On Tuesday, September 26, 2023 at 5:26:49 AM UTC-7, Neil Rieck wrote:

> The Coles Notes version of the story centers around missed opportunities, and mistakes, by DEC as the industry shifted from CISC to RISC. Then bad advice at the top decided to bet the farm on CISC in the form of the water-cooled Aquarius (VAX-9000).

The issue with VAX 9000 wasn't that it was an implementation of the VAX CISC architecture; the issue was chasing IBM with a massive ECL implementation of VAX, with all the associated costs in power and cooling and engineering required to address them. By the time VAX 9000 launched, CMOS VAX was already faster and dramatically cheaper.

gah4
3 Oct 2023, 01:55:49
On Monday, October 2, 2023 at 10:43:19 PM UTC-7, jimc...@gmail.com wrote:

(snip)

> The issue with VAX 9000 wasn't that it was an implementation of the VAX CISC architecture; the issue was chasing IBM with a massive ECL implementation of VAX, with all the associated costs in power and cooling and engineering required to address them. By the time VAX 9000 launched, CMOS VAX was already faster and dramatically cheaper.

This is what everyone had to figure out.

ECL (and STTL) have a different scaling law from MOS.

ECL and STTL stay at the same voltage, which is related to the
band gap, as transistors get smaller.

Well, you can do that with MOS, too, but Dennard scaling shrinks
the oxide along with the other dimensions. That reduces supply voltage,
and so power.

In 1978, CMOS was slower than TTL, and harder to build than NMOS.

So there was a transition point: once CMOS became faster and lower
power (and lower power density), the switch was made.

The other problem, at least for some years, with CMOS is
parasitic SCRs. The way the PN and NP junctions combine,
can lead to the configuration of an SCR. If you manage to
turn it on, it is a direct connection across the power supply,
sometimes destructively.

I don't remember the timeline for CMOS VAX, though.
(Even though I have a MicroVAX 2000 and 3100.)

Dan Cross
3 Oct 2023, 07:22:51
In article <b767aa9a-e341-426e...@googlegroups.com>,
I knew about the i860, but this is the first I've heard about
the DECstation port (I thought they used MIPS Magnums or
something?). What was the initial development platform,
though? Certainly once it was self-hosting it could be any
supported platform, but before that? I was under the
impression that it was mostly PCs running OS/2.

- Dan C.

Dave Froble
3 Oct 2023, 08:55:58
Well, yeah, almost every time, volume wins ...

DEC would have had to sell the C-VAX really cheap, perhaps at a loss for a
while. To have a market, one must first acquire that market.

--
David Froble Tel: 724-529-0450
Dave Froble Enterprises, Inc. E-Mail: da...@tsoft-inc.com
DFE Ultralights, Inc.
170 Grimplin Road
Vanderbilt, PA 15486

Johnny Billquist
3 Oct 2023, 09:39:47
On 2023-10-03 01:58, Scott Dorsey wrote:
> gah4 <ga...@u.washington.edu> wrote:
>> It is some years now, so I don't remember the details, but I am pretty
>> sure that there was one that worked even if you didn't have an account.
>
> Decnet to arpa? Sure, there were lots of them and none that I know
> required an account. It was just a polite service people provided.
> The best one was at Columbia which had really good connectivity (and also
> bitnet connectivity) so you could do "fredbox::fr...@columbia.edu" as I
> recall.

For mail, yes.

That still happens... Try sending to "pondus::bqt"@mim.stupi.net and
you'll reach me on my PDP-11/93 running RSX-11M-PLUS at home.
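As a sketch, that hybrid address is just the DECnet node::user pair quoted so it survives as an RFC 822 local-part on its way to the SMTP-to-DECnet gateway (the function name here is made up for illustration):

```python
def decnet_via_smtp(node, user, gateway):
    """Build an address that routes mail through an SMTP-to-DECnet
    gateway host: 'node::user' is not a legal bare local-part
    (because of the colons), so it must be a quoted string."""
    return '"%s::%s"@%s' % (node, user, gateway)

print(decnet_via_smtp("pondus", "bqt", "mim.stupi.net"))
# -> "pondus::bqt"@mim.stupi.net
```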

Johnny

Johnny Billquist
3 Oct 2023, 09:44:24
I would argue that ARPANET was the hosts and the services they provided.
Just because something else went over the same cables doesn't mean
anything meaningful.

>> Just because you had other protocols using the same underlying
>> infrastructure, does it mean they are part of the same network?
>> I would say not.
>
> This is arguing semantics to an extent, but to answer this
> question, I would describe such an arrangement as different
> applications of the underlying network.

I would disagree that it's semantics. If you took a computer that talked
TCP/IP and hooked it up to an IMP before flag day, you would be unable
to communicate with all the hosts on ARPANET, even if they were at the
other end of that IMP.

You could talk to other machines that talked TCP/IP, but to reach any
resources on what people referred to as ARPANET you would need a gateway
that translated your traffic, or content, to something that could go
over NCP. If there was no gateway, you were essentially isolated as your
own host, no matter how much of ARPANET was carried over the same IMP.

Johnny

Johnny Billquist
3 Oct 2023, 09:50:25
On 2023-10-03 07:55, gah4 wrote:
> On Monday, October 2, 2023 at 10:43:19 PM UTC-7, jimc...@gmail.com wrote:
>
> (snip)
>
>> The issue with VAX 9000 wasn't that it was an implementation of the VAX CISC architecture; the issue was chasing IBM with a massive ECL implementation of VAX, with all the associated costs in power and cooling and engineering required to address them. By the time VAX 9000 launched, CMOS VAX was already faster and dramatically cheaper.

More or less agree. It's not that the NVAX (the last CMOS VAX) was
faster; it was close to the same speed as the 9000 at a fraction of
the cost, power requirements and size. And it was later improved,
giving the NVAX+ and NVAX++, which were faster.

That in combination with the 9000 taking way longer than planned to get
to market meant that there was no business case for the 9000 when it
came out.

> In 1978, CMOS was slower than TTL, and harder to build than NMOS.
>
> So, there is a transition when CMOS gets faster, and lower power,
> (and power density) the transition was made.

I think the writing was already on the wall while the 9000 was being
developed. So it was just a question of time, and with the 9000 being
late, this became even more of an issue.

> The other problem, at least for some years, with CMOS is
> parasitic SCRs. The way the PN and NP junctions combine,
> can lead to the configuration of an SCR. If you manage to
> turn it in, it is a direct connection across the power supply,
> sometimes destructively.
>
> I don't remember the timeline for CMOS VAX, though.
> (Even though I have a MicroVAX 2000 and 3100.)

I think all single chip VAXen were CMOS. Not at all sure about the uVAX
I, but the II was, I think. Later ones definitely. Which are all before
the 9000.

The 9000 came out about the same time as the NVAX, which was the last
new VAX design in CMOS.

Johnny

Dan Cross
3 Oct 2023, 10:11:30
In article <ufh5rl$fi6$2...@news.misty.com>,
Johnny Billquist <b...@softjar.se> wrote:
>On 2023-10-02 17:40, Dan Cross wrote:
>>[snip]
>> The historical record shows that the players at the time meant
>> the network of IMPs and the hosts that connected to them. It
>> seems pretty clear that they didn't _just_ mean NCP.
>
>I would argue that ARPANET was the host and the services they provided.
>Just become something else went over the same cables don't mean anything
>meaningful.

I mean, you have the words of the people involved with respect
to what they meant. They clearly referred to IP going over "the
ARPANET" in IEN 28, among other contemporary accounts. We can
sit here, 40 years after the fact, and spitball about what they
_really_ meant or how they were wrong all we want, but we can
see directly what they were referring to.

>>> Just because you had other protocols using the same underlying
>>> infrastructure, does it mean they are part of the same network?
>>> I would say not.
>>
>> This is arguing semantics to an extent, but to answer this
>> question, I would describe such an arrangement as different
>> applications of the underlying network.
>
>I would disagree that it's semantics. If you took a computer that talked
>TCP/IP and hooked it up to an IMP before flag day, you would be unable
>to communcate with all the hosts on ARPANET, even if they were at the
>other end of that IMP.
>
>You could talk to other machines that talked TCP/IP, but to reach any
>resources on what people referred to as ARPANET you would need a gateway
>that translated your traffic, or content, to something that could go
>over NCP. If there was no gateway, you were essentially isolated as your
>own host, no matter how much of ARPANET was carried over the same IMP.

We have IPv4-only hosts on the Internet today that cannot
communicate with IPv6 hosts unless through a gateway of some
kind; would you argue that IPv6-only hosts are therefore not
"on the Internet"?

There were machines on the ARPANET before NCP was invented;
presumably some didn't even speak NCP after it was invented.
Were the first machines on the ARPANET therefore not on the
ARPANET because they didn't speak NCP?

- Dan C.

Johnny Billquist
3 Oct 2023, 10:31:13
On 2023-10-03 16:11, Dan Cross wrote:
> In article <ufh5rl$fi6$2...@news.misty.com>,
> Johnny Billquist <b...@softjar.se> wrote:
>> On 2023-10-02 17:40, Dan Cross wrote:
>>> [snip]
>>> The historical record shows that the players at the time meant
>>> the network of IMPs and the hosts that connected to them. It
>>> seems pretty clear that they didn't _just_ mean NCP.
>>
>> I would argue that ARPANET was the host and the services they provided.
>> Just become something else went over the same cables don't mean anything
>> meaningful.
>
> I mean, you have the words of the people involved with respect
> to what they meant. They clearly referred to IP going over "the
> ARPANET" in IEN 28, among other contemporary accounts. We can
> sit here, 40 years after the fact, and spitball about what they
> _really_ meant or how they were wrong all we want, but we can
> see directly what they were referring to.

That document talks about a theoretical ARPANET running TCP. Which you
could argue is what happened after flag day.

And the addressing scheme/ideas in that document is also an interesting
read. It's obviously different than what eventually was defined in IP.

So is this document relevant to bring up here? It's not something that
ever actually existed, but was the start of the process that eventually
led to the switch at flag day to TCP/IP.

>>>> Just because you had other protocols using the same underlying
>>>> infrastructure, does it mean they are part of the same network?
>>>> I would say not.
>>>
>>> This is arguing semantics to an extent, but to answer this
>>> question, I would describe such an arrangement as different
>>> applications of the underlying network.
>>
>> I would disagree that it's semantics. If you took a computer that talked
>> TCP/IP and hooked it up to an IMP before flag day, you would be unable
>> to communcate with all the hosts on ARPANET, even if they were at the
>> other end of that IMP.
>>
>> You could talk to other machines that talked TCP/IP, but to reach any
>> resources on what people referred to as ARPANET you would need a gateway
>> that translated your traffic, or content, to something that could go
>> over NCP. If there was no gateway, you were essentially isolated as your
>> own host, no matter how much of ARPANET was carried over the same IMP.
>
> We have IPv4-only hosts on the Internet today that cannot
> communicate with IPv6 hosts unless through a gateway of some
> kind; would you argue that IPv6-only hosts are therefore not
> "on the Internet"?

Well. At the moment, IPv6 only hosts don't really exist yet, but the
time might (will?) come. Eventually, I expect IPv4 to be phased out, at
which point an IPv4-only host will not be on the Internet anymore.
But in a sense yes, we're sort of getting to a dual-protocol Internet at
the moment. Fallback for most anyone/anything is still IPv4.
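That fallback can be sketched in a few lines (a rough illustration of dual-stack behavior, not how any particular resolver orders things; the function names are mine):

```python
import socket

def order_dual_stack(infos):
    """Put IPv6 address-info tuples first, so IPv4 is the fallback."""
    return sorted(infos, key=lambda ai: ai[0] != socket.AF_INET6)

def connect_prefer_v6(host, port):
    """Try each resolved address in turn: IPv6 first, then IPv4."""
    last_err = None
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    for family, type_, proto, _, addr in order_dual_stack(infos):
        s = socket.socket(family, type_, proto)
        try:
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err
            s.close()
    raise last_err or OSError("no usable address")
```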

> There were machines on the ARPANET before NCP was invented;
> presumably some didn't even speak NCP after it was invented.
> Were the first machines on the ARPANET therefore not on the
> ARPANET because they didn't speak NCP?

If ARPANET was talking some other protocol before NCP, then obviously
that was the protocol you needed to talk to be on ARPANET, not NCP.
(I honestly don't know if there was something before NCP.)

Johnny

Dan Cross
3 Oct 2023, 11:45:12
In article <ufh8jd$fi6$5...@news.misty.com>,
Johnny Billquist <b...@softjar.se> wrote:
>On 2023-10-03 16:11, Dan Cross wrote:
>> In article <ufh5rl$fi6$2...@news.misty.com>,
>> Johnny Billquist <b...@softjar.se> wrote:
>>> On 2023-10-02 17:40, Dan Cross wrote:
>>>> [snip]
>>>> The historical record shows that the players at the time meant
>>>> the network of IMPs and the hosts that connected to them. It
>>>> seems pretty clear that they didn't _just_ mean NCP.
>>>
>>> I would argue that ARPANET was the host and the services they provided.
>>> Just become something else went over the same cables don't mean anything
>>> meaningful.
>>
>> I mean, you have the words of the people involved with respect
>> to what they meant. They clearly referred to IP going over "the
>> ARPANET" in IEN 28, among other contemporary accounts. We can
>> sit here, 40 years after the fact, and spitball about what they
>> _really_ meant or how they were wrong all we want, but we can
>> see directly what they were referring to.
>
>That document talks about a theoretical ARPANET running TCP. Which you
>could argue is what happened after flag day.

No...It talks about sending "Internet Protocol" "segments" over
the ARPANET using the 1822 protocol. It says it right there on
the tin.

Note that this is not just TCP; this is actually IP. IEN 2
suggested layering into IP and TCP:
https://www.rfc-editor.org/ien/ien2.txt

>And the addressing scheme/ideas in that document is also an interesting
>read. It's obviously different than what eventually was defined in IP.

Well, yes: this was IPv2, which was an experimental version. So
what?

>So is this document relevant to bring up here? It's not something that
>ever actually existed, but was the start of the process that eventually
>led to the switch at flag day to TCP/IP.

Well, the part that I quoted talked about sending IP datagrams
over the ARPANET by wrapping them in 1822 frames and sending them
to an IMP. I'd say that's relevant with respect to exploring
what the authors of the early IP drafts were thinking: they had
a network, that network (which they called the "ARPANET") could
talk NCP, but they also obviously felt that they could make it
talk IP/TCP as well.

>>>>> Just because you had other protocols using the same underlying
>>>>> infrastructure, does it mean they are part of the same network?
>>>>> I would say not.
>>>>
>>>> This is arguing semantics to an extent, but to answer this
>>>> question, I would describe such an arrangement as different
>>>> applications of the underlying network.
>>>
>>> I would disagree that it's semantics. If you took a computer that talked
>>> TCP/IP and hooked it up to an IMP before flag day, you would be unable
>>> to communcate with all the hosts on ARPANET, even if they were at the
>>> other end of that IMP.
>>>
>>> You could talk to other machines that talked TCP/IP, but to reach any
>>> resources on what people referred to as ARPANET you would need a gateway
>>> that translated your traffic, or content, to something that could go
>>> over NCP. If there was no gateway, you were essentially isolated as your
>>> own host, no matter how much of ARPANET was carried over the same IMP.
>>
>> We have IPv4-only hosts on the Internet today that cannot
>> communicate with IPv6 hosts unless through a gateway of some
>> kind; would you argue that IPv6-only hosts are therefore not
>> "on the Internet"?
>
>Well. At the moment, IPv6 only hosts don't really exist yet, but the
>time might (will?) come.

Um, sure they do. Plenty of IoT and embedded devices have
skipped v4 entirely.

>Eventually, I expect IPv4 to be phased out, at
>which point an IPv4 only host will not be on the Inetnet anymore.
>But in a sense yes, we're sort of getting to a dual-protocol Internet at
>the moment. Fallback for most anyone/anything is still IPv4.

Ah, but both are called the "Internet"? Noted. :-)

>> There were machines on the ARPANET before NCP was invented;
>> presumably some didn't even speak NCP after it was invented.
>> Were the first machines on the ARPANET therefore not on the
>> ARPANET because they didn't speak NCP?
>
>If ARPANET was talking some other protocol before NCP, then obviously
>that was the protocol you needed to talk to be on ARPANET, not NCP.
>(I honestly don't know if there was something before NCP.)

This is easily discoverable. In addition to my note earlier in
this thread about the Host<->protocol known as "1822"
(https://groups.google.com/g/comp.os.vms/c/aX_f3g9O9jo/m/HMYxbVsRAgAJ),
one can simply look at the relevant RFCs: RFC 33 describes NCP:
https://datatracker.ietf.org/doc/html/rfc33

RFC11 describes the earlier host-host protocol:
https://datatracker.ietf.org/doc/html/rfc11
(which in turn refers to BBN report 1822)

Anyway, it seems clear from the historical record that the
people working on TCP/IP thought of the ARPANET as somehow
distinct from just hosts using NCP. You may chose to disagree,
but I don't see any evidence that that's how any of the players
at the time thought of it, and indeed, I see evidence to the
contrary.

- Dan C.

Arne Vajhøj
3 Oct 2023, 16:18:22
It is a fact that "price per VUPS" was very high for the 9000
compared to smaller VAX'es.

But was it intended to compete on "price per VUPS"?

I would have thought that it was intended to compete on:
* max CPU in a single box
* max RAM in a single box
* max IO capacity in a single box

And just maybe its fast demise was also due to the fact that the
mainframe market was moving to a single architecture (IBM mainframe
with IBM, Amdahl and Hitachi as vendors).

Arne


Johnny Billquist
3 Oct 2023, 16:57:41
Yes. And to quote that document:

An analogy may be drawn between the internet situation and the
ARPANET. The endpoints of message transmissions are hosts in both
cases, and they exchange messages conforming to a host to host
protocol. In the ARPA subnet there is a IMP to IMP protocol that is
primarily a hop by hop protocol, to parallel this the internet system
should have a hop by hop internet protocol. In the ARPANET a host and
an IMP interact through an inteface, commonly called 1822, which
specifies the format of messages crossing the boundary, an equivalent
interface in needed in the internet system.

Internet != ARPANET. IMPs are dealing with hop by hop communication.
Host protocol (NCP in this case) is dealing with host to host
communication, which in the internet case is TCP in development.

Anyway - I think we've beaten this horse to death, and I have a feeling
neither of us will convince the other. And that means further
discussions will only be more noise for others.

Feel free to comment and get last words in. I'll try to stop from my side.

Johnny

Arne Vajhøj
3 Oct 2023, 19:23:11
On 10/3/2023 7:22 AM, Dan Cross wrote:
> In article <b767aa9a-e341-426e...@googlegroups.com>,
> jimc...@gmail.com <jimc...@gmail.com> wrote:
>> On Monday, September 25, 2023 at 5:15:50 PM UTC-7, Dan Cross wrote:
>>> The early days of Windows NT are well-documented in the book,
>>> "Show Stopper!" by G. Pascal Zachary. In short, they used OS/2
>>> and 386 machines; NT was self-hosting within a couple of years.
>>
>> Zachary wasn't very technical and made a number of mistakes in that book, although overall his story is well-researched and compelling. NT
>> was originally brought up on single-board Intel i860 hardware, followed by MIPS DECstations and then i386 hardware; Cutler insisted that the
>> team not focus on i386 because he wanted to keep NT from becoming wedded to the x86 architecture.
>
> I knew about the i860, but this is the first I've heard about
> the DECstation port (I thought they used MIPS Magnums or
> something?).

Most sources talk about just the CPU: MIPS R3000.

Per:

https://en.wikipedia.org/wiki/DECstation
https://en.wikipedia.org/wiki/MIPS_Magnum
https://en.wikipedia.org/wiki/Jazz_(computer)
https://www.linux-mips.org/wiki/Jazz

then:
- MIPS had a Magnum R3000 system with R3000 CPU
and TurboChannel bus
- DEC had a DECstation 5000 system with R3000 CPU
and TurboChannel bus
- MS developed their own Jazz system with R3000 CPU
and EISA bus for use by Windows NT
- MS sold Jazz to MIPS, which turned it into the
Magnum R4000 with R4000 CPU and EISA bus (and was
big-endian, unlike MS's Jazz, which was little-endian)

Arne


Johnny Billquist
4 Oct 2023, 06:41:01
Yes.

> But was it intended to compete on "price per VUPS"?

Well. The problem was that when the 9000 finally did come out, it was
not competitive from any perspective.
It was way more expensive than a 7000. It was way larger than a 7000. It
was way costlier to run than a 7000. It had roughly the same performance
as a 7000. The 9000 started shipping in 1991, while the 7000 shipped in
1992.
Why would anyone buy the 9000? There was just a small window left before
the NVAX based 7000 came. And everyone knew that was coming. NVAX based
machines as such started shipping in 1991 as well.

> I would have thought that it was intended to compete on:
> * max CPU in a single box

NVAX does it better.

> * max RAM in a single box

NVAX does it better.

> * max IO capacity in a single box

In a single box, I think they come out even. The 9000 had massively more
I/O capacity, if you look at the full system. But that's a lot of boxes.

And DEC was also pushing for clusters, and had been for quite a while,
where the capacity of a single machine wasn't the main point.

> And just maybe its fast demise was also due to the fact that the
> mainframe market was moving to a single architecture (IBM mainframe
> with IBM, Amdahl and Hitachi as vendors).

Possibly, but I wouldn't think so. There were plenty of DEC customers
looking for faster VAXen. And the VAX market was still strong at that
time, although the Alpha was just about coming in as well.

Johnny

Jan-Erik Söderholm
4 Oct 2023, 08:49:44
If I'm not completely wrong, the Swedish weather service (SMHI) had a
VAX 9000 once. I seem to remember seeing it behind bars and fences on a
visit to the SMHI HQ in Norrköping, Sweden. No one but DEC people was
allowed to touch it. This was around the same time as a scandal with
an 11/782 (I think) that was going to be shipped to the Soviet Union.

I also found this note from 1994:
"1994-01-01, Over the last four years, CERN has progressively converted its
central batch production facilities from classic mainframe platforms (Cray
XMP, IBM, ESA, Vax 9000) to distributed RISC based facilities..."



Johnny Billquist
4 Oct 2023, 10:50:00
On 2023-10-04 14:49, Jan-Erik Söderholm wrote:
> If I'm not completaly wrong, the Swedish weather service (SMHI) had one
> VAX 9000 once. I seam to remember seeing it behind bars and fences at a
> visit to the SMHI HQ in Norrköping/Sweden. No one but DEC people was
> allowed to touch it. This was around the same time as a scandal with
> a 11/782 (I think) that was going to ship to Soviet.

I'm old enough to remember the VAX-11/782 scandal in fairly vivid
detail. And it was long before the 9000 was even thought of. (A good
summary exists on the Swedish wikipedia page:
https://sv.wikipedia.org/wiki/Containeraff%C3%A4ren. Google can probably
do a good translation if anyone wants the details. But this was back in
1983.)

But I could possibly see that SMHI could have had one. They had big
computing requirements. I do know that SAAB in Linköping had one. I know
a guy who worked there at the time and was at least somewhat responsible.
But the machine was gone before I got to know him. But he had kept some
memorabilia, and so I actually have a module from a VAX 9000 at home,
which came from the one at SAAB.

> I also found this note from 1994:
> "1994-01-01, Over the last four years, CERN has progressively converted
> its central batch production facilities from classic mainframe platforms
> (Cray XMP, IBM, ESA, Vax 9000) to distributed RISC based facilities..."

Interesting. Well, I guess it sort of makes sense that CERN would also be
a user, since they also had extreme computation needs, and money was
less of an issue.

Johnny

Jan-Erik Söderholm
4 Oct 2023, 11:11:36
Den 2023-10-04 kl. 16:49, skrev Johnny Billquist:
> On 2023-10-04 14:49, Jan-Erik Söderholm wrote:
>> If I'm not completaly wrong, the Swedish weather service (SMHI) had one
>> VAX 9000 once. I seam to remember seeing it behind bars and fences at a
>> visit to the SMHI HQ in Norrköping/Sweden. No one but DEC people was
>> allowed to touch it. This was around the same time as a scandal with
>> a 11/782 (I think) that was going to ship to Soviet.
>
> I'm old enough to remember the VAX-11/782 scandal in fairly vivid detail.
> And it was long before the 9000 was even thought of. (A good summary exists
> on the Swedish wikipedia page:
> https://sv.wikipedia.org/wiki/Containeraff%C3%A4ren. Google can probably do
> a good translation if anyone wants the details. But this was back in 1983.)
>
> But I could possibly see that SMHI could have had one. They had big
> computing requirements. I do know that SAAB in Linköping had one. I know a
> guy who worked there at the time and was atleast somewhat responsible. But
> the machine was gone before I got to know him. But he had kept some
> memorabilia, and so I actually have a module from a VAX 9000 at home, which
> came from the one at SAAB.
>
>   Johnny
>

As was described at the SMHI visit, their VAX systems were used as a kind
of "front end" to the Cray-1 system(s) at the centre in Reading, UK,
for weather calculations.


Johnny Billquist
4 Oct 2023, 12:47:39
That makes it sound unlikely that it would have been VAX 9000 systems.

I'm trying to remember about VAX as front-end to Cray-1. And my brain
keeps trying to say something like VAX-11/780, which would also be more
aligned in time with the VAX-11/782 scandal. So I think you just got the
9000 mixed in there by mistake, and the rest probably sums up pretty
well. It could even have been a VAX-8600, which was the top dog in 1984.

Johnny

Arne Vajhøj
4 Oct 2023, 17:00:01
I think 9000 started shipping in 1990.

> Why would anyone buy the 9000? There was just a small window left before
> the NVAX based 7000 came. And everyone knew that was coming. NVAX based
> machines as such started shipping in 1991 as well.
>
>> I would have thought that it was intended to compete on:
>> * max CPU in a single box
>
> NVAX does it better.
>
>> * max RAM in a single box
>
> NVAX does it better.
>
>> * max IO capacity in a single box
>
> In a single box, I think they come out even. The 9000 had massively more
> I/O capacity, if you look at the full system. But that's a lot of boxes.

As I recall it then the 6000 was a single cabinet wide thing, while
a 9000 (at least in large config - 400??) was a 3 cabinet wide thing.

> And DEC was also pushing for clusters, and had been for quite a while,
> where the capacity of a single machine wasn't the main point.

In the VAX market.

The mainframe market was still single machine oriented.

>> And just maybe its fast demise was also due to the fact that the
>> mainframe market was moving to a single architecture (IBM mainframe
>> with IBM, Amdahl and Hitachi as vendors).
>
> Possibly, but I wouldn't think so. There were plenty of DEC customers
> looking for faster VAXen. And the VAX market was still strong at that
> time, athough the Alpha was just about coming in as well.

I am not sure that we really disagree so much.

The 9000 did not have a market.

In the "super-super mini-computer" market it was too expensive.
You could buy a handful of 6000's or a huge number of MicroVAX'es
for the price of a 9000. A 9000 was simply not cost efficient
in that market.

In the "mainframe" market (it was branded as an IBM mainframe
killer so that market must have been considered relevant) the
time of the non-IBM-compatible mainframe was over.

And in the "supercomputer" market, one could buy the vector
bolt-on for the 9000 (and I suspect some did - the Swedish weather
service 9000 that Jan-Erik remembers probably had it), but it was still
a very expensive system - and the 6000 could also get the vector
bolt-on.

Arne


Scott Dorsey

4 Oct 2023, 17:00:17
Excellent! Do you gateway to bitnet too?

For remote login, there were lots of computers out there that ran both
tcp/ip and decnet... these included lots of vaxen running vms and ultrix
but also included Sun systems and others. Get an account on any machine
with both of them (or use the free guest account at ai.mit.edu) and you
can telnet into one machine and set host out of it to another.

Johnny Billquist

4 Oct 2023, 19:21:43
On 2023-10-04 23:00, Scott Dorsey wrote:
> Johnny Billquist <b...@softjar.se> wrote:
>> On 2023-10-03 01:58, Scott Dorsey wrote:
>>> gah4 <ga...@u.washington.edu> wrote:
>>>> It is some years now, so I don't remember the details, but I am pretty
>>>> sure that there was one that worked even if you didn't have an account.
>>>
>>> Decnet to arpa? Sure, there were lots of them and none that I know
>>> required an account. It was just a polite service people provided.
>>> The best one was at Columbia which had really good connectivity (and also
>>> bitnet connectivity) so you could do "fredbox::fr...@columbia.edu" as I
>>> recall.
>>
>> For mail, yes.
>>
>> That still happens... Try sending to "pondus::bqt"@mim.stupi.net and
>> you'll reach me on my PDP-11/93 running RSX-11M-PLUS at home.
>
> Excellent! Do you gateway to bitnet too?

Nope. Never did bitnet, and have basically zero knowledge about it,
apart from knowing it existed.

> For remote login, there were lots of computers out there that ran both
> tcp/ip and decnet... these included lots of vaxen running vms and ultrix
> but also included Sun systems and others. Get an account on any machine
> with both of them (or use the free guest account at ai.mit.edu) and you
> can telnet into one machine and set host out of it to another.

Yeah. That's what you normally would have to do. No direct translation
between the protocols.
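The quoted-local-part convention used above ("node::user" wrapped in
double quotes, at a gateway host) can be sketched in a couple of lines.
A minimal, hypothetical helper - the function name is made up, but the
address format is the one shown in this thread:

```python
def decnet_gateway_address(node: str, user: str, gateway: str) -> str:
    """Build an SMTP address that routes to a DECnet node via a gateway.

    The DECnet part (NODE::USER) is wrapped in double quotes so the
    "::" travels as an opaque local-part on the Internet side; the
    gateway host then unwraps it and forwards the mail over DECnet.
    """
    return f'"{node}::{user}"@{gateway}'

# The address from the thread:
print(decnet_gateway_address("pondus", "bqt", "mim.stupi.net"))
# "pondus::bqt"@mim.stupi.net
```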

Johnny

Johnny Billquist

4 Oct 2023, 19:36:31
On 2023-10-04 22:59, Arne Vajhøj wrote:
> On 10/4/2023 6:40 AM, Johnny Billquist wrote:
>> On 2023-10-03 22:18, Arne Vajhøj wrote:
>>> On 10/3/2023 9:50 AM, Johnny Billquist wrote:
>>>> The 9000 came out about the same time as the NVAX, which was the
>>>> last new VAX design in CMOS.
>>>
>>> It is a fact that "price per VUPS" was very high for the 9000
>>> compared to smaller VAX'es.
>>
>> Yes.
>>
>>> But was it intended to compete on "price per VUPS"?
>>
>> Well. The problem was that when the 9000 finally did come out, it was
>> not competitive from any perspective.
>> It was way more expensive than a 7000. It was way larger than a 7000.
>> It was way costlier to run than a 7000. It had close to similar
>> performance to a 7000. The 9000 started shipping in 1991, while the
>> 7000 shipped in 1992.
>
> I think 9000 started shipping in 1990.

That was the original plan (or even 89), but they got delayed. Which was
part of the problem with the 9000. It was way expensive, and had serious
problems, and got delayed. I think initial deliveries were close to end
of 91, and it still had issues. According to Wikipedia a few systems
were shipped in 90, but they had issues.
(https://en.wikipedia.org/wiki/VAX_9000)

It was perhaps a hard sell even in the original plan, but with the
delays added, any window of opportunity was basically lost. The market
was definitely not there anymore when the systems finally were shipped.

>>> * max IO capacity in a single box
>>
>> In a single box, I think they come out even. The 9000 had massively
>> more I/O capacity, if you look at the full system. But that's a lot of
>> boxes.
>
> As I recall, the 6000 was a single-cabinet-wide thing, while
> a 9000 (at least in a large config - 400??) was a three-cabinet-wide thing.

The 9000 was, as far as I can recall, two double cabs and one single.
Maybe it was possible to get some smaller configs, but that compared to
the 6000 or 7000 at a single cab is quite a difference.

>> And DEC was also pushing for clusters, and had been for quite a while,
>> where the capacity of a single machine wasn't the main point.
>
> In the VAX market.
>
> The mainframe market was still single machine oriented.

True.

>>> And just maybe its fast demise was also due to the fact that the
>>> mainframe market was moving to a single architecture (IBM mainframe
>>> with IBM, Amdahl and Hitachi as vendors).
>>
>> Possibly, but I wouldn't think so. There were plenty of DEC customers
>> looking for faster VAXen. And the VAX market was still strong at that
>> time, although the Alpha was just about coming in as well.
>
> I am not sure that we really disagree so much.

I'm not sure we disagree either. Mostly getting through the finer points
of the whole thing.

> The 9000 did not have a market.
>
> In the "super-super mini-computer" market it was too expensive.
> You could buy a handful of 6000s or a huge number of MicroVAXes
> for the price of a 9000. A 9000 was simply not cost-efficient
> in that market.

Yes.

> In the "mainframe" market (it was branded as an IBM mainframe
> killer so that market must have been considered relevant) the
> time of the non-IBM-compatible mainframe was over.

Probably. But even so, the 9000 just wasn't the right choice for DEC.
But I guess in a way it was symptomatic of the whole IBM-ification of
DEC that started in the late 80s.

> And in the "supercomputer" market, one could buy the vector
> bolt-on for the 9000 (and I suspect some did - the Swedish weather
> service 9000 that Jan-Erik remembers probably had it), but it was still
> a very expensive system - and the 6000 could also get the vector
> bolt-on.

Yup. But I don't think the vector option was ever very relevant. As far
as I can remember, it wasn't even available on the 6000-500 and
6000-600. Only the 6000-400, which suggests that there wasn't any
demand. The 7000 never had it.

Johnny

Arne Vajhøj

4 Oct 2023, 20:11:51
On 10/4/2023 7:36 PM, Johnny Billquist wrote:
> On 2023-10-04 22:59, Arne Vajhøj wrote:
>> On 10/4/2023 6:40 AM, Johnny Billquist wrote:
>>> Well. The problem was that when the 9000 finally did come out, it was
>>> not competitive from any perspective.
>>> It was way more expensive than a 7000. It was way larger than a 7000.
>>> It was way costlier to run than a 7000. It had close to similar
>>> performance to a 7000. The 9000 started shipping in 1991, while the
>>> 7000 shipped in 1992.
>>
>> I think 9000 started shipping in 1990.
>
> That was the original plan (or even 89), but they got delayed. Which was
> part of the problem with the 9000. It was way expensive, and had serious
> problems, and got delayed. I think initial deliveries were close to end
> of 91, and it still had issues. According to Wikipedia a few systems
> were shipped in 90, but they had issues.
> (https://en.wikipedia.org/wiki/VAX_9000)

Usually shipping is considered shipping - problems or no problems.

>> In the "mainframe" market (it was branded as an IBM mainframe
>> killer so that market must have been considered relevant) the
>> time of the non-IBM-compatible mainframe was over.
>
> Probably. But even so, the 9000 just wasn't the right choice for DEC.
> But I guess in a way it was symptomatic of the whole IBM-ification of
> DEC that started in the late 80s.

When you are number two you want to be number one.

>> And in the "supercomputer" market, one could buy the vector
>> bolt-on for the 9000 (and I suspect some did - the Swedish weather
>> service 9000 that Jan-Erik remembers probably had it), but it was still
>> a very expensive system - and the 6000 could also get the vector
>> bolt-on.
>
> Yup. But I don't think the vector option was ever very relevant. As far
> as I can remember, it wasn't even available on the 6000-500 and
> 6000-600. Only the 6000-400, which suggests that there wasn't any
> demand. The 7000 never had it.

I think that is correct - only -400 and -500 had it.

VMS dropped out of the scientific computing market.

And soon after, the entire vector thing went away in scientific
computing for a few decades, until it came back in the form of GPUs.

Arne



bill

4 Oct 2023, 20:24:20
I always find it funny when I see this comment.
Unisys (formerly UNIVAC) is doing just fine with their 2200, which
is the follow-on from, and compatible with, the old 1100. As a
matter of fact, two of the largest ISes in use today run on
Unisys after running for more than a decade on the 1100.

bill


Arne Vajhøj

4 Oct 2023, 20:32:52
It probably did not help either that the vector bolt-on took
space away from CPUs.

Max was:
* 6 CPU + 0 vector
* 4 CPU + 1 vector
* 2 CPU + 2 vector

Arne

