PDP-11/74


Antoni Villalonga

Feb 9, 2020, 12:16:16 PM
to [PiDP-11]
Hi,

Are there specs for the /74 available?
It seems simh doesn't support that interesting multiprocessor version.

http://www.bitsavers.org/pdf/dec/pdp11/1174/1170mP_Jul77.pdf
https://gunkies.org/wiki/PDP-11/74
http://www.dbit.com/pub/pdp11/faq/faq.pages/never11s.html

Regards,

--

Antoni Villalonga i Noceras
#Bloc# ~> http://friki.cat/
#Twitter# ~> @friki

Jonathan Morton

Feb 9, 2020, 12:18:37 PM
to Antoni Villalonga, [PiDP-11]
> On 9 Feb, 2020, at 7:16 pm, Antoni Villalonga <friki...@gmail.com> wrote:
>
> There are specs for /74 available?
> It seems Simh doesn't support that interesting multi-processor version.

I understand this was a machine that was prototyped but never actually sold. Honestly, it must have been a bit of a nightmare.

- Jonathan Morton

Johnny Billquist

Feb 9, 2020, 12:18:57 PM
to Antoni Villalonga, [PiDP-11]
Hi.

On 2020-02-09 18:16, Antoni Villalonga wrote:
> Hi,
>
> There are specs for /74 available?

Yes.
simh does not support multiprocessors in general. I believe there are
internal details in simh which make this hard to implement.

The 11/74 is otherwise an interesting machine. E11 has experimental
support for it, but there are a number of tricky issues that are hard to
get right. E11 works, but is not perfect. It would be a very complex
task to implement this in simh, even disregarding any limitations in
the simh design itself.

Johnny

--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: b...@softjar.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol

Johnny Billquist

Feb 9, 2020, 12:21:23 PM
to Jonathan Morton, Antoni Villalonga, [PiDP-11]
Sortof correct. None were sold. Multiple were manufactured. Some even
went to customers on field trial, but were returned at the end of it.

DEC kept running them internally, and as far as I know, the last one was
shut down in 2001 or so.

Nothing nightmarish about it at all. I have a simulated one running.
That is Mim.Update.UU.SE. Just telnet to it, and log in.
Anyone who is familiar with RSX might find the memory page of RMD a bit
"special". Or just try the task MPD.

Garry Lockyer

Feb 9, 2020, 12:36:32 PM
to Jonathan Morton, Antoni Villalonga, [PiDP-11]
It was quite a mass of cables!

One of the very few 11/74s that escaped Engineering went to Alberta Government Telephones (AGT, now TELUS), but it was returned to DEC and became an in-house system (at CGO, in Calgary).

I'm not sure how many CPU nodes it had at AGT. When I saw it, there were only 2, and I split that into 2 independent systems, which we moved one at a time when we moved from an office in south Calgary to one in north Calgary.

Regards,

Garry Lockyer
C: +1.250.689.0686
E: Ga...@Lockyer.ca



Johnny Billquist

Feb 9, 2020, 12:52:14 PM
to Garry Lockyer, Jonathan Morton, Antoni Villalonga, [PiDP-11]
Dave Carroll, who was the last man standing over the internal systems,
posted a bunch of photos a number of years ago. It seems they might no
longer be directly accessible, but people made copies.
Here is one site with pictures from the last system:

https://oboguev.livejournal.com/2696291.html

Yes, there were some cables, and you can even see them in some pictures.
The majority was because of the memory boxes. The MK11 memory has four
incoming flat cables, and four outgoing. That is for a normal 11/70.
With the 11/74, each CPU has such a set of cables for its memory bus,
which means that the MK11 will have 16 incoming and 16 outgoing flat
cables each.
But apart from that, it's not much different from a normal 11/70.

If people want to, we can dive into more technical details of these
systems. They are *very* cool. :-)

Johnny

Antoni Villalonga

Feb 9, 2020, 1:46:10 PM
to Johnny Billquist, Garry Lockyer, Jonathan Morton, [PiDP-11]
On 2020-02-09 18:36, Garry Lockyer wrote:
> It was quite a mass of cables!

Lucky you! :D

On Sun, Feb 9, 2020 at 6:52 PM Johnny Billquist wrote:
> But apart from that, it's not much different than a normal 11/70.

I couldn't find the handbook for that version.

From a developer's point of view, there are more changes than new
instructions: ASRB locks and IIST magic?
I'd love to see some more docs about them.

I'm trying to figure out what they do and how a time-sharing system
should use them.
Also, what is the limit on the number of processors? Might a
virtualized /74 work with 8/16/32 ALUs?

John Forecast

Feb 9, 2020, 1:47:40 PM
to [PiDP-11]
I was the project leader for DECnet-11M+ (Phase III) and was responsible for the multi-processor support. In general, most user-mode code ran the same as on RSX-11M. Kernel-mode code on the other hand could require additional support. The caches on each processor are not coherent so you either have to flush the cache at various points or run uncached by setting a bit in a PDR. The ASRB instruction (normally not used very frequently) was modified to always bypass the cache and was used for MP locking.

I initially got everything running on our 11/70 and then managed to book an hour of time on CASTOR:: (the RSX development group's quad machine). I booted my image, started DECnet, and about 10 secs later the machine room went quiet and all I could hear was each of the 4 console LA36's in sequence printing out XDT>. I had forgotten to flush the cache in a device driver and all 4 processors were trying to use the same buffer for I/O. A couple of weeks later we received our own dual-processor system (ELROND::) and I was able to get everything running.

My recollection from some conversations with our support engineer was that the cables into and out of the MK11's were a source of system instability and were the primary reason for the system being cancelled.

  John.



John Forecast

Feb 9, 2020, 3:17:40 PM
to [PiDP-11]


On Sunday, February 9, 2020 at 1:46:10 PM UTC-5, Antoni Villalonga wrote:
> On 2020-02-09 18:36, Garry Lockyer wrote:
> > It was quite a mass of cables!
>
> Lucky you! :D
>
> On Sun, Feb 9, 2020 at 6:52 PM Johnny Billquist wrote:
> > But apart from that, it's not much different than a normal 11/70.
>
> I didn't found the handbook for that version.

The 11/74 handbook was planned but I don't believe it was ever published.


> From a developer point of view, there are more changes than new
> instructions: ASBR locks and IIST magic?
> I'll love to see some more docs about them.


There were no new instructions although the final 11/74 was expected to include CIS (Commercial Instruction Set). In addition to ASRB and IIST, there were changes to:

    cache - for flushing and temporary bypass
    pdr   - cache bypass for a specific page
    map   - add cache bypass to unibus map entries

> I'm trying to figure out what they do and how should a time-sharing
> system use them.

Some of the MP-specific bits in the cache and pdr descriptors "leaked out" in the 11/44 documents. The full source code for the RSX-11M+ kernel is available (including the MP support code) on any M+ distribution as a final definition of the system interfaces.
 
> Also where is the limit on number of processors. May work a
> virtualized /74 with 8/16/32 ALUs?.


On the physical machine, the memory boxes and IIST were limited to a max of 4 processors. RSX-11M+ also assumes a max of 4 processors. In a virtualized environment, if you work around the above limitations with your own design, I don't think there are any other limitations, although at some point spin-locks will limit any performance improvement from adding additional processors.

  John.

Johnny Billquist

Feb 9, 2020, 3:56:53 PM
to John Forecast, [PiDP-11]
Ok. More talk on the 11/74 then... Fun.
And I don't have John's first-hand experience of working on a real
machine, so I'll happily defer to him if we disagree on anything.

That said, I have certainly had some talks with Dave Carroll on the
topic, worked together with John Wilson on the E11 support for the
machine, and have had an emulated system running 24/7. I have also had
to write some software that really dives into the kernel, for which
proper MP handling is essential. And I don't know how many times I
crashed RSX while getting TCP/IP and assorted services working
correctly.

On 2020-02-09 21:17, John Forecast wrote:
>
>
> On Sunday, February 9, 2020 at 1:46:10 PM UTC-5, Antoni Villalonga wrote:
>
> On 2020-02-09 18:36, Garry Lockyer wrote:
> > It was quite a mass of cables!
>
> Lucky you! :D
>
> On Sun, Feb 9, 2020 at 6:52 PM Johnny Billquist wrote:
> > But apart from that, it's not much different than a normal 11/70.
>
> I didn't found the handbook for that version.
>
>
>  The 11/74 handbook was planned but I don't believe it was ever published.

I never saw one. But on the other hand, with just a couple of additional
comments, the 11/70 processor handbook covers it.
And all the extensions introduced by the 11/74 were later also
available on the 11/44 as well as the J11.

> From a developer point of view, there are more changes than new
> instructions: ASBR locks and IIST magic?
> I'll love to see some more docs about them.
>
>
>  There were no new instructions although the final 11/74 was expected
> to include CIS (Commercial Instruction Set). In addition to ASRB and
> IIST, there were changes to:
>
>     cache    - for flushing and temporary bypass
>     pdr        - cache bypass for a specific page
>     map      - add cache bypass to unibus map entries

The cache bypass in the cache was new.
The cache bypass on the pdr was also new.

Neither introduced any new instructions, nor any other type of change to
the machine.

The unibus map does not have any cache bypass characteristic. I'm not
sure that it would even be meaningful. The most that ever happens based
on DMA is the invalidation of cache entries. Nothing is ever read from
or written to the cache for DMA.

If two processes share the same memory write-enabled, they will be
mapped with cache bypass, which means any DMA will be correctly
reflected in both processes' memory anyway.

In addition, as you note, ASRB was modified to always bypass the cache.
This was in order to more efficiently implement spin locks.

In the RSX code, an 11/74 is identified by first identifying it as an
11/70, and then checking whether the cache bypass bit in the pdr is
there or not. On a normal 11/70 that bit is ignored when the pdr is
written, and always reads back as 0.

The IIST is just a device, so it would theoretically be possible to
hook it up to any Unibus machine. Of course, it doesn't really make
sense unless you also have shared memory, but anyway.

And the IIST is used to interrupt other processors, boot them, and also
monitor if they happen to hang.
(IIST stands for Interprocessor Interrupt and Sanity Timer, if I
remember right.)

> I'm trying to figure out what they do and how should a time-sharing
> system use them.
>
>
> Some of the MP-specific bits in the cache and pdr descriptors "leaked
> out" in the 11/44 documents. The full source code for the RSX-11M+
> kernel is available (including the MP support code) on any M+
> distribution as a final definition of the system interfaces.

Right. And the RSX manuals do have the sections marked as MP systems
only, even though officially no MP systems existed. :-)

> Also where is the limit on number of processors. May work a
> virtualized /74 with 8/16/32 ALUs?.
>
>
> On the physical machine the memory boxes and IIST were limited to a max
> of 4 processors. RSX-11M+ also assumes a max of 4 processors. In a
> virtrualized environment, if you work around the above limitations with
> your own design, I don't think there are any other limitations although
> at some point spin-locks will limit any performance improvements from
> adding additional processors.

Working around these limitations means coming up with a different
hardware design, as there are masks and things in the IIST which limit
it to 4 processors.

If you want to do something incompatible, you can of course do
anything, but then it isn't an 11/74 anymore.

More fun things about the 11/74:

It is truly an SMP system. Any CPU can do any work. There is only one
piece of work that is dedicated to one CPU, and that is the update of
the system clock. Only one CPU does that, but if that CPU goes offline,
another CPU takes over.

CPUs can be brought online and offline at any point in time. It's a
simple command. The same is also true for memory. So the system can be
changed quite a lot at runtime. It is also possible to take a CPU
offline, dedicate some memory to it, and run diagnostics while the
other CPUs continue normal operation, and you can even control the
running of diagnostics on the offline CPU from within RSX.

Every CPU is pretty much just a normal 11/70. As mentioned above, a few
changes were made to the cache behavior, the MMU and one instruction,
but mostly this is totally invisible to user processes. It is commonly
not noticeable even to kernel code.

One consequence of this is that each CPU has its own Unibus. So this
system does not have a shared I/O bus. However, it also supports the
DT03/DT07, which is a Unibus switch. This allows for shared segments of
Unibus, which can then be switched between the different CPUs.
But, importantly, each CPU has its own console. They are all sitting at
the same address, but on different Unibuses.
A minimal system has the same number of Unibuses as it has CPUs.

For redundancy and performance, it supported dual-access devices, and
you commonly wanted the different ports of a device to be connected to
controllers sitting on different Unibuses, so you did not lose a device
just because a CPU went away.

The two best-known 11/74 systems were CASTOR and POLLUX.
CASTOR was a 4-CPU system used by RSX development. POLLUX was used by
the DECnet group, if I remember right, and had 2 CPUs.
A third system existed, I think, also with 2 CPUs, managed by field
service. Not sure if this was PHEANX.
By the end, hardware from different places was shipped to Colorado,
where it was put together as one 4-CPU machine. This is the one I gave
a link to some pictures of.
There is a name "DAEMON" on that machine. Not sure if it was renamed to
that. I thought this was PHEANX, but I might have mixed names up...

Around 2000, this machine developed some hardware problem, and it was
decided it wasn't worth trying to fix it. I don't know what happened to
the machine after that, but I think it was scrapped.

At that time, RSX development had been done for many years on an 11/74,
because it was simply the fastest machine around for the work. But
around this time, emulators started showing up, finally making the
11/74 no longer so needed.
The fact that it was kept around for so long also meant that a lot of
effort was put into fixing any MP issues in RSX, and into making sure
all software would function correctly on this machine.

The only OS that ever supported the 11/74 is RSX-11M-PLUS. M-PLUS was
explicitly developed for this machine, and even though the machine
never became a product, the various improvements in M-PLUS made it a
product in its own right, and it is the most advanced and
highest-performing variant of RSX-11 around.

The basic kernel required rather few changes in order to support the
11/74. But it does rely on a big lock, so it does not scale that well
to more than four processors. Already at four, the gain starts to drop
off.
Since the 11/74 CPUs themselves do not have any cache coherency between
them, the OS explicitly has to make sure that the cache is only used in
acceptable ways, and that can cost performance under some
circumstances. Most obviously, if more than one process shares the same
memory with write access, then that memory will always bypass the
cache, which costs performance.

Johnny

John Forecast

Feb 9, 2020, 7:24:57 PM
to [PiDP-11]
Actually, it does exist (see routine $MPUBM in MEMAP.MAC), but it's only used for DMA access to Unibus memory. A very niche usage, since Dave Carroll only added software support in '93, about a decade after the initial release.
CASTOR and POLLUX were the same system. CASTOR was the name when the system was running as a quad. Sometimes they would split the system into 2 duals, and the second system would be POLLUX. The DECnet 11/74 was ELROND (all of our development machines were named after Middle Earth characters). After some initial teething problems it eventually became our timesharing system (taking over from an 11/70) and was remarkably stable, with very few hardware problems.
Compute-intensive applications work well on the 11/74, but I/O-intensive or system-call-intensive applications can slow down the entire system due to cache flushes when entering the kernel.

  John.

Johnny Billquist

Feb 9, 2020, 8:20:59 PM
to [PiDP-11]
Hi, John. Thanks for the feedback and for refreshing my memory...

I'm keeping all the text in, so people have to scroll some to see more
comments...

I had to read through some documentation again here. I was pretty much
sure that the Unibus map always went directly to memory and did not
make use of the cache, but I am obviously wrong. Same for the RH70. I
don't know why, but I was almost certain that these paths never went
through the cache and only, if needed, invalidated it.
Seems I was wrong, in which case you definitely want a cache bypass for
the Unibus map as well.

Unibus memory made life even more weird, though. :-)

Dang! Now that you say so, I do remember. CASTOR and POLLUX were the
same (sortof) machine. I really need to etch that somewhere in my brain.

And yes, I even know about ELROND. I don't know why that name slipped.
The name pops up in a whole bunch of internal documentation and code...

I'm not sure that is correct. I can't remember seeing any unconditional
flush, and I don't think it is necessary. But there is the cache bypass
that is being used quite a bit... ;-)

However, there are still issues with lots of system calls, because of
the lock to enter the kernel, which only one CPU can hold at a time.
That just doesn't scale well. It has been observed in other types of
systems as well that above about four CPUs, performance doesn't scale
well anymore.

Of course, the spin locks bypass the cache. And that is also painful,
since it keeps the memory boxes very occupied, which might hurt the
other CPUs.
And for some accesses to shared kernel structures, you'll need to
bypass as well. So there are definitely issues.

John Forecast

Feb 10, 2020, 1:36:06 PM
to [PiDP-11]
There's an unconditional flush at the end of $MLOCK, which is called to take out the kernel lock. At least that routine doesn't sit in an ASRB spin loop; it executes a WAIT instruction if it can't get the lock within a couple of attempts.

Digby R.S. Tarvin

Feb 10, 2020, 5:26:42 PM
to [PiDP-11]
Very interesting. I see from the image that Johnny posted a link to that the front panel(s) are very similar to the 11/70 (PiDP-11) panel. The only difference I can see is that the key switch is replaced by another rotary knob, and the main toggle switches look a little slimmer. I can't quite read the annotation on the extra knob, but it looks like it might serve the same function as the key switch. Is that right?

Anyway, it appears that all it would take, apart from new emulation software, is a slight revision to the artwork and replacing the key switch with a rotary knob to convert four PiDP-11s into an 11/74 emulation. I'm guessing there isn't any operating system software to run on it though, which would make it a lot less interesting to emulate. (And I think I prefer the commercial 11/70 livery.)

Also, if there is one front panel per processor, then the 11/74 appears to be a very big machine. It looks about eight 19" rack widths wide in that picture.

DigbyT



Johnny Billquist

Feb 10, 2020, 5:47:20 PM
to [PiDP-11]
Right, $MLOCK is called if you take a WAIT lock. It's better than a spin
lock in that it will not be hammering the memory system all the time.

And you are right, the exec lock is a wait lock, so it does flush the
cache. Oh well.

It does depend on a spin lock in some situations, though. So spin-lock
loops might still happen. Hopefully not for long.

Johnny Billquist

Feb 10, 2020, 5:58:06 PM
to Digby R.S. Tarvin, [PiDP-11]
Of course there is an operating system. Didn't you listen? That is
RSX-11M-PLUS. It supports the 11/74 just fine. :-)
Mim is running as an 11/74 today, and has been for years...

And yes, the "extra" knob is just the key turned into a knob. Same three
positions...

One thing I find interesting is that the front panels seem to have been
mounted more vertically.

The livery is actually the same as on the DECdatasystem-570, which was
the PDP-11/70 in the corporate cabinet.
Magica looks like that.

And yes, the 11/74 is a very big machine. :-)

And, just to make it clear, chances are that just connecting four simh
instances together somehow will not work. At least not with RSX.
RSX expects machines to respond within a pretty small time window, and
it times this with just simple loops. So, with much more unpredictable
scheduling of actual CPU execution, in combination with much faster
execution of code, RSX will probably time out interprocessor requests
all the time, and just crash.

You are going to need to handle instruction scheduling in a very tight
way for an 11/74 emulation to actually work right.
This was obviously not a problem on real hardware, where the response
times would actually have been very predictable.

Johnny
Johnny

Digby R.S. Tarvin

Feb 10, 2020, 6:37:26 PM
to [PiDP-11]
On Mon, 10 Feb 2020 at 22:58, Johnny Billquist <b...@softjar.se> wrote:
> Of course there is an operating system. Didn't you listen? That is
> RSX-11M-PLUS. It supports the 11/74 just fine. :-)
> Mim is running as an 11/74 today, and have been for years...

Ah, I stand corrected. Now that you mention it, I did see your reference to having something running. But I assumed that if the machine was never commercially released, there wasn't likely to be much available in the way of finished software.
 
And yes, the "extra" knob is just the key turned into a knob. Same three
positions...

One thing I find interesting is that the front panels seems to have been
mounted more vertical.

The livery is actually the same as on the DECdatasystem-570. Which was
the PDP-11/70 in the corporate cabinet.
Magica looks like that.

Looked to me more reminiscent of the industrial versions of the 11/70, which I didn't like as much as the red and purple ones that I had always seen in commercial/educational installations. 
 
> And yes, the 11/74 is a very big machine. :-)
>
> And, just to make it clear, chances are that just connecting four simh
> instances together somehow, will not work. At least not with RSX.
> RSX expects machines to respond within a pretty small time window. And
> it times this with just simple loops. So, with a much more unpredictable
> scheduling of actually CPU execution, in combination with much faster
> execution of code, RSX will probably time out interprocessor requests
> all the time, and just crash.
>
> You are going to need to start handling instruction scheduling in a very
> tight way for en 11/74 emulation to actually work right.
> This was obviously not a problem on real hardware, where the response
> times would actually have been very predictable.

Oh, I was not proposing to connect 4 PiDP's together running multiple copies of simh - I know that wouldn't work.

What I had in mind was a PDP-11/74 emulator modified to connect to four separate blinkenlight servers, each running on one of four PiDPs.
The 11/74 emulation would run either on one of the four PiDP-11 Raspberry Pis or on a fifth machine.

The neat thing about the blinkenlight architecture is that no hardware change would be necessary to support that. The PiDP-11s just need their scripts modified to start the blinkenlight servers and simh as separate independent services, like this:

digbyt@PiDP11:~ $ systemctl status server11
● server11.service - LSB: PiDP11 Blinkenlight Server
   Loaded: loaded (/etc/init.d/server11; generated; vendor preset: enabled)
   Active: active (running) since Tue 2020-02-04 09:48:29 AEDT; 1 weeks 0 days a
     Docs: man:systemd-sysv-generator(8)
  Process: 359 ExecStart=/etc/init.d/server11 start (code=exited, status=0/SUCCE
    Tasks: 3 (limit: 4915)
   CGroup: /system.slice/server11.service
           └─443 /opt/pidp11/src/11_pidp_server/pidp11/bin-rpi/pidp1170_blinkenl

digbyt@PiDP11:~ $ systemctl status pidp11
● pidp11.service - LSB: PiDP-11 emulator
   Loaded: loaded (/etc/init.d/pidp11; generated; vendor preset: enabled)
   Active: active (running) since Tue 2020-02-04 09:48:31 AEDT; 1 weeks 0 days a
     Docs: man:systemd-sysv-generator(8)
  Process: 447 ExecStart=/etc/init.d/pidp11 start (code=exited, status=0/SUCCESS
    Tasks: 7 (limit: 4915)
   CGroup: /system.slice/pidp11.service
           ├─561 SCREEN -dmS pidp11 ./pidp11_clnt.sh
           ├─562 /bin/bash ./pidp11_clnt.sh
           ├─583 sudo ./client11 /run/pidp11/tmpsimhcommand.txt
           └─587 ./client11 /run/pidp11/tmpsimhcommand.txt

I did it so that I could use my PiDP-11 as a stand-alone console, with the simh or other blinkenlight client running on a different machine, not necessarily in the same room (or even country) as the panel.

DigbyT

Johnny Billquist

Feb 11, 2020, 2:40:49 AM
to Digby R.S. Tarvin, [PiDP-11]
Hi.

On 2020-02-11 00:37, Digby R.S. Tarvin wrote:
> On Mon, 10 Feb 2020 at 22:58, Johnny Billquist <b...@softjar.se
> <mailto:b...@softjar.se>> wrote:
>
> Of course there is an operating system. Didn't you listen? That is
> RSX-11M-PLUS. It supports the 11/74 just fine. :-)
> Mim is running as an 11/74 today, and have been for years...
>
>
> Ah, I stand corrected. Now that you  mention it, I did see your
> reference to having something running. But assumed if the machine was
> never commercially released then there wasn't likely to be much
> available in the way of finished software.

Yeah, that's the funny thing. Even though they were never sold, RSX was
well tested, and documented. Same for DECnet. Pretty much all other
software did not need any change. It all just works.
I had to explicitly fix some things in TCP/IP to make it work on the
11/74 as well, but then again, some parts of that code also digs deep
into the kernel.

So, in the end, pretty much all software I ever saw for RSX works just
fine on an 11/74.

And even the RSX manual set contains information about multiprocessor
specific things, so the documentation is also there.

> And yes, the "extra" knob is just the key turned into a knob. Same
> three
> positions...
>
> One thing I find interesting is that the front panels seems to have
> been
> mounted more vertical.
>
> The livery is actually the same as on the DECdatasystem-570. Which was
> the PDP-11/70 in the corporate cabinet.
> Magica looks like that.
>
>
> Looked to me more reminiscent of the industrial versions of the 11/70,
> which I didn't like as much as the red and purple ones that I had always
> seen in commercial/educational installations.

I don't know why DEC changed the livery, but they did. All later
machines were using those blue colors.

> And yes, the 11/74 is a very big machine. :-)
>
> And, just to make it clear, chances are that just connecting four simh
> instances together somehow, will not work. At least not with RSX.
> RSX expects machines to respond within a pretty small time window. And
> it times this with just simple loops. So, with a much more
> unpredictable
> scheduling of actually CPU execution, in combination with much faster
> execution of code, RSX will probably time out interprocessor requests
> all the time, and just crash.
>
> You are going to need to start handling instruction scheduling in a
> very
> tight way for en 11/74 emulation to actually work right.
> This was obviously not a problem on real hardware, where the response
> times would actually have been very predictable.
>
>
> Oh, I was not proposing to connect 4 PiDP's together running multiple
> copies of simh - I know that wouldn't work.

Well, if it weren't for the timing issues, it would not be a bad idea.
But having one simh instance pretend to be multiple CPUs also does not
work, because simh wasn't designed for it.
So there is a problem here...

John Forecast

Feb 11, 2020, 12:55:08 PM
to [PiDP-11]


On Tuesday, February 11, 2020 at 2:40:49 AM UTC-5, Johnny Billquist wrote:
> Hi.
>
> On 2020-02-11 00:37, Digby R.S. Tarvin wrote:
> > On Mon, 10 Feb 2020 at 22:58, Johnny Billquist <b...@softjar.se
> > <mailto:b...@softjar.se>> wrote:
> >
> >     Of course there is an operating system. Didn't you listen? That is
> >     RSX-11M-PLUS. It supports the 11/74 just fine. :-)
> >     Mim is running as an 11/74 today, and have been for years...
> >
> >
> > Ah, I stand corrected. Now that you  mention it, I did see your
> > reference to having something running. But assumed if the machine was
> > never commercially released then there wasn't likely to be much
> > available in the way of finished software.
>
> Yeah, that's the funny thing. Even though they were never sold, RSX was
> well tested, and documented. Same for DECnet. Pretty much all other
> software did not need any change. It all just works.
> I had to explicitly fix some things in TCP/IP to make it work on the
> 11/74 as well, but then again, some parts of that code also digs deep
> into the kernel.
>
> So, in the end, pretty much all software I ever saw for RSX works just
> fine on an 11/74.
>
> And even the RSX manual set contains information about multiprocessor
> specific things, so the documentation is also there.


The 11/74 was cancelled only a few weeks before it was to be announced, so most of the work was complete; 11M+ was already in field test. By that time, ELROND:: had become our timesharing system for all DECnet-RSX development and CASTOR:: was heavily used for RSX development, so there was a strong incentive to keep the code running. We were not supposed to make any functional or performance improvements on the MP side, but bug fixes were OK.

>     And yes, the "extra" knob is just the key turned into a knob. Same
>     three
>     positions...
>
>     One thing I find interesting is that the front panels seems to have
>     been
>     mounted more vertical.
>
>     The livery is actually the same as on the DECdatasystem-570. Which was
>     the PDP-11/70 in the corporate cabinet.
>     Magica looks like that.
>
>
> Looked to me more reminiscent of the industrial versions of the 11/70,
> which I didn't like as much as the red and purple ones that I had always
> seen in commercial/educational installations.

I don't know why DEC changed the livery, but they did. All later
machines were using those blue colors.


At least it was better than the later remote diagnostic consoles which got rid of the lights entirely.

Digby R.S. Tarvin

unread,
Feb 11, 2020, 2:52:18 PM2/11/20
to Johnny Billquist, [PiDP-11]
Hi,

On Tue, 11 Feb 2020 at 07:40, Johnny Billquist <b...@softjar.se> wrote:
Hi.

On 2020-02-11 00:37, Digby R.S. Tarvin wrote:

> Ah, I stand corrected. Now that you  mention it, I did see your
> reference to having something running. But assumed if the machine was
> never commercially released then there wasn't likely to be much
> available in the way of finished software.

Yeah, that's the funny thing. Even though they were never sold, RSX was
well tested, and documented. Same for DECnet. Pretty much all other
software did not need any change. It all just works.
I had to explicitly fix some things in TCP/IP to make it work on the
11/74 as well, but then again, some parts of that code also dig deep
into the kernel.

So, in the end, pretty much all software I ever saw for RSX works just
fine on an 11/74.
And even the RSX manual set contains information about multiprocessor
specific things, so the documentation is also there.

That makes it more interesting. 

I have never actually used a non-Unix system on a PDP-11, so I don't know
much about RSX. The closest I came was studying the structure of RT-11 in a computer
architecture course, using CP/M (which I understand was influenced by Tops-10),
and using VMS as a post-PDP-11 DEC OS. 

I'm mainly interested in operating systems for which source code is available,
and I assume the fact that the 11/74 didn't make it to release means Unix wasn't
ported to it. Is RSX source code available?
 
>     And yes, the "extra" knob is just the key turned into a knob. Same
>     three
>     positions...
>
>     One thing I find interesting is that the front panels seems to have
>     been
>     mounted more vertical.
>
>     The livery is actually the same as on the DECdatasystem-570. Which was
>     the PDP-11/70 in the corporate cabinet.
>     Magica looks like that.
>
>
> Looked to me more reminiscent of the industrial versions of the 11/70,
> which I didn't like as much as the red and purple ones that I had always
> seen in commercial/educational installations.

I don't know why DEC changed the livery, but they did. All later
machines were using those blue colors.

Yes, it seems strange. The aesthetics of the red and purple machines I am
familiar with are a significant factor in what sets the DEC machines apart (IMHO).
To me, the PDP-11s were just the best looking machines on the market - beating all
the later trendy Apple products hands down. I had assumed
it was good aesthetic design, but perhaps it was just luck, and the later inevitable
changes that companies feel the need to make to get new products to stand out
just didn't work as well.
 
>     And yes, the 11/74 is a very big machine. :-)
>
>     And, just to make it clear, chances are that just connecting four simh
>     instances together somehow, will not work. At least not with RSX.
>     RSX expects machines to respond within a pretty small time window. And
>     it times this with just simple loops. So, with a much more
>     unpredictable
>     scheduling of actually CPU execution, in combination with much faster
>     execution of code, RSX will probably time out interprocessor requests
>     all the time, and just crash.
>
>     You are going to need to start handling instruction scheduling in a
>     very
>     tight way for en 11/74 emulation to actually work right.
>     This was obviously not a problem on real hardware, where the response
>     times would actually have been very predictable.
>
>
> Oh, I was not proposing to connect 4 PiDP's together running multiple
> copies of simh - I know that wouldn't work.

Well, if it weren't for the timing issues, it would not be a bad idea.
But having one simh instance pretend to be multiple CPUs also does not work,
because simh wasn't designed for it.
So there is a problem here...

I haven't looked at the 11/74 architecture, but assuming the multiple processors
share physical memory, I would think stitching four simh instances together well enough
to allow an 11/74 OS to run transparently would be harder than enhancing a single
simh to include the 11/74 in its available architectures.

Is the E11 that you mentioned as having experimental support for the 11/74 the Ersatz-11 emulator?
If so, that is the sort of thing that I would imagine could be modified to use four networked PiDP-11 panels.
Is E11 what you are running your 11/74 RSX system on?

Digby

Johnny Billquist

unread,
Feb 11, 2020, 4:59:00 PM2/11/20
to John Forecast, [PiDP-11]
Hi.
Interested in looking at some DECnet bugs...? ;-)
I have some odd crashes, for which I've kept the crash dumps, which
appear to be inside DECnet somewhere...

> >     And yes, the "extra" knob is just the key turned into a knob.
> Same
> >     three
> >     positions...
> >
> >     One thing I find interesting is that the front panels seems
> to have
> >     been
> >     mounted more vertical.
> >
> >     The livery is actually the same as on the DECdatasystem-570.
> Which was
> >     the PDP-11/70 in the corporate cabinet.
> >     Magica looks like that.
> >
> >
> > Looked to me more reminiscent of the industrial versions of the
> 11/70,
> > which I didn't like as much as the red and purple ones that I had
> always
> > seen in commercial/educational installations.
>
> I don't know why DEC changed the livery, but they did. All later
> machines were using those blue colors.
>
>
> At least it was better than the later remote diagnostic consoles which
> got rid of the lights entirely.

The remote diagnostics console (Update actually has a couple of them)
is really boring. Just a few lamps, and all dark purplish, with a white
frame.

But I can understand why it was developed. I'm usually far from Magica,
and while I do have access to the console, if the machine were to halt,
I cannot start it again...

Johnny Billquist

unread,
Feb 11, 2020, 5:12:29 PM2/11/20
to [PiDP-11]
Hi.

On 2020-02-11 20:52, Digby R.S. Tarvin wrote:
> Hi,
>
> On Tue, 11 Feb 2020 at 07:40, Johnny Billquist <b...@softjar.se
> <mailto:b...@softjar.se>> wrote:
>
> Hi.
>
> On 2020-02-11 00:37, Digby R.S. Tarvin wrote:
>
> > Ah, I stand corrected. Now that you  mention it, I did see your
> > reference to having something running. But assumed if the machine
> was
> > never commercially released then there wasn't likely to be much
> > available in the way of finished software.
>
> Yeah, that's the funny thing. Even though they were never sold, RSX was
> well tested, and documented. Same for DECnet. Pretty much all other
> software did not need any change. It all just works.
> I had to explicitly fix some things in TCP/IP to make it work on the
> 11/74 as well, but then again, some parts of that code also dig deep
> into the kernel.
>
> So, in the end, pretty much all software I ever saw for RSX works just
> fine on an 11/74.
> And even the RSX manual set contains information about multiprocessor
> specific things, so the documentation is also there.
>
>
> That makes it more interesting.

Indeed. :-)

> I have never actually used a non-unix system on a PDP11, so don't know
> much about RSX. Closest I came was studying the structure of RT-11 in a
> computer
> architecture course, using CP/M (which I understand was influenced by
> Tops-10)
> and using VMS as a post PDP11 DEC OS.

RT-11 is a very different beast from RSX. RT-11 bears a close resemblance
to OS/8, but can also be seen as similar to CP/M, as well as Tops-10.

VMS is internally like RSX on steroids. It is very similar, and early
versions of VMS were basically shipped with the RSX userland, before they
got everything running native. Which is why early VAXen also had PDP-11
compatibility mode. So you could just take your RSX image and run it
under VMS as well.

At the user level, though, VMS looks a little more like RSTS/E.

> I'm mainly interested in operating systems for which source code is
> available,
> and I assume the fact that the 11/74 didn't make it to release means Unix
> wasn't
> ported to it. Is RSX source code available?

Yes, RSX comes with sources. And not the stripped things you get with
RT-11 either, but full proper sources for all of the kernel, and some
other bits. It is enough that you could have reverse-engineered how an
11/74 works from it, and made something for Unix.
But the fact that you wouldn't have had a machine on which to test Unix
pretty much meant it would never happen. Not to mention that the design
of Unix does not easily allow moving to MP. Unix leans heavily on raising
CPU priorities to serialize access to things in the kernel. You have it all
over the place. And with MP, all of that has to be redesigned.
RSX, on the other hand, has a serialization of kernel access based on
work queues. All drivers and other code just queue the things that need
to be done, and then exit. So all you have to do is make
sure that entering data on the work queue is safe, and then the
drivers are good. Then you just need a lock whenever entering
the kernel normally, and you're good. So rather few changes were, in
the end, needed to make RSX MP-able.
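A toy sketch of that queue-based serialization scheme, in plain Python threads (not RSX or MACRO-11; all names here are illustrative): "drivers" only enqueue work items, and a single consumer thread plays the kernel, draining the queue serially, so the enqueue is the only operation that has to be safe against concurrent callers.

```python
import threading
import queue

work = queue.Queue()   # thread-safe put(), standing in for the protected fork-queue insert
results = []           # touched only by the single "kernel" thread

def driver(n):
    # interrupt-level code: queue the request and return immediately
    work.put(("io-done", n))

def kernel():
    # serialized kernel: processes one item at a time, in arrival order
    while True:
        item = work.get()
        if item is None:     # sentinel: shut down
            break
        results.append(item)

k = threading.Thread(target=kernel)
k.start()
drivers = [threading.Thread(target=driver, args=(i,)) for i in range(4)]
for t in drivers:
    t.start()
for t in drivers:
    t.join()
work.put(None)           # all drivers are done; stop the kernel thread
k.join()
print(sorted(results))   # -> [('io-done', 0), ('io-done', 1), ('io-done', 2), ('io-done', 3)]
```

Because only the kernel thread ever touches shared state beyond the queue itself, making the system MP-safe reduces to making `put()` safe, which mirrors the small amount of work RSX needed.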

> I don't know why DEC changed the livery, but they did. All later
> machines were using those blue colors.
>
> Yes, it seems strange. The aesthetics of the red and purple machines I am
> familiar with are a significant factor in what sets the DEC machines
> apart (IMHO).
> To me, the PDP-11s were just the best looking machines on the market -
> beating all
> the later trendy Apple products hands down. I had assumed
> it was good aesthetic design, but perhaps it was just luck and the later
> inevitable
> changes that companies feel the need to make to get new products to
> stand out
> just didn't work as well.

DEC had different color schemes for all their machines. The PDP-11 was
maroon/purple. PDP-8 was orange/yellow. PDP-12 was green/something
greenish. PDP-10 was some kind of blue combination.

Other color schemes also existed. You had the Industrial-11, which was
red/blue if I remember right. And there were other special variants as
well. And I don't know/remember the color scheme for the PDP-9 or PDP-15.

But if you ask me, they all looked good. :-)

> Well, if it weren't for the timing issues, it would not be a bad idea.
> But having one simh instance pretend to be multiple CPUs also does not work,
> because simh wasn't designed for it.
> So there is a problem here...
>
>
> I haven't looked at the 11/74 architecture, but assuming the multiple
> processors
> share a physical memory I would think stitching four simh's together
> well enough
> to allow an 11/74 OS to run transparently would be harder than enhancing
> a single
> simh to include the 11/74 in its available architectures.

I think this is more a question about simh than the 11/74. And as far as
I am aware, it would be much harder to enhance simh to have four CPUs
than to just stitch together four machines running separate instances of
simh. The latter you could even do by just creating an mmapped memory
region that four different processes had access to. Then you already
have your shared memory done.

But the big problem is, as I said, timing related.
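Setting the timing problem aside, the shared-memory half of the idea can be sketched quite compactly. The following is an illustrative Python toy, not simh code: one anonymous shared mapping is created before `fork()`, so both "CPU" processes see the same "physical memory". (A real four-instance setup would mmap a named file so independently started emulator processes could attach; the sizes and addresses here are made up.)

```python
import mmap
import os
import struct

MEM_SIZE = 256 * 1024                  # stand-in for PDP-11 physical memory
shared = mmap.mmap(-1, MEM_SIZE)       # anonymous shared mapping (Unix); inherited across fork

pid = os.fork()
if pid == 0:
    # "CPU 1": store a 16-bit word at a fixed physical address
    struct.pack_into("<H", shared, 0o1000, 0o52525)
    os._exit(0)

os.waitpid(pid, 0)
# "CPU 0": read back the word the other processor wrote
word, = struct.unpack_from("<H", shared, 0o1000)
print(oct(word))                       # -> 0o52525
```

The mapping gives you shared memory for free; what it does not give you is the predictable interprocessor timing that RSX expects, which is the hard part.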

> Is the E11 that you mentioned as having experimental support for the
> 11/74 the Ersatz-11 emulator?

Yes.

> If so, that is the sort of thing that I would imagine could be modified to
> use four networked PiDP-11 panels.

Definitely.

> Is E11 what you are running your 11/74 RSX system on?

Yes.

John Forecast

unread,
Feb 11, 2020, 6:38:25 PM2/11/20
to [PiDP-11]
The last time I looked at the DECnet-RSX source code was probably around 1984. Without source code I think I'll have to decline ;-)

Steve Tockey

unread,
Feb 11, 2020, 7:10:49 PM2/11/20
to [PiDP-11]

Johnny Billquist wrote:

"DEC had different color schemes for all their machines. The PDP-11 was 
maroon/purple. PDP-8 was orange/yellow. PDP-12 was green/something 
greenish. PDP-10 was some kind of blue combination."

Interestingly, the PDP-11/20 had the same orange/yellow color scheme as the PDP-8/e and looked very similar at first glance. See, for example:

https://en.wikipedia.org/wiki/File:Digital_PDP11-IMG_1498_cropped.jpg



Johnny Billquist

unread,
Feb 11, 2020, 7:16:25 PM2/11/20
to Steve Tockey, [PiDP-11]
On 2020-02-12 01:10, Steve Tockey wrote:
>
> Johnny Billquist wrote:
>
> /"DEC had different color schemes for all their machines. The PDP-11 was /
> /maroon/purple. PDP-8 was orange/yellow. PDP-12 was green/something
> /
> /greenish. PDP-10 was some kind of blue combination."/
>
> Interestingly, the PDP-11/20 had the same orange/yellow color scheme as
> the PDP-8/e and looked very similar at first glance. See, for example:
>
> https://en.wikipedia.org/wiki/File:Digital_PDP11-IMG_1498_cropped.jpg

Either I'm color blind, or that is a color scheme that looks like a
PDP-12. :-)

Anyway. A rather odd color scheme that I haven't seen before on a
PDP-11. But, as I mentioned, there were certainly various special color
variants. Like the Industrial-11, which used red/blue, but was still
just a PDP-11.
I wonder what this color scheme might have been for...?

Here is a picture of an 11/20 in the more "normal" color scheme:
https://www.computerhistory.org/revolution/minicomputers/11/366/1946

Garry Lockyer

unread,
Feb 11, 2020, 7:54:15 PM2/11/20
to Steve Tockey, [PiDP-11]
Green PDP-11s might have been “Industrial” products.


Regards,

Garry Lockyer
E: Ga...@Lockyer.ca


On Feb 11, 2020, at 16:10, Steve Tockey <steve...@gmail.com> wrote:


--
You received this message because you are subscribed to the Google Groups "[PiDP-11]" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pidp-11+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/pidp-11/5024f42e-8939-42ca-aa03-db86e8be5cdd%40googlegroups.com.

Johnny Billquist

unread,
Feb 11, 2020, 8:05:40 PM2/11/20
to Garry Lockyer, Steve Tockey, [PiDP-11]
On 2020-02-12 01:54, Garry Lockyer wrote:
> Green PDP-11 might have been “Industrial” products.

Possibly also educational, or maybe OEM. I do know that later
Industrial-11 ran in blue/red, but they might have used a different
color scheme in the early days, so I would definitely not exclude it as
a possibility.

There were many niche markets, which had something or other a bit
special. I'm trying to remember, but I think the PDP-8/F could have some
funny color scheme as well, since it was commonly the OEM variant.

Steve Tockey

unread,
Feb 11, 2020, 10:50:38 PM2/11/20
to [PiDP-11]

The -8/f was basically an -8/e in a shorter (half deep) cabinet, with a different power supply, and LEDs on the front panel instead of incandescent lights. The -8/m was technically the OEM version of the -8/f, but the differences between the -8/f and -8/m were pretty minor. Both the -8/f and -8/m had a white outer border on the front panel whereas the -8/e had a black outer border.

A photo of a -8/f front panel is here (near the bottom of the page): https://jeelabs.org/article/1607a/

Lars Brinkhoff

unread,
Feb 11, 2020, 11:58:14 PM2/11/20
to [PiDP-11]
Johnny Billquist wrote:
DEC had different color schemes for all their machines. The PDP-11 was
maroon/purple. PDP-8 was orange/yellow. PDP-12 was green/something
greenish. PDP-10 was some kind of blue combination.

I collected some photos highlighting the PDP colors here:

Johnny Billquist

unread,
Feb 12, 2020, 1:40:27 AM2/12/20
to Steve Tockey, [PiDP-11]
On 2020-02-12 04:50, Steve Tockey wrote:
>
> The -8/f was basically an -8/e in a shorter (half deep) cabinet, with a different power supply, and LEDs on the front panel instead of incandescent lights. The -8/m was technically the OEM version of the -8/f, but the differences between the -8/f and -8/m were pretty minor. Both the -8/f and -8/m had a white outer border on the front panel whereas the -8/e had a black outer border.
>
> A photo of a -8/f front panel is here (near the bottom of the page): https://jeelabs.org/article/1607a/

I was trying to remember which of the 8/F and 8/M was the OEM. Anyway, I
have both...
And I remember they look slightly different than the 8/E (which I also
have).

I can't remember any differences at all between the 8/M and 8/F. But
maybe there was something. But I was wondering if there wasn't some
color difference between the 8/F and 8/M.

But the point was just that essentially, DEC did many different variants
on color schemes for machines, and even different "models" for specific
purposes.

Jon Brase

unread,
Feb 12, 2020, 3:42:28 AM2/12/20
to [PiDP-11]


On Tuesday, February 11, 2020 at 6:10:49 PM UTC-6, Steve Tockey wrote:


Interestingly, the PDP-11/20 had the same orange/yellow color scheme as the PDP-8/e and looked very similar at first glance. See, for example:


Have you ever been tested for colorblindness? That looks nothing like it at first glance, but if I apply a colorblindness filter to my browser window, the two color schemes become indistinguishable. Even the "normal" PDP-11 color scheme (like the 11/20 image Johnny linked, or the 11/70 color scheme that is used for the PiDP-11) comes to resemble the PDP-8 scheme somewhat: the red parts match the color of the orange parts of the PDP-8 scheme, but the purple parts are just grey. About the only DEC color scheme that doesn't end up looking even somewhat like a PDP-8 is the PDP-10's blue scheme.

Jon Brase

unread,
Feb 12, 2020, 3:56:15 AM2/12/20
to [PiDP-11]


On Tuesday, February 11, 2020 at 6:16:25 PM UTC-6, Johnny Billquist wrote:
On 2020-02-12 01:10, Steve Tockey wrote:
>
> Johnny Billquist wrote:
>
> /"DEC had different color schemes for all their machines. The PDP-11 was /
> /maroon/purple. PDP-8 was orange/yellow. PDP-12 was green/something
> /
> /greenish. PDP-10 was some kind of blue combination."/
>
> Interestingly, the PDP-11/20 had the same orange/yellow color scheme as
> the PDP-8/e and looked very similar at first glance. See, for example:
>
> https://en.wikipedia.org/wiki/File:Digital_PDP11-IMG_1498_cropped.jpg

Either I'm color blind, or that is a color scheme that looks like a
PDP-12. :-)

Anyway. A rather odd color scheme that I haven't seen before on a
PDP-11. But, as I mentioned, there were certainly various special color
variants. Like the Industrial-11, which used red/blue, but was still
just a PDP-11.

Wikipedia lists the green scheme as the "original" PDP-11 front panel.

It also has a picture of an 11/70 with a blue scheme that looks more like that of the PDP-10

Paul Birkel

unread,
Feb 12, 2020, 4:08:08 AM2/12/20
to [PiDP-11]


On Wednesday, February 12, 2020 at 3:56:15 AM UTC-5, Jon Brase wrote:

Wikipedia lists the green scheme as the "original" PDP-11 front panel.

You're misinterpreting the legend on Wikipedia. All they are claiming is that it is a photo of "an original front panel for a PDP-11/20". AFAIK, while green is striking, it wasn't the default livery.

The initial production -- no model numbers yet, just labeled "PDP-11" -- was the light/dark purple combination. I have one that originally was installed at CMU (presumably due to the Gordon Bell connection).

Garry Lockyer

unread,
Feb 12, 2020, 4:12:22 AM2/12/20
to Paul Birkel, [PiDP-11]
Commercial Systems used a blue and white scheme on -11s when they started using mid-height racks.


Regards,

Garry Lockyer
E: Ga...@Lockyer.ca


On Feb 12, 2020, at 01:08, Paul Birkel <pbi...@gmail.com> wrote:



Wolfgang Houben

unread,
Feb 12, 2020, 5:25:23 AM2/12/20
to [PiDP-11]
I saw this green-colored PDP-11/20 in the early seventies on Foxboro FOX 2/30 process control systems.

Mark Matlock

unread,
Feb 12, 2020, 11:54:12 AM2/12/20
to Johnny Billquist, [PiDP-11]
Johnny, All,

I realize that E11 is the simplest way to explore the MP architecture of the 11/74, but I was wondering what your thoughts would be about a multi-J11 implementation?

When I think about the 11/94 (or 11/93 or M100-04) processor boards, with 4 MB of fast RAM connected locally to the CPU (eliminating the complications of cache memory), I was wondering whether, given today's technology, one could design 4 MB of multi-ported RAM that would allow multiple J11s to run RSX11M+ in MP mode?

It wouldn't have the redundant capabilities of multiple Unibuses with DT07s or multi-ported disks, but it still might be pretty interesting to play with. We see J11s come up from time to time, and I remember once seeing a lot of 100 for sale.

Also, if the RAM were battery backed up, the fast power recovery feature of RSX is one I've wanted to explore. I remember “back in the day” the power going out on the 11/44 at work, which had battery-backed RAM (to simulate core). When the power came back and my VT100 came back, I could hit ^R in KED or EDT and at most I might be missing a keystroke or two. The RK07s and RL02s all came back up and the file system stayed sound. Just the sort of system one would want running your nuclear reactors.

Best,
Mark

Steve Tockey

unread,
Feb 12, 2020, 5:14:09 PM2/12/20
to [PiDP-11]
John Brase wrote:

"Have you ever been tested for colorblindness?"

Actually, yes. Formally tested. I've known for many years. And my flavor of colorblindness even goes beyond the common male red-green kind. I have some coping mechanisms so usually it's not much of a problem. Looks like it tripped me up here, big time.

For anyone worried about the PiDP-10 switch color matching job I reported on last Saturday, the actual matching was not done by me. I recruited a non-colorblind friend for that--on purpose.

Charley Jones

unread,
Feb 12, 2020, 5:25:59 PM2/12/20
to Steve Tockey, [PiDP-11]
True story: working at a company, one of the SQL Servers went down. I went and inspected the physical hardware: red blinking light. I got one of the techs and said, do you see that red blinking light? He said no, colorblind. Wow... so I asked the other tech: also colorblind. I pointed this out to their supervisor: also colorblind. Note to self: red/green is probably a bad indicator color.


Sent from my iPhone Xs!

On Feb 12, 2020, at 2:14 PM, Steve Tockey <steve...@gmail.com> wrote:



Jonathan Morton

unread,
Feb 12, 2020, 5:33:05 PM2/12/20
to Charley Jones, Steve Tockey, [PiDP-11]
> On 13 Feb, 2020, at 12:25 am, Charley Jones <data...@gmail.com> wrote:
>
> True story: working at a company, one of the SQL Servers went down. I went and inspected the physical hardware: red blinking light. I got one of the techs and said, do you see that red blinking light? He said no, colorblind. Wow... so I asked the other tech: also colorblind. I pointed this out to their supervisor: also colorblind. Note to self: red/green is probably a bad indicator color.

Certainly if information is conveyed *only* by colour, red/green might be the worst choice. Unfortunately they're also two of the most common (and earliest available) colours of LED, and blue light appears much less bright for a given intensity in human vision.

The best policy is to convey critical information by some means other than colour, and use the latter mainly for aesthetic purposes and as a non-critical information channel.

- Jonathan Morton

Jon Brase

unread,
Feb 12, 2020, 6:06:10 PM2/12/20
to [PiDP-11]


On Wednesday, February 12, 2020 at 4:33:05 PM UTC-6, Jonathan Morton wrote:


Certainly if information is conveyed *only* by colour, red/green might be the worst choice.  Unfortunately they're also two of the most common (and earliest available) colours of LED, and blue light appears much less bright for a given intensity in human vision.

The best policy is to convey critical information by some means other than colour, and use the latter mainly for aesthetic purposes and as a non-critical information channel.

That's one of the reasons error conditions are often indicated by flashing, and normal status with a steady light. Ethernet activity lights are a bit of an exception: there you have a flashing green indicator for activity, but unlike your typical red flashing error light, there isn't a particular frequency to the flashing.

Geoffrey McDermott

unread,
Feb 12, 2020, 6:22:36 PM2/12/20
to pid...@googlegroups.com
50 years ago, when selecting what rate I wish to be in the Navy, NONE of
the electronics fields were available if you had any level of
color-blindness.

I'm surprised about the techs and their supervisor being even in the
technical field with that impediment so important in electronics and
especially to someone who has to maintain any electronics.


Jon Brase

unread,
Feb 12, 2020, 7:14:27 PM2/12/20
to [PiDP-11]


On Wednesday, February 12, 2020 at 5:22:36 PM UTC-6, Geoffrey McDermott wrote:

50 years ago, when selecting what rate I wish to be in the Navy, NONE of
the electronics fields were available if you had any level of
color-blindness.

I'm surprised about the techs and their supervisor being even in the
technical field with that impediment so important in electronics and
especially to someone who has to maintain any electronics.

I have a feeling, though, that most of the electronics fields available in the Navy involved considerably more high-voltage work than the typical civilian IT worker is ever expected to do. In civilian life power supplies are typically not user-serviceable, most stuff just plugs in to mains power with a standard plug, and the site electrical system is handled by grounds and maintenance, not IT; whereas in the Navy, you don't have time to RMA a power supply in combat, and you may not have access to your box of spare whole power supplies due to battle damage, so you have to open it up and repair it *now*, and you'd better not use the wrong capacitor or reverse hot and ground. Also, even without inadvertently putting yourself across a couple hundred volts, a screwup that causes 5 minutes' delay in combat can be a safety-of-life issue, because the enemy ain't gonna wait.

Geoffrey McDermott

unread,
Feb 12, 2020, 8:25:15 PM2/12/20
to pid...@googlegroups.com
On 2/12/2020 7:14 PM, Jon Brase wrote:
>
> I have a feeling though that most of the electronics fields available
> in the Navy involved considerably more high voltage work than the
> typical civilian IT worker is ever expected to do. In civilian life
> power supplies are typically not user-serviceable, most stuff just
> plugs in to mains power with a standard plug, the site electrical
> system is handled by grounds and maintenance, not IT, whereas in the
> Navy, you don't have time to RMA a power supply in combat, and you may
> not have access to your box of spare whole power supplies due to
> battle damage, so you have to open it up and repair it *now*, and
> you'd better not use the wrong capacitor or reverse hot and ground.
> Also, even without inadvertently putting yourself across a couple
> hundred volts, a screwup that causes 5 minutes delay in combat can be
> a safety-of-life issue, because the enemy ain't gonna wait.

Well, I was a hardware computer geek (Data Systems Technician) and we
*normally* didn't work on anything with life threatening voltages, but
EVERYTHING was color coded, even the module numbers on the PCBs we had
to replace, and eventually repair, not to mention the components
themselves used in the repair.

As for battle damage, the systems we maintained were reconfigurable so
if one of the computers failed (we used 3 at a time, with a 4th as a hot
spare), we could reconfigure in about a minute, reboot all the computers
from magnetic tape, and be back online in less than 3 minutes. We also
had a really degraded configuration which only needed 2 computers to
function.

I did take a high-voltage hit while in the shipyard doing maintenance,
and believe me, 220 VAC at 400 Hz isn't pleasant, and certainly could have
killed me. The screw-up wasn't because of color codes being read wrong.


Johnny Billquist

unread,
Feb 13, 2020, 3:37:17 AM2/13/20
to Mark Matlock, [PiDP-11]
Hi.

On 2020-02-12 17:54, Mark Matlock wrote:
> Johnny, All,
>
> I realize that E11 is the most simple way to explore the MP architecture of the 11/74, but I was wondering what your thoughts would be about a multi-J11 implementation?

You're thinking about building a physical machine? That is of course
possible...

> When I think about the 11/94 (or 11/93 or M100-04) processor boards where 4 MB of fast RAM locally connected to the CPU, (eliminating the need for complications of cache memory) I was wondering if given todays technology could one design 4MB of multi-ported RAM that would allow multiple J11s to run RSX11M+ in MP mode?

Well, the processor boards that you have there today are not relevant,
but the J11 is.

> It wouldn’t have the redundant capabilities of multiple Unibuses with DT07s or multi-ported disks but it still might be pretty interesting to play with. We see J11s come up from time to time and I once remember seeing a lot of 100 for sale.
>
> Also if the RAM was battery backed up the fast power recovery feature of RSX is one I’ve wanted to explore. I remember “back in the day” having the power go out and the 11/44 at work which had battery backed up RAM (to simulate core). When the power came back and my VT100 came back I could hit ^R at KED or EDT and at most I might be missing a keystroke or two. The RK07s and RL02s all came back up the file system stayed sound. Just the sort of system one would want to run your nuclear reactors.

So, to answer your question - DEC already did this. However, it was
never officially announced. But it was mentioned at some DECworld or
DECUS meeting, or so, at least once.

The J11 has all the hooks for running MP. All you need is the
processors and a shared memory. And yes, today you could easily run it
at full speed without any separate cache. So it would be pretty nice, I
think.

But you'd have to start by doing all the electrical design. So it would
be a lot of work.

Mark Matlock

unread,
Feb 13, 2020, 12:35:56 PM2/13/20
to Johnny Billquist, [PiDP-11]
Johnny,
Thanks for your reply and info. I am intrigued by your comment that DEC had already worked on a J11 based MP system. It would be a very interesting read to find any details about that project.

One part that I thought about afterward was how the IIST system might be implemented in addition to the multiported memory. If there is no cache involved the ASBR trick would not be needed.

I could see that some FPGA logic could be utilized or possibly the PRUs of a BeagleBone similar to what Jorge Hoppe did on his UniBone board where the UniBus logic is handled by the two PRUs.

Best,
Mark

Johnny Billquist

unread,
Feb 13, 2020, 8:40:14 PM2/13/20
to Mark Matlock, [PiDP-11]
Hi.

On 2020-02-13 18:35, Mark Matlock wrote:
> Johnny,
> Thanks for your reply and info. I am intrigued by your comment that DEC had already worked on a J11 based MP system. It would be a very interesting read to find any details about that project.

Miim (that's machine intelligence and something, Bruce Mitchell's site)
had the notes from a DECUS meeting where the questions were asked, and
where DEC more or less admitted that they had already done an MP PDP-11
using the J11.

But now I cannot access Miim.com. I don't know if it is truly down, or
if it is just Bruce blocking addresses again.

Anyhow, I think the link is http://www.miim.com/faq/hardware/multipro.shtml

> One part that I thought about afterward was how the IIST system might be implemented in addition to the multiported memory. If there is no cache involved, the ASRB trick would not be needed.

Well, IIST is simply a device that can interrupt any arbitrary processor
from any arbitrary processor.

And no, the ASRB trick goes slightly beyond the cache, although you are
correct that if there is no cache, things become much simpler.
However, you must still ensure that RMW memory access is atomic. I think
that by default, the RMW cycles generated by 11/70 instructions (or
handled in the memory box, if that's where it was done) were not truly
atomic. However, since you only had one processor, that wasn't a problem
anyway.
But for the 11/74, this became important.

The same thing would be true for your machine as well. You really must
make sure that the multiple memory cycles involved in an RMW instruction
are atomic.

After that, you should be good to go.
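Johnny's atomicity point can be made concrete with a small C11 sketch (an editorial illustration, not 11/74 code): two threads stand in for two J11s sharing memory; one counter is bumped with a split read-then-write, the other with a single interlocked RMW.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

enum { ITERS = 100000 };

/* racy_ctr models a non-interlocked RMW: a read cycle followed by a
 * separate write cycle, as on a stock 11/70.  locked_ctr models a true
 * atomic RMW, as a multiprocessor requires.  The threads below stand
 * in for CPUs sharing memory. */
static _Atomic unsigned long racy_ctr;
static _Atomic unsigned long locked_ctr;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        /* read... */
        unsigned long v = atomic_load_explicit(&racy_ctr, memory_order_relaxed);
        /* ...then write: another CPU can slip in between, losing updates */
        atomic_store_explicit(&racy_ctr, v + 1, memory_order_relaxed);
        /* one interlocked read-modify-write: never loses an update */
        atomic_fetch_add(&locked_ctr, 1);
    }
    return NULL;
}

/* Run two "CPUs" against the shared counters; return the interlocked total. */
static unsigned long run_two_cpus(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return atomic_load(&locked_ctr);
}
```

On a real run racy_ctr usually ends up short of 2 × ITERS, which is exactly the lost-update problem that interlocked memory cycles on the 11/74 existed to prevent.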

> I could see that some FPGA logic could be utilized or possibly the PRUs of a BeagleBone similar to what Jorge Hoppe did on his UniBone board where the UniBus logic is handled by the two PRUs.

Sure.

Johnny

Mark Matlock

unread,
Feb 14, 2020, 1:02:39 AM2/14/20
to Johnny Billquist, [PiDP-11]
Miim (that's machine intelligence and something, Bruce Mitchells site) had the notes from a DECUS meeting where the questions were asked, and where DEC more or less admitted that they had already done an MP PDP-11 using the J11.

But now I cannot access Miim.com. I don't know if it is truly down, or if it is just Bruce blocking addresses again.

Johnny,
   Thanks for reminding me about Bruce Mitchell’s web site. I had almost forgotten about it. For those new to RSX, Bruce was the editor of the Multitasker for many years. I have spoken to Bruce in the past and, if I remember one of those conversations with him correctly, there was a DECUS event at some point where he won one of the 11/74 front panels. At any rate he does still have the www.Miim.com website up, but it has some security that often makes it appear offline. It does have a good amount of RSX11M+ MP info which I’m sharing below for the folks interested in this topic. The J11 MP stuff Johnny remembered was (I think) under the Q-Bus question #10 below.

Best,
Mark 

Multiprocessor FAQ

 
 

This document answers questions about Digital's 11/70-based multiprocessor UNIBUS PDP-11s variously known as the PDP-11/72 and PDP-11/74.  It does not relate to any of the various Carnegie-Mellon multiprocessor PDP-11 systems.

1.  Were any multiprocessor PDP-11s released commercially?

2.  Why weren't the multiprocessor PDP-11s released commercially?

3.  What was the hardware configuration of the multiprocessor 11/70?

4.  What was the software configuration of the multiprocessor 11/70?

5.  What was the performance of the multiprocessor?

6.  How was the multiprocessor system bootstrapped?

7.  How did the multiprocessor system handle CPU crashes?

8.  Why was ASRB used for cache interlock?

9.  How many multiprocessors were built?

10.  Was a Q-bus version ever built?

11.  I don't believe that such a thing ever existed.

12.  What happened to the RSX implementation group's system?

13.  What was the 11/68?

14.  How did XXDP run on one CPU without running on all CPUs?

For information about the modifications to M-Plus and how the operating system coexisted with the mP hardware, see the transcript of Brian McCarthy's "Multiprocessor RSX" presentation at the Spring 1985 DECUS Symposium in New Orleans.  It remains the only readily accessible information about multiprocessor M-Plus.  (transcript at this link).

 

Were any multiprocessor PDP-11s released commercially?

What was to have been DEC's commercial version of the multiprocessor was called (variously) the 11/70mP, 11/72, or PDP-11/74.  Despite many rumors to the contrary, no multiprocessors were ever released or sold commercially.

One machine went to the field for beta test, but was returned to Digital at the end of the test period.

Source: The Big Book of RSX Applications, Volume II, Appendix B

 

Why weren't the multiprocessor PDP-11s released commercially?

There are rumors about this, all emanating from non-DEC sources with no direct experience of the multiprocessor systems.  The most commonly heard rumor is associated with the introduction of the VAX 11/780 about the same time.  It holds that the 11/74 would have outperformed the VAX by a factor of (2, 4, 8, 16, pick any random number), and so DEC pulled the PDP-11/74 to sell VAXes.

Emotionally appealing, but untrue.  A 780 was faster than an 11/70 and a 16-bit multiprocessor cannot reasonably be compared to a 32-bit uniprocessor.  Also implausible in light of Ken Olsen's well known love of the PDP series.

It has also been said that the KB11-CM backplane was too expensive to manufacture and sell at a profit.  However, the differences between a KB11-C and KB11-CM backplane were small, and in any case DEC's wire-wrapped backplanes were produced on automated Gardner-Denver wirewrap machines.  Complexity of that particular part was not an issue.

The truth about the 11/74's non-introduction is that it was a disappointingly simple business decision.

 

"The product was technically quite successful.  Financially, it probably would have been successful.  But, at the time, it took a lot of resources to build 11/74s and configure them.  There was an 11/74 configuration committee that had to review each of the PDP-11/74 orders and approve the list of devices that were on them, et cetera.  This was diametrically opposed to the PDP-11 line philosophy, being actually more like the DECsystem-10 and DECsystem-20 approach to systems – individually tailored systems, unique to each customer, requiring lots of DEC support.

"It was clear that it was going to take a lot of effort to get the systems shipped.  It wasn't clear that DEC made more money by shipping 11/74s and using resources there, rather than by selling, say, 11/44s which were easy to configure.

"There were also Field Service issues.  Field Service runs in a 'give us the entire system and we'll run the diagnostics and fix it' mode.  This is diametrically opposed to the philosophy of the multiprocessor system.

"The complexity of 11/74s was high.  The RSX staff worked on the hardware because they didn't trust other people to touch theirs.  There were 500-some BC06R cables in it of varying length, which tended to go bad every once in a while and were difficult to find.  There were problems with feedthrough connectors going bad where BC06Rs went through a bulkhead, which were nightmares to find.

"It would have been very difficult to support anything other than a dual processor in the field because of the Field Service aspects."

 

Source:  Brian S. McCarthy, DEC RSX group

 

What was the hardware configuration of the multiprocessor 11/70?

The system was a shared-memory, symmetric multiprocessor.  Between one and four CPUs were supported.  Any subset with at least one disk, one memory, one CPU and a console terminal was capable of running the operating system.

Each of the CPUs had independent consoles.

There were also other elements such as the interprocessor interrupt mechanism.  A mechanism is needed to get the attention of another CPU when scheduling.  This was done through the Interprocessor Interrupt and Sanity Timer (IIST), a slightly modified CSS* product called the DIP11.

The CPUs had separate I/O buses.  There were some peripherals on one or another I/O bus.  There were also bus switches between the various buses so that peripherals could be moved around.

Other than "who handles the clock interrupt," since there can be only one clock updating the calendar at a time, there were no distinctions between the CPUs.  All were scheduled as resources, and the system was truly symmetric.

Source:  The Big Book of RSX Applications, Volume II, Appendix B

 

What was the software configuration of the multiprocessor 11/70?

There was one single copy of the RSX-11M-Plus Executive running in the shared memory at a time.

The major software work was distribution of I/O.  RSX-11M-Plus was modified to have a field called the UNIBUS Run Mask (URM) in each controller data structure.  When a driver needs to execute on the device, it forks to get to the correct processor.

There was a need for hardware reconfiguration under software control.  This is where CON and HRC came from.  Support for multiport disks was added.  A number of these features are also available in single CPU systems, and are useful.

Support for switched buses was added.  This gave the ability to link and unlink DT07 bus switches and access peripherals on the shared portion of the bus.

Shadow recording was added and this is also useful in the single-CPU environment.  It was there to duplicate data, and keep it duplicated while the application ran, so that in the event of a catastrophic disk failure, (1) the application could continue to run, and (2) there was still a good copy of the data.

Source:  The Big Book of RSX Applications, Volume II, Appendix B

 

What was the performance of the multiprocessor system?

For the PDP-11/74, configured with four processors and all of them running, about three times that of the 11/70.  (Not much competition for an 11/780, actually.) What metric was used is unknown.

Source: The Big Book of RSX Applications, Volume II, Appendix B

 

How did the multiprocessor system handle CPU crashes?

Surprisingly enough, very badly.  When one CPU crashed, all the CPUs crashed.

 

"The philosophy of the 11/74 was high availability, not high reliability.  As such, from a philosophical viewpoint, we wanted crash dumps of all the CPUs to catch software problems.

"Pragmatically speaking, continuing would be difficult.  The crashing CPU is in the kernel, owning at least $EXECL in all likelihood, and perhaps some other spin locks.  Of course, any lock it owned was owned to protect an atomic transaction, and the crash caused some decay."

"The fork list may not be intact, the Pool may not be intact, device states may be inconsistent, the context of the running task on the crashed CPU (which could be MCR or F11ACP) is lost in what may have been an atomic transaction inside the component (remember $LOCKL?), and a host of other problems may exist.  [These] will simply cascade into a mass of wreckage where a crash dump ought to be."

 

Source:  Brian S. McCarthy, DEC RSX group (July 2005)

 

Why was ASRB used for cache interlock?

Why was ASRB used for interlocking in the 11/74 instead of a custom instruction?  After all, the CPU was microcoded and there were unused opcodes.

 

"The 11/70 only had a microcode address space of 8 or 9 bits, which was pretty much full, so adding much to the instruction set was difficult.  I don't believe the microcode was changed, and if it was it was very small.  So adding an instruction was off the table (mind you, I wasn't AT the table), even though ultimately the instruction set was expanded in the J-11 to have TSTSET.  Bear in mind that inventing a new instruction would have meant on its own that an mP kernel would have to conditionalize locks at run time or not run on an 11/70.

"ASRB was really easy because it already had the right logic.

"The ASRB instruction does, of course a read-modify right as follows:

r <- (address)
shift r, carry
(address) <- r

"The MKA11 memory controller had (and I think the MK11 originally had) an exchange cycle.  So this was used:

r <- 0
; The exchange cycle is interlocked.
r <-> (address)
shift r, carry
; If (address) was 0 or 1 the result (0) was atomically written above.
if (r != 0), (address) <- r

"Granted that you might have been able to do the same thing with the INCB or DECB instructions.  However, they would have yielded the result in the Z-bit instead of the C-bit, which wasn't very 'RSX‑ish', for good reason.  Consider the following code sequence:

INC   locktries    ; Count times we've passed this way
ASRB  lock         ; Lock the lock
ADC   uncontested  ; Count times we got the lock first try
BCC   locked       ; Locked already, do something else

"This would require branches to maintain the counts if the lock return was the Z-bit.  (There's another problem in the example, left as an exercise for the reader).

"Also, ASRB was known to be very infrequently used, which made it the best candidate.

"So, primarily, a new instruction was out of scope for the 11/74 project, and ASRB was viewed as having the least impact."

 

Source:  Brian S. McCarthy, DEC RSX group (February 2014)
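The quoted exchange-cycle pseudocode can be paraphrased in C11 atomics. This is an editorial sketch, not DEC code; the names `asrb_lock` and `asrb_unlock` are invented, and `atomic_exchange` stands in for the MKA11 interlocked exchange cycle:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Lock convention from the quote: the byte holds 1 when free, 0 when held.
 * Returns the old low bit, i.e. the PDP-11 C-bit: 1 = we got the lock. */
static bool asrb_lock(_Atomic unsigned char *lock)
{
    /* "r <-> (address)": the interlocked exchange writes 0 and reads old. */
    unsigned char old = atomic_exchange(lock, 0);
    unsigned char shifted = (unsigned char)(old >> 1);
    /* "if (r != 0), (address) <- r": for old values > 1, the shifted result
     * is written back in a second, non-interlocked cycle.  For a lock byte
     * (old is 0 or 1) this never happens; the 0 was already written above. */
    if (shifted != 0)
        atomic_store(lock, shifted);
    return old & 1;   /* carry set = lock was free and is now ours */
}

static void asrb_unlock(_Atomic unsigned char *lock)
{
    atomic_store(lock, 1);   /* mark the lock free again */
}
```

With this convention the assembly example reads naturally: `ADC uncontested` adds the carry, counting first-try acquisitions, and `BCC locked` branches when the carry is clear, i.e. when another CPU already held the lock.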

 

How many multiprocessors were built?

Only six.  Nowhere near as many as people outside of Digital suspected.

The RSX implementers at Spit Brook (ZKO) in Nashua, NH had a four processor 11/74 with the DECnet names CASTOR and POLLUX, depending on whether it was configured as a single quad or two dual CPUs.

 

"The only other systems I know of that ever existed were:

"1.  The quad prototype in Tewksbury.  The front panels said 11/70mp, not 11/74.  The prototype was neat in that it had a fault insertion panel on the back with about 20 toggle switches.  With these you could: Disable one line of the MASSBUS data paths; disable the IIST, and my favorite, the most nefarious inserted fault, disable cache bypass on one of the CPUs.  It was later replaced by the quad in Spit Brook.

"2.  The hardware group had a dual processor.

"3.  The DecNet group had what I think was a dual, but it may have been a quad.

"4.  The performance lab in Merrimack had a quad.

"5.  The First Customer Ship system, produced for first customer ship to GTE in Lyle Ohio.  It was a dual as I recall.

"There were other parts, but I think those 6 systems were the only ones ever booted."

 

Source:  Brian S. McCarthy, DEC RSX group (July 2011)

 

How was the multiprocessor system bootstrapped?

Changes were required to the M9312 boot ROM.  This was reportedly the hardest part of the project to figure out.  In those days boot ROMs were very small, and it was difficult to figure out how to get a CPU up from a completely unknown state.

What was done used the interprocessor interrupt mechanism.  The IIST forced a power failure on the CPU coming on line.  The boot ROM then enabled interrupts on the IIST, created a very small stack, and looped for about six seconds.

During that time, the other CPUs broadcast an interrupt to it, which got it out of the boot ROM, into the Executive, and things went from there.

A result of this was that only one manual boot was needed to get the system up, and the rest was achieved with reconfiguration commands, e.g. "CON ONLINE CPC".

The BOOt and SAVe components of M-Plus were modified so that they didn't have to run on a particular CPU, and didn't know anything about which console was which.

Source: The Big Book of RSX Applications, Volume II, Appendix B

 

Was a Q-bus version ever built?

Yes.  People report having seen it on tours through Spit Brook.

Officially, DEC never worked on anything.  They did, however ...

 

"... look into the feasibility of building a Q-bus multiprocessor using modified KDJ11-B CPUs.

"Of course, we wouldn't comment if we had built a prototype.  If we were going to do that, what it would probably require is modifying the CPU board and adding an external arbiter board that replaces the on-board arbiter in the KDJ11-B."

 

Source:  Brian S. McCarthy, DEC RSX group

 

At a U.S. DECUS RSX SIG session about the 11/74, a question was asked about how many man-years would be required to generate multiprocessor RSX for a Q-bus system.  The response was, "It was about 10 hours."

Source:  The Big Book of RSX Applications, Volume II, Appendix B

 

"The 83 MP system was completed and did in fact boot M+.  A dual processor system was not significantly faster than a single due to Q-bus contention, so we never went past there.

"As to the whereabouts of the hardware, we'll need Leonard Nimoy or Geraldo Rivera to unravel that mystery. I believe that the modified CPUs were in fact given away as one of the prizes at the PDP-11 trivia game in, maybe Cincinnati?"

 

Source:  Brian S. McCarthy, DEC RSX group

 

I don't believe that such a thing ever existed.

Pictures of the systems exist, have been shown at U.S. DECUS Symposia, and many people saw the RSX development group's system, CASTOR, "in the flesh" at Spit Brook (ZKO).

Bruce Mitchell (editor emeritus of the Multi-Tasker, U.S. DECUS RSX SIG) was given special permission to photograph the 11/74 during a SIG "Woods" meeting, and has a PDP-11/74 front panel bezel and various other 11/74 paraphernalia given to him as a souvenir.

As late as 1986, there is e-mail to show that the DECnet support group at Colorado Springs conducted a DEC-wide search for 11/74 CPUs to build a multiprocessor system of their own.

If it was a DEC plot to confuse the user community, it was overwhelmingly successful.  Many people have hallucinated a 10-ton, 12 by 18 foot quad 11/70 in the PDP-11 sky-blue "corporate cabinet" with MKA11 shared memory control panels, and Brian McCarthy standing next to it.   Now you're hallucinating it too.*

 

What happened to the RSX implementation group's system?

The RSX implementation group's PDP-11/74 was located at ZKO, the DEC software development facility in the Spit Brook woods of Nashua, New Hampshire. It had the DECnet name CASTOR.

After it was decommissioned at ZKO, it went to CXO (Colorado Springs).  The DECnet group recommissioned it as PHEANX, which was the only unused spelling of "phoenix" left on the corporate DECnet.

 

"CASTOR, after we decommissioned it at ZKO, went out to CXO to Dave Carroll.  [....]  I have no idea where either it or Dave are at this point."


 

And nobody else does either.  Not CASTOR, nor any of the other 11/70mP CPUs that were built and eventually absorbed back into the company as single-CPU 11/70s.

Source:  Brian S. McCarthy, DEC RSX group [March 2002]

 

What was the 11/68?

The 11/68 was, according to rumors even less substantial than those about the 11/74, a proposed multiprocessor, reportedly loosely based on the PDP-11/60 architecture.  It would not have been simply an 11/60 with different microcode.  Little is known about it outside the few people at DEC who were either involved with or had contact with the project.

 

"If you recall, the 11/74 is a new cache controller and a change to the ASRB instruction away from the 11/70.  The 11/68 (which was either Bluefish or Dolphin, I forget which), was a processor that was actually designed for mp.  The most significant feature it would have had was cache coherency across CPUs, eliminating the need for cache flushes and bypasses in the kernel.  It also sported user writable microcode àla the 11/60 (hence the name), but with the improvement that the floating point processor would have been addressable easily in microcode.

"I don't know if there was ever a functional 11/68.  That would have been awesome."

 

Source:  Brian S. McCarthy, DEC RSX group (July 2011)

 

How did XXDP run on one CPU without running on all CPUs?

If a single CPU or a peripheral on a single CPU failed, how could the XXDP diagnostics be run?  Loading XXDP into the shared memory would force all the CPUs to run XXDP at the next context switch (if they were remarkably lucky) or, much more likely, crash with unpredictable results.

 

"The MKA-11 memory controller allowed each CPU to see or not see each of the memory boxes at a settable address.

"So one could offline a CPU and a memory box and configure them as a separate CPU to run diagnostics, or operating systems for that matter.  (The IIST had two separate busses, so a quad could be configured as two duals.)

"It was also possible to configure a memory box at the top of memory in the mP configuration and at 0 in the standalone CPU.  This allowed some mechanism that escapes me now* to load XXDP from the M system into the memory box.  It could then be started from the front panel of the standalone CPU."

 

Source:  Brian S. McCarthy, DEC RSX group (May 2015)

 

*  Computer Special Systems, which built small-quantity and special-purpose hardware.

*  It is not obvious in this picture, but 11/74s had a knob to turn the system power on instead of the usual switch panel keylock.

*  The M-Plus diagnostics loader, which I have seen and heard discussed but I have forgotten the location and the details.  It was probably at the Symposium session where Brian discussed the mP (transcript at this link).  – BRM

 

mko...@gmail.com

unread,
Feb 13, 2022, 10:56:57 PM2/13/22
to [PiDP-11]
Just found this group about this exciting machine (and the likewise exciting software support in RSX-11M-Plus). Way back in the golden age of DEC, I spent some time working on CASTOR (the RSX development 11/74) while on visits to Spitbrook from Germany. I also got a grand tour from Brian McCarthy. I could swear that when I got the tour the machine was still in the traditional black and magenta cabinets (maybe it was at that time the prototype brought over from Tewksbury; I remember it had fault-insertion panels). It was set up in two rows of cabinets, back to back, with just enough space to work on the back of the cabinets. The cabinets were really full and the doors always open to let the hardware breathe. Also, it was the ONLY machine in the whole Spitbrook facility standing on a raised floor, to accommodate the Unibus cables between the two cabinet rows. I also remember that some floor tiles had their edges converted into a kind of ramp, just to save some distance for the really maxed-out inter-row Unibus cables.

As Brian told me then, the machine was originally designed for the Bell Telephone company to host electronic switching. The multiprocessor was necessary to provide the required level of fault tolerance. However, the machine was at the edge of what hardware could do at that time. In fact, at least CASTOR was, in its original configuration (prototype?), very sensitive. The deal with Bell fell through because of maintenance difficulties, therefore only a few machines were built. When CASTOR had a hardware problem, only a few Field Service technicians (even in Spitbrook!) would respond to a fault, and the technicians would not even dare to touch the machine if Brian McCarthy was not standing next to them. Part of the instability was also software-borne in those days, particularly around the Unibus switches and dual-ported disks, at least in early versions of RSX-11M-Plus. Brian tried to demonstrate a dual-ported disk failover to me, which promptly killed the machine (4x XDT> ). His only comment was "Now you can see what happens if she comes up". Given the time, it was an extraordinary machine running extraordinary software. The early VAX did not have symmetric multiprocessing; in fact, it took many years for that. Brian McCarthy was for some time on loan to VAX and VMS engineering to communicate his symmetric multiprocessor experience.

Sadly, neither Brian S. McCarthy nor the (real) PDP-11/74 are with us anymore.

-Manfred

Garry A Lockyer

unread,
Feb 14, 2022, 12:42:21 AM2/14/22
to mko...@gmail.com, [PiDP-11]
I worked with a little more than 1/2 of an 11/74 in 1981.  An 11/74 had been sold to Alberta Government Telephones (AGT).  Not sure what happened to the other 2 CPUs but Western Canada Field Services ended up with 2 CPUs and the memory subsystem.  I used one machine to develop the FIRST Contract Management system and the other to teach 11/70 troubleshooting.  I eventually split the system into 2 separate 11/70 systems so that we could move one to a new office while the other kept on running FIRST.

Everything got migrated to VAX/VMS a couple of years later.

Regards,

Garry A. Lockyer



Johnny Billquist

unread,
Feb 22, 2022, 2:00:05 PM2/22/22
to pid...@googlegroups.com
And as I think I've mentioned elsewhere, CASTOR did end up in CXO, under
the care of Dave Carroll as PHEANX. It was running until about 2000 or
2001, when it developed hardware problems. It sat that way for a while,
but I believe it was eventually dismantled, and I don't know what
happened to the bits after that.

Dave did take a whole bunch of photographs of the machine around the
time, which were available online on Picasa. However, Picasa has since
shut down. A few pictures have surfaced elsewhere, by people who made
copies. It would be cool if someone had them all (I didn't think about
making copies at the time, and I haven't heard from Dave in a while now
either.)

And yes, Brian left us a few years ago as well. :-(

Johnny
>> disk, one memory, one CPU and a console terminal was capable of
>> running the operating system.
>>
>> Each of the CPUs had independent consoles.
>>
>> There were also other elements such as the interprocessor
>> interrupt mechanism.  A mechanism is needed to get the attention
>> of another CPU when scheduling.  Those were done through the
>> Interprocessor Interrupt and Sanity Timer (IIST), a slightly
>> modified CSS^*
>> <http://www.miim.com/faq/hardware/multipro.shtml#sub2> product
>> called the DIP11.
>>
>> The CPUs had separate I/O buses.  There were some peripherals on
>> one or another I/O bus.  There were also bus switches between the
>> various buses so that peripherals could be moved around.
>>
>> Other than "who handles the clock interrupt," since there can be
>> only one clock updating the calendar at a time, there were no
>> distinctions between the CPUs.  All were scheduled as resources,
>> and the system was truly symmetric.
>>
>> Source:  The Big Book of RSX Applications, Volume II, Appendix B
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> What was the software configuration of the multiprocessor 11/70?
>>
>> There was one single copy of the RSX-11M-Plus Executive running in
>> the shared memory at a time.
>>
>> The major software work was distribution of I/O.  RSX-11M-Plus was
>> modified to have a field called the UNIBUS Run Mask (URM) in each
>> controller data structure.  When a driver needs to execute on the
>> device it forks to get to the correct processor.
>>
>> There was a need for hardware reconfiguration under software
>> control.  This is where CON and HRC came from.  Support for
>> multiport disks was added.  A number of these features are also
>> available in single CPU systems, and are useful.
>>
>> Support for switched buses was added.  This gave the ability to
>> link and unlink DT07 bus switches and access peripherals on the
>> shared portion of the bus.
>>
>> Shadow recording was added and this is also useful in the
>> single-CPU environment.  It was there to duplicate data, and keep
>> it duplicated while the application ran, so that in the event of a
>> catastrophic disk failure, (1) the application could continue to
>> run, and (2) there was still a good copy of the data.
>>
>> Source:  The Big Book of RSX Applications, Volume II, Appendix B
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> What was the performance of the multiprocessor system?
>>
>> For the PDP-11/74, configured with four processors and all of them
>> running, about three times that of the 11/70.  (Not much
>> competition for an 11/780, actually.) What metric was used is unknown.
>>
>> Source: The Big Book of RSX Applications, Volume II, Appendix B
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> How did the multiprocessor system handle CPU crashes?
>>
>> Surprisingly enough, very badly.  When one CPU crashed, all the
>> CPUs crashed.
>>
>>
>> "The philosophy of the 11/74 was high availability, not high
>> reliability.  As such, from a philosophical viewpoint, we wanted
>> crash dumps of all the CPUs to catch software problems.
>>
>> "Pragmatically speaking, continuing would be /difficult/.  The
>> crashing CPU is in the kernel, owning at least $EXECL in all
>> likelihood, and perhaps some other spin locks.  Of course, any
>> lock it owned was owned to protect an atomic transaction, and the
>> crash caused some decay."
>>
>> "The fork list may not be intact, the Pool may not be intact,
>> device states may be inconsistent, the context of the running task
>> on the crashed CPU (which could be MCR or F11ACP) is lost in what
>> may have been an atomic transaction inside the component (remember
>> $LOCKL?), and a host of other problems may exist.  [These] will
>> simply cascade into a mass of wreckage where a crash dump ought to
>> be."
>>
>>
>> Source:  Brian S. McCarthy, DEC RSX group (July 2005)
>>
>>
>> ------------------------------------------------------------------------
>> ------------------------------------------------------------------------
>>
>>
>> How many multiprocessors were built?
>>
>> Only six.  Nowhere near as many as people outside of Digital
>> suspected.
>>
>> The RSX implementers at Spit Brook (ZKO) in Nashua, NH had a four
>> processor 11/74 with the DECnet names CASTOR and POLLUX, depending
>> on whether it was configured as a single quad or two dual CPUs.
>>
>>
>> "The only other systems I know of that ever existed were:
>>
>> "1.  The quad prototype in Tewksbury.  The front panels said
>> 11/70mp, not 11/74.  The prototype was neat in that it had a fault
>> insertion panel on the back with about 20 toggle switches.  With
>> these you could: Disable one line of the MASSBUS data paths;
>> disable the IIST, and my favorite, the most nefarious inserted
>> fault, disable cache bypass on one of the CPUs.  It was later
>> replaced by the quad in Spit Brook.
>>
>> "2.  The hardware group had a dual processor.
>>
>> "3.  The DECnet group had what I think was a dual, but it may have
>> been a quad.
>>
>> "4.  The performance lab in Merrimack had a quad.
>>
>> "5.  The First Customer Ship system, produced for first customer
>> ship to GTE in Lyle, Ohio.  It was a dual as I recall.
>>
>> "There were other parts, but I think those 6 systems were the only
>> ones ever booted."
>>
>>
>> Source:  Brian S. McCarthy, DEC RSX group (July 2011)
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> How was the multiprocessor system bootstrapped?
>>
>> Changes were required to the M9312 boot ROM.  This was reportedly
>> the hardest part of the project to figure out.  In those days boot
>> ROMs were very small, and it was difficult to figure out how to
>> get a CPU up from a completely unknown state.
>>
>> What was done used the interprocessor interrupt mechanism.  The
>> IIST forced a power failure on the CPU coming on line.  The boot
>> ROM then enabled interrupts on the IIST, created a very small
>> stack, and looped for about six seconds.
>>
>> During that time, the other CPUs broadcast an interrupt to it,
>> which got it out of the boot ROM, into the Executive, and things
>> went from there.
>>
>> A result of this was that only one manual boot was needed to get
>> the system up, and the rest was achieved with reconfiguration
>> commands, /e.g./ "CON ONLINE CPC".
>>
>> The BOOt and SAVe components of M-Plus were modified so that they
>> didn't have to run on a particular CPU, and didn't know anything
>> about which console was which.
>>
>> Source: The Big Book of RSX Applications, Volume II, Appendix B
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> Was a Q-bus version ever built?
>>
>> Yes.  People report having seen it on tours through Spit Brook.
>>
>> Officially, DEC never worked on any such thing.  They did, however ...
>>
>>
>> "... look into the feasibility of building a Q-bus multiprocessor
>> using modified KDJ11-B CPUs.
>>
>> "Of course, we wouldn't comment if we had built a prototype.  If
>> we were going to do that, what it would probably require is
>> modifying the CPU board and adding an external arbiter board that
>> replaces the on-board arbiter in the KDJ11-B."
>>
>>
>> Source:  Brian S. McCarthy, DEC RSX group
>>
>>
>> At a U.S. DECUS RSX SIG session about the 11/74, a question was
>> asked about how many man-years would be required to generate
>> multiprocessor RSX for a Q-bus system.  The response was, "It was
>> about 10 hours."
>>
>> Source:  The Big Book of RSX Applications, Volume II, Appendix B
>>
>>
>> "The 83 MP system was completed and did in fact boot M+.  A dual
>> processor system was not significantly faster than a single due to
>> Q-bus contention, so we never went past there.
>>
>> "As to the whereabouts of the hardware, we'll need Leonard Nimoy
>> or Geraldo Rivera to unravel that mystery. I believe that the
>> modified CPUs were in fact given away as one of the prizes at the
>> PDP-11 trivia game in, maybe Cincinnati?"
>>
>>
>> Source:  Brian S. McCarthy, DEC RSX group
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> I don't believe that such a thing ever existed.
>>
>> Pictures of the systems exist, have been shown at U.S. DECUS
>> Symposia, and many people saw the RSX development group's system,
>> CASTOR, "in the flesh" at Spit Brook (ZKO).
>>
>> Bruce Mitchell (editor emeritus of the Multi-Tasker, U.S. DECUS
>> RSX SIG) was given special permission to photograph the 11/74
>> during a SIG "Woods" meeting, and has a PDP-11/74 front panel
>> bezel and various other 11/74 paraphernalia given to him as a
>> souvenir.
>>
>> As late as 1986, there is e-mail to show that the DECnet support
>> group at Colorado Springs conducted a DEC-wide search for 11/74
>> CPUs to build a multiprocessor system of their own.
>>
>> If it was a DEC plot to confuse the user community, it was
>> overwhelmingly successful.  Many people have hallucinated a
>> 10-ton, 12 by 18 foot quad 11/70 in the PDP-11 sky-blue "corporate
>> cabinet" with MKA11 shared memory control panels, and Brian
>> McCarthy standing next to it.   Now you're hallucinating it too.^*
>> <http://www.miim.com/faq/hardware/multipro.shtml#sub3>
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> What happened to the RSX implementation group's system?
>>
>> The RSX implementation group's PDP-11/74 was located at ZKO, the
>> DEC software development facility in the Spit Brook woods of
>> Nashua, New Hampshire. It had the DECnet name CASTOR.
>>
>> After it was decommissioned at ZKO, it went to CXO (Colorado
>> Springs).  The DECnet group recommissioned it as PHEANX, which was
>> the only unused spelling of "phoenix" left on the corporate DECnet.
>>
>>
>> "CASTOR, after we decommissioned it at ZKO, went out to CXO to
>> Dave Carroll.  [....]  I have no idea where either it or Dave are
>> at this point."
>>
>>
>>
>> And nobody else does either.  Not CASTOR, nor any of the other
>> 11/70mP CPUs that were built and eventually absorbed back into the
>> company as single-CPU 11/70s.
>>
>> Source:  Brian S. McCarthy, DEC RSX group [March 2002]
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> What was the 11/68?
>>
>> The 11/68 was, according to rumors even less substantial than
>> those about the 11/74, a proposed multiprocessor — reportedly —
>> loosely based on the PDP-11/60 architecture.  It would /not/ have
>> been simply an 11/60 with different microcode.  Little is known
>> about it outside the few people at DEC who were either involved
>> with or had contact with the project.
>>
>>
>> "If you recall, the 11/74 is a new cache controller and a change
>> to the ASRB instruction away from the 11/70.  The 11/68 (which was
>> either Bluefish or Dolphin, I forget which), was a processor that
>> was actually designed for mp.  The most significant feature it
>> would have had was cache coherency across CPUs, eliminating the
>> need for cache flushes and bypasses in the kernel.  It also
>> sported user-writable microcode /à la/ the 11/60 (hence the name),
>> but with the improvement that the floating point processor would
>> have been addressable easily in microcode.
>>
>> "I don't know if there was ever a functional 11/68.  That would
>> have been awesome."
>>
>>
>> Source:  Brian S. McCarthy, DEC RSX group (July 2011)
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> How did XXDP run on one CPU without running on all CPUs?
>>
>> If a single CPU or a peripheral on a single CPU failed, how could
>> the XXDP diagnostics be run?  Loading XXDP into the shared memory
>> would force all the CPUs to run XXDP at the next context switch (if
>> they were remarkably lucky) or, much more likely, crash with
>> unpredictable results.
>>
>>
>> "The MKA-11 memory controller allowed each CPU to see or not see
>> each of the memory boxes at a settable address.
>>
>> "So one could offline a CPU and a memory box and configure them as
>> a separate CPU to run diagnostics, or operating systems for that
>> matter.  (The IIST had two separate busses, so a quad could be
>> configured as two duals.)
>>
>> "It was also possible to configure a memory box at the top of
>> memory in the mP configuration and at 0 in the standalone CPU.
>> This allowed some mechanism that escapes me now^*
>> <http://www.miim.com/faq/hardware/multipro.shtml#sub1> to load
>> XXDP from the M system into the memory box.  It could then be
>> started from the front panel of the standalone CPU."
>>
>>
>> Source:  Brian S. McCarthy, DEC RSX group (May 2015)
>>
>>
>> ------------------------------------------------------------------------
>>
>> ^back <http://www.miim.com/faq/hardware/multipro.shtml#sub2back>
>> Computer Special Systems, which built small-quantity and
>> special-purpose hardware.
>>
>> ^back <http://www.miim.com/faq/hardware/multipro.shtml#sub3back>
>> It is not obvious in this picture, but 11/74s had a knob to turn
>> the system power on instead of the usual switch panel keylock.
>>
>> ^back <http://www.miim.com/faq/hardware/multipro.shtml#sub1back>
>> The M-Plus diagnostics loader, which I have seen and heard
>> discussed but I have forgotten the location and the details.  It
>> was probably at the Symposium session where Brian discussed the mP
>> (transcript at this link)
>> <http://www.miim.com/documents/monographs/mpmpro.doc>.  – BRM
>>
>>
>>
>> --
>> You received this message because you are subscribed to the Google
>> Groups "[PiDP-11]" group.
>> To unsubscribe from this group and stop receiving emails from it, send
>> an email to pidp-11+u...@googlegroups.com
>> <mailto:pidp-11+u...@googlegroups.com>.
>> <https://groups.google.com/d/msgid/pidp-11/0a45dc5c-b554-4e14-9006-93e2fc650e74n%40googlegroups.com?utm_medium=email&utm_source=footer>.
>

Mark Matlock

unread,
Feb 22, 2022, 4:58:03 PM2/22/22
to [PiDP-11]
Johnny,
    I have a copy of the PDP-11/74 photos that I downloaded back in 2015 from the
site you mentioned back then. I just zipped them and all the other 11/74 info that
I've collected and put it in a .zip file.

You can download it from:

http://www.rsx11m.com/PDP-1174.zip
Best,
Mark

mko...@gmail.com

unread,
Feb 22, 2022, 10:36:37 PM2/22/22
to [PiDP-11]
Johnny and Mark,

I need to dig on my back-up disk, I think I saved a number of 11/74 pictures too.

I was also trying hard to remember that day I got my first tour from Brian. I think it was 1983, or 1984 at most, and most likely it was the 11/70mp prototype. I really remember it was in the black and magenta cabinets, two rows of four cabinets each, facing back-to-back. The CPUs were at the corners, with memory boxes in the two middle cabinets. All memory controls were at the back, as were the debugging panels.

Maybe it was later replaced with a “production” version in the white and blue cabinets, and the final processor with CIS (the 11/70mp was a patched 11/70 without CIS). Unfortunately no pictures exist from those early days.

Manfred

Johnny Billquist

unread,
Feb 23, 2022, 4:47:43 AM2/23/22
to pid...@googlegroups.com
I don't think the blue/white 11/74 that went to CXO was CIS-capable. The
machines I know of were still just modified 11/70s with patched microcode,
I believe. I know that a prototype for the 11/74 with CIS capability was
worked on, but I don't know if any got beyond the developers of the
hardware.
There is also no trace of CIS usage in RSX for the 11/74.

But if you look at the pictures from Mark (originally from Dave), you
can see that some machines are marked 11/70MP and some 11/74MP. But all
in corporate cabinets. And I had forgotten the machine was named DAEMON.

Johnny

> On Tuesday, February 22, 2022 at 1:00:05 PM UTC-6 b...@softjar.se wrote:
>
> And as I think I've mentioned elsewhere, CASTOR did end up in
> CXO, under
> the care of Dave Carroll as PHEANX. It was running until about
> 2000 or
> 2001, when it developed hardware problems. It sat that way for a
> while,
> but I believe it was eventually dismantled, and I don't know what
> happened to the bits after that.
>
> Dave did take a whole bunch of photographs of the machine around
> the
> time, which were available online on Picasa. However, Picasa has
> since
> shut down. A few pictures have surfaced elsewhere, by people who
> made
> copies. It would be cool if someone had them all (I didn't think
> about
> making copies at the time, and I haven't heard from Dave in a
> while now
> either.)
>
> And yes, Brian left us a few years ago as well. :-(
>
> Johnny
>

Walter F.J. Müller

unread,
Jun 8, 2022, 4:44:16 AM6/8/22
to [PiDP-11]
Mark Matlock wrote:
> One part that I thought about afterward was how the IIST system might be implemented in addition
> to the multiported memory. If there is no cache involved, the ASRB trick would not be needed.
> I could see that some FPGA logic could be utilized, or possibly the PRUs of a BeagleBone, similar to
> what Jorge Hoppe did on his UniBone board, where the UniBus logic is handled by the two PRUs.

It is always fun to talk about the 11/74.

One needs an IIST for each processor, and they are connected to each other via serial links.
The IISTs are also connected to the CPU start logic.
And one needs, of course, a new boot PROM.
Doing that with an existing J11 may be possible, but it is certainly a major hardware project.

And on the ASRB trick:
- it is used to implement a lock in a single instruction; it does an atomic 'test and clear'
- the J11 has a dedicated instruction, but the existing 11/74 RSX code doesn't know about it
- so some software modifications are also needed; it's a kernel-hacking project as well

Tom Szolyga

unread,
Jun 8, 2022, 2:25:39 PM6/8/22
to [PiDP-11]
The Computer History Museum in Mountain View, CA has 11/74 images and physical objects in its collection.  Search the catalog for "11/74".
For example, catalog number 102688151 is an 11/74 physical object.

Johnny Billquist

unread,
Jun 11, 2022, 9:37:05 PM6/11/22
to pid...@googlegroups.com
It is indeed always fun to talk about the 11/74.

The ASRB "trick" is (I assume) about the fact that this instruction
always bypasses the cache. If you don't have a cache, that trick is
certainly not needed. Just as you don't need to implement the cache
bypass bit in the PDR (the bit still needs to exist, and be possible to
control, but it doesn't have to do anything), nor the general
cache-disable control.

However, the ASRB still needs to be atomic, in relation to other
processors, in its transactions to memory (this might not be trivial -
you really need to understand the system at a low level for this). As
Walter observes, this is because it is used as a test-and-set
instruction: it both tests and modifies the destination. This is used
for spin locks, and it is absolutely essential that this work correctly
for mP RSX to have any chance of working, since ASRB is the instruction
picked for this.

On a Unibus, these instructions result in a DATIP cycle, followed by a
DATO. And for mP, the memory must not allow any other memory access to
that address between these two transactions. If the memory doesn't
implement that locking, then you are in trouble. I think the Qbus has
the same transactions here, but I don't remember for sure. And whether
most memories ensure the lock behavior, I have no idea. The 11/74 has a
separate memory bus for each CPU, and they all run through the MKA11
memory box, which handles this arbitration.

And yes, the IIST is also needed for mP, since this is the device that
enables one CPU to interrupt another. That is used when there is work
that needs to be done, but which cannot be completed on the current CPU.
Typically that would be I/O, but it might be some other things as well.

Exactly how the IIST is implemented is not so important. Yes, the
original IISTs were connected by a serial link. But you can do it any
way you want, as long as it has the same functionality.

Speaking of the boot logic, what the IIST does, if I remember right, is
just fake a power cycle in the end. But then the boot ROMs are mP-aware,
and throw the CPU into a spin loop waiting for some handshaking from
another CPU, which will bring it online. I can locate the details, if
needed. But I think parts are already on bitsavers, and parts are in the
RSX code, which anyone can read.

Johnny