1. The B6700 offered DataComm for communications. Was this an option
or mandatory?
Can anyone remember any other differences?
Was the B6700 DataComm option introduced because they wanted more
expansion capacity for communication lines, or to avoid having to
support old communications hardware? I assume the original B5500
communications hardware was rapidly falling behind early-1970s
requirements in terms of speed and protocol compatibility.
---------- Forwarded message ----------
From: Mark Morgan Lloyd <markM...@telemetry.co.uk>
Date: Sun, Jan 12, 2014 at 6:42 PM
Subject: Re: [retro-B5500] B5500 versus B5700 model differences
B5500/B5700 Electronic Information Processing Systems Operation Manual
mentions multiple systems and File Protect Memory (B5376?).
Did the B5700 allow multiple systems to be clustered around a common disk?
Sorry for the delayed reply, but I had been out of the country
on business for the past two weeks, and am just now catching up
at home.
I think the "with different options" had some limits, namely
that the differences would not affect how the MCP worked with
disk files. In particular, all of the MCPs would need to be
compiled with SHAREDISK=TRUE, the same setting for DFX, and
possibly the same setting for PACKETS. Since the systems were
linked only through disk I/Os and the FPM, anything that did not
affect the disk structures or shared protocols could work
independently.
I am sure this was true of both B5500 and B5700 systems.
Remember, the B5700 was a marketing designation, not an
engineering one. The B5700 had the same processor, same memory,
same I/O Control Unit, same Central Control. The B5700 supported
the new DCP and the AuxMem storage, but there was no reason
those could not be installed on a box with a B5500 nameplate
instead.
There was no orderly shutdown process for the B5500 -- or at
least I don't know of one. You simply pressed HALT.
The documentation for the LNDK command I've seen indicates it
was used to reset the creation timestamps for all files in the
disk directory. This was probably done to reset the file
archiving/roll-out mechanism for a site. In any case, there was
nothing to make the disks consistent with -- there was
only one set of disks for the systems in a SHAREDISK
configuration -- all systems had access to and used the same set
of files.
You may want to look at the discussion of the SHAREDISK system
in the 1972 MCP Reference Manual, AA119117, starting on page
345.
Please pass to list if appropriate.
The description of the CM command in 1024916 C-46 appears to state that the MCPs had to be of the same level but could be compiled with different options.
The interesting question is whether this was a B5700-only selling point, or if it was also (at least technically) possible for a B5500.
I'm currently working thoroughly through the MCP commands, looking for a command that could be used for a forced shutdown if the emulator is notified of, e.g., an impending power failure. The best I can find so far is LNDK, which looks as though it forces the disks into a consistent state.
In the early 1970s, I was told by one of the engineers who designed network adapter cards for the DCP that it was originally conceived to be a programmable I/O controller for the B6500, but that it turned out to have nowhere near the performance required for that job. So it was repurposed as a communications front-end processor. At least on the B6x00/7x00 systems, it could handle a large number of low-speed circuits quite well, but only a limited number of high-speed ones. My understanding is that, using the standard protocols, its aggregate throughput maxed out at 50-60 kbps. As far as I know, the only I/O capability it had was character-level register transfers to and from the adapter clusters.

I assume the cross-compiler you refer to is Network Definition Language, NDL. It had a few control structures that were similar to Algol, but it was not Algol. You could not, for example, declare variables. NDL was a special-purpose language designed specifically to program character-by-character line protocols and to describe the characteristics and connectivity of network circuits and devices.
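As a rough sanity check on those figures (my own back-of-envelope arithmetic; only the 50-60 kbps ceiling and the 110/9600 baud line speeds come from the post, and treating baud as bits per second is a simplifying assumption for these simple serial lines):

```python
# Back-of-envelope check of the DCP throughput figures quoted above.
AGGREGATE_CEILING_BPS = 50_000  # lower end of the quoted 50-60 kbps range

def max_busy_lines(line_speed_bps, ceiling_bps=AGGREGATE_CEILING_BPS):
    """How many fully-busy lines of a given speed fit under the ceiling."""
    return ceiling_bps // line_speed_bps

print(max_busy_lines(110))    # hundreds of low-speed circuits fit
print(max_busy_lines(9600))   # only a handful of high-speed circuits fit
```

This is consistent with the description: the DCP could carry many low-speed circuits, but a few 9600-baud lines running flat out would saturate its aggregate throughput.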
--
You received this message because you are subscribed to the Google Groups "retro-B5500" group.
To unsubscribe from this group and stop receiving emails from it, send an email to retro-b5500...@googlegroups.com.
Visit this group at https://groups.google.com/group/retro-b5500.
To view this discussion on the web visit https://groups.google.com/d/msgid/retro-b5500/34f6289d-c68d-4536-993f-b5326ce8d530%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
When I went to work in the Pasadena plant in 1969, the first DCP was not quite operational. It was designed specifically for the B6500, as the B7500 design was really never started. When the B6500 had "design problems," they did minimal production of it until the fixes, which were substantial, were incorporated to make the B6700. It was about the time they decided to make the B6700 that they decided to also build the B7700. They decided to build the B5700 (essentially a cleaned-up and repainted B5500 with the DCP and AuxMemory added) in early 1969, before I got there.

I don't know what things were options and what was in the basic package; that frequently changed in any case. Usually, the general practice was to ship a minimal functioning system with everything else optional. I do know that there was one network adapter card that handled 16 lines. The DCP handled 16 clusters, and each cluster could handle 16 adapters. That gave us 256 lines, many of which, in banks at least, were multi-drop lines with 4-10 TC500s, for example. We ran tests with heavily configured systems, and even in those cases the processor in the DCP was idle most of the time.

The interesting thing about the one network adapter card was that it could handle 110 baud to 9600 baud under program control, but marketing sold it as 3 or more separate models, the faster models obviously costing substantially more money. Since the DCP could not detect which model the customer had purchased -- they were all the same anyway -- we just ran the line at whatever speed it ran. NDL actually had a speed setting per line in the configuration, but the system software just ignored it since it could not enforce it in any case. It took a while, but customers eventually got wise to the rip-off.
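The line-capacity arithmetic described above can be sketched as follows (cluster and adapter counts, and the 4-10 TC500s per multi-drop line, are as stated in the post; the totals are just multiplication):

```python
# DCP line capacity as described above: 16 clusters, 16 adapters each.
CLUSTERS = 16
ADAPTERS_PER_CLUSTER = 16
lines = CLUSTERS * ADAPTERS_PER_CLUSTER
print(lines)  # 256 lines

# Multi-drop lines carried roughly 4-10 TC500 terminals each, so a fully
# configured system could serve on the order of:
print(lines * 4, "to", lines * 10, "terminals")  # 1024 to 2560
```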
<snip>