B5500 versus B5700 model differences


Nigel Williams

Jan 11, 2014, 10:27:26 PM
to retro-b5500
As per Sid's helpful note about B5700 differences, I would like to
capture in this new thread what we know so far about the model
differences between the B5500 and B5700.

The B5700 was announced on 6-Oct-1970, mainly as a marketing name change, when Burroughs introduced the new 700-series of systems that included the B6700 and B7700, renamed from the B6500 and B7500.

Pictures of the B5700 are towards the bottom of this page:

http://www.retrocomputingtasmania.com/home/projects/burroughs-b5500/b5000_b5500_gallery


1. B6700 DataComm was an option for communications. Was this an option
or mandatory?

2. Option of AuxMem (Auxiliary Memory) using B6700 memory modules to
reduce the MCP footprint and to improve system performance. The AuxMem
appeared as DRA and DRB, the device names used for the original B5000
drum-based storage. The use of the drum interface limited the storage
to 32kW each.

3. Cabinetry changes: the light/dark front-skin motif was inverted. On
the B5500, the top one-third of the cabinets was a dark panel and the
remaining two-thirds light coloured; the B5700 switched this so the top
two-thirds was lighter coloured (though not as light as on the B5500)
and the bottom one-third was dark. Badging on the CPU cabinets read
"Burroughs B5700" and was located on the Display&Distribution cabinet,
rather than on CPU A as for the Burroughs B5500. A vertical line is
also visible at the midpoint of the B5700 cabinets.


Can anyone remember any other differences?

Was the B6700 DataComm option introduced because they wanted more
expansion capacity for communication lines? Or to avoid having to
support old communications hardware? I assume the original B5500
communications hardware was rapidly falling behind early-1970s
requirements in terms of speed and protocol compatibility.



On Thu, Jan 9, 2014 at 12:54 PM, Sid McHarg <smc...@gmail.com> wrote:
> The other change for the B5700 mentioned below was the DCP for data comm.

Nigel Williams

Jan 12, 2014, 3:59:44 AM
to retro-b5500
---------- Forwarded message ----------
From: Mark Morgan Lloyd <markM...@telemetry.co.uk>
Date: Sun, Jan 12, 2014 at 6:42 PM
Subject: Re: [retro-B5500] B5500 versus B5700 model differences



B5500/B5700 Electronic Information Processing Systems Operation Manual
mentions multiple systems and File Protect Memory (B5376?).

Did the 5700 allow multiple systems to be clustered around common disk?

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]

Paul Kimpel

Jan 13, 2014, 5:32:05 AM
to retro...@googlegroups.com


On Saturday, January 11, 2014 7:27:26 PM UTC-8, Nigel Williams wrote:
<snip>


1. B6700 DataComm was an option for communications. Was this an option
or mandatory?

From an engineering perspective, it was an option. Whether it was required for marketing purposes or not, I don't know. The MCP could be compiled for either B487 or DCP communications (I don't think it would support both). If the DCP was used, it took the place of the fourth I/O Control Unit, which gave it direct access to the B5500 core memory. One interesting feature of the DCP was that it could directly address both its own (relatively small) local memory and that of the host system. It could execute its code from either memory, and could even branch between them. This was also the case for DCPs used on B6500/B6700/B7700 systems.

<snip>
 
Can anyone remember any other differences?

None that I can remember.
 

Was the B6700 DataComm option introduced because they wanted more
expansion capacity for communication lines? or to avoid having to
support old communications hardware? I assume the original B5500
communications hardware would be rapidly falling behind the early
1970s requirements in terms of speed and protocol compatibility?

I think the whole purpose of the B5700 was (a) to try to give some relief to customers who were at the limit with their B5500s and were waiting for the B6500, which was well behind schedule, and (b) to announce something (anything!) that looked like a new product at the top end of the model line, as this was about 1968, and the B5500 had been announced in 1964. Burroughs probably looked at what they had developed for the B6500 and decided that the DCP and AuxMem were the two things that would be useful additions to the B5500 and would not be too difficult to do.

As to the DCP, I think the answer is "yes" to all of Nigel's questions. The B487 was a bit of a strange device: it was somewhat difficult to use, had minimal buffering, and almost no smarts beyond some limited actions based on detecting certain characters in the data stream. There was a big tradeoff between buffer capacity per line and the number of lines it would support, ranging from 15 lines with 28 characters each to one line with 420 characters. The small buffer sizes could be made to work with simple, low-speed terminals such as teletypes, but high-speed circuits (2400-9600 bits/sec in those days) required more buffering, so the cost per connection went way up and the number of possible connections went way down at higher speeds. I know of at least one B5500 site that used circuit multiplexors to provide more connectivity, but that was expensive, too.
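The buffer configurations quoted above (15 lines of 28 characters, down to one line of 420 characters) imply a fixed pool of 420 buffer characters divided among the active lines. A back-of-the-envelope sketch of that trade-off (my own illustration, not from any Burroughs documentation):

```python
# Fixed total buffer pool implied by the B487 figures above:
# 15 lines x 28 chars = 420, and 1 line x 420 chars = 420.
POOL_CHARS = 420

def max_lines(chars_per_line: int) -> int:
    """How many lines the pool supports at a given buffer size per line."""
    return POOL_CHARS // chars_per_line

for size in (28, 105, 420):
    print(size, "chars/line ->", max_lines(size), "line(s)")
```

This makes the economics concrete: raising the per-line buffer to handle a faster circuit cuts the line count proportionally.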

A second factor was that if you needed anything other than a conversational teletype protocol, that protocol had to be handled within the B5500 host. By the late '60s, multi-point circuits were becoming more common, and those required more complex protocols such as poll-select or bi-sync. RJE (Remote Job Entry) also required more complex protocols. A lot of these protocols required short messages with rapid turnarounds, so that was a lot of high-priority overhead to put on the host. The B5500 DCMCP supported a couple of types of RJE, and at least the TSMCP eventually supported poll-select devices and some others with more complex protocols, but all of that protocol work had to be done on the host.

Thus the DCP could be a real advantage for a site that needed more connectivity, higher data rates, more complex protocol support, or all of the above. I heard from an engineer in the early '70s that the DCP had originally been developed for use as a smart I/O controller, but it couldn't handle the data rates required of I/O, so it was redirected as a smart communications controller. It did okay in that role, but it still had problems with data throughput -- its aggregate was only about 56 kbits/sec.

<snip>

Paul Kimpel

Jan 13, 2014, 5:57:26 AM
to retro...@googlegroups.com
On Sunday, January 12, 2014 12:59:44 AM UTC-8, Nigel Williams wrote:
---------- Forwarded message ----------
From: Mark Morgan Lloyd <markM...@telemetry.co.uk>
Date: Sun, Jan 12, 2014 at 6:42 PM
Subject: Re: [retro-B5500] B5500 versus B5700 model differences

B5500/B5700 Electronic Information Processing Systems Operation Manual
mentions multiple systems and File Protect Memory (B5376?).

Did the 5700 allow multiple systems to be clustered around common disk?

Yes. They were referred to as "Shared Disk" systems. Up to four B5500 systems could be connected to the FPM, which was then connected to the disk. I think they all had to run the same MCP, but each host was running its own copy. It was only the disk subsystem that was shared, although I think all systems could work off a common job schedule to help balance the load.

I never used one, so I only know what I have read about them. They appear to have required some, ah, careful administration. They also required the MCP to be compiled with certain options set.

There is a nice write-up on Shared Disk and the FPM in the following document, starting on page 345:

http://www.bitsavers.org/pdf/burroughs/B5000_5500_5700/AA119117_B5700_MCP_Ref_1972.pdf

The FPM approach continued to be developed and used with the Burroughs "Medium Systems" (B2500/3500 and successor V Series) line. To my knowledge, it was never available for the B6500/6700 and later "Large Systems" (A Series) line, where the approach was instead to have a monolithic system and grow the number of processors. Eventually some of the job-exchange features showed up in the BNA networking mechanism, and today of course most of the high-end Unisys successors to the B5500 use SAN (Storage Area Networks) for storage, but the disks are not shared between systems, at least not in the same way.

It is possible today to have multiple MCP systems share a common disk subsystem, but only one MCP host can have write capability. All of the others must access the disk read-only.

Paul Kimpel

Jan 29, 2014, 11:09:15 AM
to retro...@googlegroups.com

Sorry for the delayed reply, but I had been out of the country on business for the past two weeks, and am just now catching up at home.

I think the "with different options" had some limits, namely that the differences would not affect how the MCP worked with disk files. In particular, all of the MCPs would need to be compiled with SHAREDISK=TRUE, the same setting for DFX, and possibly the same setting for PACKETS. Since the systems were linked only through disk I/Os and the FPM, anything that did not affect the disk structures or shared protocols could work independently.

I am sure this was true of both B5500 and B5700 systems. Remember, the B5700 was a marketing designation, not an engineering one. The B5700 had the same processor, same memory, same I/O Control Unit, same Central Control. The B5700 supported the new DCP and the AuxMem storage, but there was no reason those could not be installed on a box with a B5500 nameplate instead.

There was no orderly shutdown process for the B5500 -- or at least I don't know of one. You simply pressed HALT.

The documentation for the LNDK command I've seen indicates it was used to reset the creation timestamps for all files in the disk directory. This was probably done to reset the file archiving/roll-out mechanism for a site. In any case, there was nothing to make the disks consistent with -- there was only one set of disks for the systems in a SHAREDISK configuration; all systems had access to and used the same set of files.

You may want to look at the discussion of the SHAREDISK system in the 1972 MCP Reference Manual, AA119117, starting on page 345.

Paul
 
-------- Original Message --------
Subject: Re: [retro-B5500] B5500 versus B5700 model differences
From: Mark Morgan Lloyd <markM...@telemetry.co.uk>
To: Paul Kimpel <paul....@digm.com>
Date: 1/13/2014 3:13 AM
Please pass to list if appropriate.

The description of the CM command in 1024916 C-46 appears to state that the MCPs had to be of the same level but could be compiled with different options.

The interesting question is whether this was a B5700-only selling point, or if it was also (at least technically) possible for a B5500.

I'm currently working through the MCP commands thoroughly looking for a command that can be used for a forced shutdown if the emulator is notified of e.g. an impending power fail. Best I can find so far is LNDK, which looks as though it forces disks to be consistent.

Bob McKenzie

Jan 11, 2019, 12:19:49 AM
to retro-B5500
I was project leader of the DCP for the B5700.  The B5700 was simply a stopgap because of the problems with the B6500.  Besides both hardware and software problems with the B6500, the design of the hardware was very inefficient; once they realized the problems they started work on the B5700.  The only upgrades were the DCP support and the replacement of the drum interface with B6500 memory.  The new memory was five times faster than the B5500 memory, which made no sense, but Detroit advertised it as such, to some embarrassment later.  With regard to the DCP, it had a twenty-bit address on 52-bit words; of course only 48 bits could be read from or written to the B5500.  In communications it could support up to 256 lines, any number of which could be multiplexed.  Systems were sold to the Border Patrol that supported the maximum configuration with no performance problems.  While there were default drivers, the user could program his own drivers for any device, synchronous or not, in a dedicated high-level language. It even did front-end work for CANDE, such as inserting line numbers automatically for each new line. The idea was to offload as much as we could from the B5500.  Interestingly, the only hardware change it required was a new instruction for the B5500 to interrupt the DCP to let it know it had some output.  Message sizes were unlimited, particularly for high-speed connections, although we had to limit them to 1024 words because of the limits of the B5500.

We never ran into performance problems with the DCP; it handled all the loads we could throw at it and was still idle most of the time.  I left Burroughs to get my Masters in 1971, and when I came back they assigned me to redesign the software on the DCP to handle high-speed devices for the B6700, focusing on sophisticated drivers such as handling large numbers of check reader/sorters.  We calculated we could support at least 16 of the reader-sorters with one DCP.  The reason that was an issue was all the banks Burroughs sold to and the fact that the B6700 could only handle 2 reader-sorters while the B3700 could handle 4.  The real problem was that none of the B5000, B6000, or B7000 central systems were built to handle real-time responses.  On the other hand, that was exactly what the DCP did easily.  Even on the B5500, the DCP could multi-process all 256 internal processes with ease, although in a few applications we ran out of memory at the high end.  The design of the DCP OS was small except for the device drivers.  It more closely resembled an actor-based system, with a large number of relatively small, communicating processes. By the way, I only lasted 4 or 5 months on my return to Burroughs.  Engineering had become very bureaucratic and was no longer a fun place to work. Also worth noting, the B5700s were not manufactured, but were B5500s returned, refurbished, and repainted.  The only hardware change was the hardware patch for the added B5500 instruction to interrupt the DCP.  The odd thing was that the instruction was an invalid dial instruction on the B5500, which was sensed by the DCP in the B5500 instruction register, which then set the interrupt in the DCP.

I have no idea how many B5700s were sold, but once the B6700 was running and selling I am sure they withdrew the B5700. I am not sure if the DCP was mandatory or not.  That was strictly a marketing issue, like so much of Burroughs computers.  Detroit did not understand computers and did not want to invest in development.  The original B6500 development funding was acquired indirectly from Barkleys Bank in England by selling them the computers while they were still paper models.  In fact, the first models of the B6500 went to Barkleys rather than to engineering, to fulfill the sales contract.  Barkleys ended up dropping the order because of the poor performance of the system, but by then the B6500 was built.  I still remember software engineers testing the software on B6500s on the manufacturing line, since they had few systems themselves.  I should point out that the old B5500 system 102, which was the second B5000 made and later upgraded to a B5500, was being used by both the B5500 engineering group and the B6500 group simultaneously because we had so little hardware.  The B6500 group used XAlgol instead of the regular B5500 Algol, as XAlgol was 100% compatible with B6500 Algol.  We used it for creating the DCP software (i.e., software that in turn generated DCP code) since it was easier and safer to use... no stream procedures.

Bob McKenzie

Jan 11, 2019, 12:32:18 AM
to retro-B5500
Sorry, in my last post I got the name of the bank wrong... it was Barclays Bank, not Barkleys.  The situation was still the same.  Burroughs made promises first about the B8500, then the B6500, to get the financing for the development of the B6500.  The B5500 was used in the meantime.  In the long run they ended up losing the contract to IBM.


On Saturday, January 11, 2014 at 10:27:26 PM UTC-5, Nigel Williams wrote:

Daniel Eliason

Feb 11, 2019, 9:38:53 AM
to retro-B5500
Hi Bob McKenzie and others, I've been watching this group for about a year & reading on the web about the Large Systems.  In 1976 my friend's dad took us to work after hours one day to type in & run some BASIC programs on CANDE. This was at the Goleta CA Burroughs plant, and we were using the 6700 there.  I remember the front panel on the 6700 displayed "TRAIN", I think it was "TR" above "AIN".  We did not see the flashing Burroughs "B".  I've decided to post a question because of this discussion around the DCP.  From reading the DCP manuals on BitSavers, I can see that the DCP was a 24 bit minicomputer with a small group of registers and specific I/O instructions for the I/O racks.  I've read about the special ALGOL cross-compiler used to program it from the main system. You've already answered quite a lot of what I wondered in your posts.  So here's my question: was this device originally designed from the ground up to be a DCP on the Large System series?  Or was the design adapted from some other product line, or did it have some other origin?

Paul Kimpel

Feb 11, 2019, 10:31:46 AM
to retro-B5500
In the early 1970s, I was told by one of the engineers who designed network adapter cards for the DCP that it was originally conceived to be a programmable I/O controller for the B6500, but that it turned out to have nowhere near the performance required for that job. So it was repurposed as a communications front-end processor. At least on the B6x00/7x00 systems, it could handle a large number of low-speed circuits quite well, but only a limited number of high-speed ones. My understanding is that, using the standard protocols, its aggregate throughput maxed out at 50-60kbps. As far as I know, the only I/O capability it had was character-level register transfers to and from the adapter clusters.

I assume the cross-compiler you refer to is Network Definition Language, NDL. It had a few control structures that were similar to Algol, but it was not Algol. You could not, for example, declare variables. NDL was a special purpose language designed specifically to program character-by-character line protocols and to describe the characteristics and connectivity of network circuits and devices.

Hans Vlems

Feb 11, 2019, 11:39:30 AM
to Paul Kimpel, retro-B5500
Given a typical line speed of 1200 baud, it could support about 500 terminal lines?
Hans


Bob McKenzie

Feb 12, 2019, 9:07:33 PM
to retro-B5500
When I went to work in the Pasadena plant in 1969, the first DCP was not quite operational.  It was specifically designed for the B6500, as the B7500 design was really never started.  When the B6500 had "design problems" they did minimal production of it until the fixes, which were substantial, were incorporated to make the B6700.  It was about the time they decided to make the B6700 that they decided to also build the B7700.  They decided to build the B5700 (essentially a cleaned-up and repainted B5500 with the DCP and AuxMemory added) in early 1969, before I got there.  I don't know what things were options and what was in the basic package; that frequently changed in any case.  Usually, the general practice was to ship a minimal functioning system with everything else optional.  I do know that there was one network adapter card that handled 16 lines. The DCP handled 16 clusters, and each cluster could handle 16 adapters. That gave us 256 lines, many of which, in banks at least, were typically multi-drop lines with 4-10 TC500s, for example.  We ran tests with heavily configured systems, and even in those cases the processor in the DCP was idle most of the time. The interesting thing about the one network adapter card was that it could handle 110 baud to 9600 baud by program control, but marketing sold it as 3 or more separate models, the faster models obviously costing substantially more money. Since the DCP could not detect which model the customer had purchased, since they were all the same anyway, we just ran the line at whatever speed it ran.  NDL actually had a speed setting per line in the configuration, but the system software just ignored it since it could not enforce it in any case.  It took a while, but customers eventually got wise to the rip-off.

NDL, the Network Definition Language, was first written for the B6500 system.  It defined configurations for the network (e.g., number of lines, their addresses, how many drops, their addresses, etc.) and had a simple imperative language for writing character-oriented drivers and changing settings on the adapter cards.  On the B6500, B6700, and B7700 systems, the code for the DCP could run in either the DCP's memory or in the system's central memory.  The buffers which the DCP filled or read from were in central memory so the central processors could access the data.  The operating system for the DCP was technically written in XAlgol, but in fact was written in assembly language: each instruction in the DCP was a procedure in a base XAlgol program.  The main body of code in this program was then just a series of these calls, plus some macros we wrote, which were also just additional procedures. You then compiled and ran this large XAlgol program to produce the object code for the DCP.  The drivers written in NDL were translated by the NDL compiler into a series of the same procedure calls, which were translated the same way into DCP code.  An odd way to build an assembler, but it worked.

On the B5700, while we could access the B5700 memory, we kept all the DCP's code and all the I/O buffers, which were variable length and allocated dynamically, in the DCP's memory.  When an input successfully completed, we inserted a message in a central-memory queue which notified the B5700 system of the source and length, and the B5700 would return an address of where to put the input, which we would then transfer to the B5700's memory.  For output, we got the request via our input queue in central memory, transferred the message to our memory, and then transmitted the message to the designated terminal or system.  This approach minimized our use of central memory, which helped the B5500.  When messages were queued into the DCP's input queue, or into the output queue to the B5700, when they were previously empty, the source would interrupt the other system to let it know the queue was occupied again. We added functionality in the drivers to offload some of the tedious work from the B5700, which was different from the B6500 approach.  For example, on request from the B5700 MCP, we would output line numbers for each new line until told to stop, thus helping CANDE.
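The interrupt discipline Bob describes (a side raises an interrupt only when a previously empty queue becomes occupied again) might be sketched like this. A loose modern analogy, not actual DCP or MCP code; all names are invented for illustration:

```python
from collections import deque

class QueueLink:
    """Toy model of the empty -> occupied interrupt handshake described
    above. Invented names; illustration only."""

    def __init__(self, interrupt):
        self.q = deque()
        self.interrupt = interrupt  # stands in for the hardware interrupt

    def put(self, msg):
        was_empty = not self.q
        self.q.append(msg)
        if was_empty:
            self.interrupt()  # signal only on the empty -> occupied transition

    def get(self):
        return self.q.popleft() if self.q else None

# The DCP's input queue toward the host: the first message interrupts
# the host; the second finds the queue already occupied, so no new
# interrupt is raised.
wakeups = []
to_host = QueueLink(lambda: wakeups.append("interrupt"))
to_host.put(("source", "length"))
to_host.put(("source2", "length2"))
print(len(wakeups))  # 1
```

The point of the scheme is that a busy queue generates no interrupt traffic at all; the receiver drains it at its own pace and is only woken when there is a transition from idle to work pending.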

As to your question about what the DCP was designed for: I can't say for sure, but I do know that the I/O processor for the B6500 was completed and running when I started work in 1969, while the DCP was still in engineering for a few more months. I know this because that early software on the B6500 was still being run on B5500 system number 102 for a few months after I arrived, as the hardware was still being worked on and tested.  The DCP, which we and the B6700 group needed, was not yet available, so we had to emulate it.

The interesting part is that when we completed the DCP release, I took a leave from Burroughs to get my Masters.  When I got back some time later, they asked me to head up a project for the B6700 and B7700 systems to have real-time high-speed devices added via the DCP.  The specific problem was that a B6700 system could support only one or maybe two check reader/sorters for banks, which was a problem since the B3700 could handle up to 4 and an equivalent IBM system even more.  The problem was that the Burroughs large-systems MCP and hardware were not designed to be real-time systems.  When handling high-speed reader-sorters, once each check was read you only had a couple of milliseconds before the driver had to select a pocket to drop the check into, and selecting the right pocket required information obtained from a potentially large database.  We ran tests which showed a single DCP could easily support at least 8 reader-sorters.  The tests were so good that Burroughs was considering selling the DCP as a separate device controller for arbitrary devices that were not necessarily made with the Burroughs name on them (e.g., disk drives, printers, etc.).  We were designing a new language for writing device drivers when I decided to leave Burroughs again.  It just was not a fun place to work any more... too political.  That's all I know on the subject.  I do not know if the DCP was ever sold as a general IOP for large systems, but it was capable of it.

Paul Kimpel

Feb 13, 2019, 11:01:32 AM
to retro-B5500
I know of one site that had at least 1500 terminals on their B7700. They had upgraded from a B5500/5700 that also had DCPs and about the same number of terminals as on the B7700. I think both systems had two DCPs, but there were not 1500 individual lines connected to the DCPs. The systems supported an international terminal network, so they used quite a few circuit concentrator/multiplexer devices to minimize the number of leased telephone circuits needed. Most of the terminals were 110 baud teletypes, but we also had quite a few 2400-4800 baud CRTs of the glass-teletype variety. The traffic was mostly asynchronous, character-at-a-time; there was very little block-mode traffic, such as poll-select. As far as I can remember, the DCPs had no difficulty handling this traffic load.

The performance limitation for the DCP was not so much dependent upon average line speeds and numbers of lines as on traffic density, burst rates, and the amount of fancy stuff you were trying to do in the protocol algorithms. There is generally a lot more output than input in a teletype network. Output performance is generally not a problem, as the characters can be processed at whatever speed the DCP can manage, within reason. A few milliseconds of jitter in inter-character delays generally will not matter. Input was a different story, though, as when a character arrived, the DCP had to deal with it before the next character arrived.

As I recall, we started to have problems with DCP throughput when we started to do host-to-host communications at higher (for the time) speeds, e.g., 56kbps. The data would be coming into the DCP like a firehose, and if you had more than a couple of those transmissions going at a time, the DCP could not keep up.

Daniel Eliason

Feb 24, 2019, 4:31:37 PM
to retro-B5500
Thanks Bob for all the additional background on the development and use of the DCP.  The DCP has been one of my really keen interests.  Since you all had to wait for the DCP, and even go to the trouble of emulating it in order to work, it seems likely that it was purposed for the B6500 system from its origin. It is quite a boost to hear the purposes the DCP was put to in addition to terminal service, like the check reader/sorters.  That would require loading some tables from the main application on the MCP.  Your detail about the line numbering for CANDE is relevant to me, since that is how I experienced its use. I have read a lot about the T700 terminals on BitSavers, especially to understand the transmit/receive/local-editing behaviours. Paul, thanks for the reminder that NDL is not Algol; I knew that from reading the manual on BitSavers, but I had forgotten how small NDL was.  Really, we have a bit of a to-do these days about "DSLs", Domain Specific Languages, of which NDL is a perfect example from early times.

I have another burning interest: what about the B5500/5700 vs. B6700 disk controllers?  I've read anecdotes on some of the Tasmanian web sites that the B7700 had a repurposed B800 mini as a disk controller, and that it even had to be booted from cassette in certain cases.  What I've noticed about the B6700 disk controller is that it is referred to as a "Disk File Optimizer".  On the B6700 was it also a mini?  Did it offload any planning or file-system-type operations from the MCP and the CPU?  I'm thinking of the "optimizer" in the name.  In what ways was the B6700 disk controller more advanced and different from the B5500/5700 disk controllers?

art.shapiro

Feb 24, 2019, 7:29:39 PM
to retro-B5500
Perhaps (or not) germane to this discussion: does anyone recall the specifics or purpose of the contraption called the "D Machine" from that era?

Art

Al Kossow

Feb 24, 2019, 9:35:56 PM
to retro...@googlegroups.com

Paul Kimpel

Mar 1, 2019, 7:27:29 PM
to retro-B5500
On the B5500, the head-per-track disks were controlled by a combination of electronics in the B5470 Disk File Control Unit (DFCU) and the B471 Disk File Electronics Unit (EU). There could be up to two DFCUs on a system. The DFCU connected through the peripheral exchange to the I/O Control Units (channels). The DFCU also connected to up to 5 EUs. Each EU could support up to five B475 Storage Units (SU). To the B5500, the DFCU was the addressable unit (DCA or DCB). By adding B451 Disk File Expanded Controls (exchanges), each DFCU could connect up to 10 EUs, five per exchange. The exchanges could also be cross-connected so that both DFCUs could address all EUs, allowing more parallelism of I/Os to the EUs, but the total number of EUs on the system in that case was limited to 10.
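The fan-out Paul describes works out as follows. This is just a summary calculation of the figures above, nothing beyond them:

```python
# Fan-out arithmetic for the B5500 head-per-track disk subsystem
# as described above.
SU_PER_EU = 5         # B475 Storage Units per B471 Electronics Unit
EU_PER_DFCU = 5       # EUs per B5470 DFCU without exchanges
EU_PER_DFCU_EXP = 10  # EUs per DFCU with two B451 exchanges (5 each)
DFCU_MAX = 2          # DFCUs per system
EU_SYSTEM_LIMIT = 10  # total EU limit when the DFCUs are cross-connected

print(DFCU_MAX * EU_PER_DFCU * SU_PER_EU)      # 50 SUs, no exchanges
print(DFCU_MAX * EU_PER_DFCU_EXP * SU_PER_EU)  # 100 SUs, exchanges, separate
print(EU_SYSTEM_LIMIT * SU_PER_EU)             # 50 SUs, cross-connected
```

So cross-connecting the exchanges traded half the maximum storage units for the ability of both DFCUs to reach every EU in parallel.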

Another device, the File Protect Memory (FPM) allowed up to four B5500 systems to share a common pool of EUs. It allowed ranges of disk addresses to be locked temporarily by individual systems so that updates to the common disk resource could be synchronized. The FPM did not actually control access to the shared disk; it merely maintained lists of locked-out areas. The MCPs on the multiple hosts had to follow a protocol for using the FPM and coordinating access to the disk units.

Head-per-track disk on the B6500/6700/7700 worked in a similar fashion, although the disk control was a completely different design (MSI instead of discrete components), and the EU/SUs were later models with more capacity. As I recall, the exchange mechanism was a little more flexible, as well. As far as I know, the FPM was never used on the Large Systems, although it was quite popular on the Medium Systems (B3500, etc.) line, as they were for many years uni-processor systems.

All of these controls were hard-wired logic. When removable disk pack drives (OEM-ed from Century Data) were introduced for the B6700/7700 in the early 1970s, they used a programmable controller based on the Burroughs D-machine. This was a small computer developed by the Paoli Research Center, initially I think for military applications. It was quite a soft machine, with a writable control store. The D-machine controlled the drives and interfaced to the host's exchanges and I/O channels.

The D-machine was used in other products, including the B700, B800, and (I think) B900 small systems. It may have been used in some of the communications processors developed by the Downingtown, PA, facility -- I'm not sure.

The Disk File Optimizer (DFO) was not a disk controller. Like the FPM, it was an add-on device that complemented the disk controllers and exchanges. As far as I know, it was a hard-wired unit and only worked with head-per-track disk. Its purpose was to minimize rotational latency delays and reduce I/O times. The DFO could sense the shaft position of the disks for an EU, so it could determine how much delay there would be between the heads and a specified sector. It maintained a stack of up to 32 pending disk commands, automatically ordering the stack based on the requested addresses in its queued commands and the current shaft positions of the disks. Like the FPM, it did not actually initiate I/Os to the disks, but acted as a service unit to the MCP to help it select the queued I/O with the minimum latency time. There is a whole chapter devoted to it in the 1972 B6700 Reference Manual on bitsavers.
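The selection the DFO helped the MCP make can be illustrated with a toy model. This is a sketch under assumptions (the sector count and function names are invented, and the real DFO was hard-wired logic, not software): on head-per-track disk there is no seek time, so the winning request is simply the one whose target sector will rotate under the heads soonest.

```python
# Hypothetical model of minimum-rotational-latency selection for
# head-per-track disk: given the current shaft position and a queue of
# pending commands, pick the request whose sector arrives at the heads
# first. No seek time exists -- only rotational delay matters.

SECTORS_PER_TRACK = 200  # assumed geometry, for illustration only

def rotational_delay(shaft_pos, target_sector):
    """Sector times of rotation until target_sector reaches the heads."""
    return (target_sector - shaft_pos) % SECTORS_PER_TRACK

def pick_next(shaft_pos, queue):
    """Return the queued target sector with the minimum rotational latency."""
    return min(queue, key=lambda sector: rotational_delay(shaft_pos, sector))

queue = [180, 15, 90]                # pending commands, by target sector
assert pick_next(10, queue) == 15    # sector 15 arrives in 5 sector times
assert pick_next(100, queue) == 180  # from sector 100, 180 is nearest ahead
```

The real DFO held up to 32 such pending commands and re-ordered them continuously as the shafts turned; the sketch shows only the single-selection step.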

My impression is that the DFO was not a very successful product, but I don't know why. I imagine it was fairly expensive, and perhaps the degree of increased throughput it provided did not justify its cost.

On Sunday, February 24, 2019 at 1:31:37 PM UTC-8, Daniel Eliason wrote:
<snip>

Daniel Eliason

Mar 5, 2019, 11:17:10 AM
to retro...@googlegroups.com
Thanks, Paul, for clearing up the Disk File Optimizer, and the disk controller generations.  So the B6700 removable disks had a D-machine controller.  I've read through the Bitsavers material on the D-machine, and also on the B700 (actually the B720, from 1975).  I looked back to where I found mention of the minicomputer as a disk controller; it was at "http://users.monash.edu.au/~ralphk/fixing-the-burroughs.html".  I misquoted that account above: it was a B7800 using "the heart of a B700" as a disk controller that was mentioned.

That question is all resolved for me now!  After studying the D-machine and B720 documents, I find that the D-machine was basically a "bit-slice" chip set with an 8-bit data-path "slice".  I find that the B700 had the same architecture (it looks like a 16-bit data path), and the micro-instruction words for the two are identical.  So, either 1) the D-machine doc simply had a table from the B700 inserted to satisfy the need to write a report, or 2) the D-machine borrowed the B700 architecture to move into LSI implementation, or 3) the B700 was simply built out of a D machine with two 8-bit slices.  Since you have mentioned that the B6700 removable cartridge controller was a D-machine, I can expect that the B7800 controller was as well, and since that's essentially identical to the "heart of a B700", all accounts resolve!!

The B720 document has a section on reader/sorter equipment.  This was informative as background to Bob McKenzie's stories above about use of the DCP, which was great.  It was also informative as to how the D-machine/B700 architecture was used as an equipment controller.  In the reader/sorter case, the B700 ran an "M-level" program (i.e., at the ISA-interpreter level, "microcode") to service the equipment, not an "S-level" program (i.e., compiler object code).  The D-machine report also mentioned coding machine-control applications at the same level as ISA interpreters, which implies M-level.  So, from these two sources, we have a pretty good guess that disk-control functions would have been programmed directly in M-level code.  I had wondered whether the D-machine in the controller might run an ISA interpreter so that ALGOL could be used to develop the disk-controller code.  But these points indicate otherwise, and the B720 doc mentions COBOL, RPG, and FORTRAN, but *not* ALGOL!  Obviously there would be a performance issue doing a control application in S-level code, and without ALGOL (and I did my share of FORTRAN machine-control coding in the 1980s) there would not be much point anyway!

This thread has been really great!  Thanks to everyone!!

l wilton

Feb 19, 2020, 10:05:00 AM
to retro-B5500
I worked in Pasadena on the Medium Systems MCP until Pasadena went away, and spent quite a lot of time implementing the I/O subsystem and the interface to the peripherals. Unfortunately all my manuals from the time are in storage, and worse, my memory has been failing me now for some years. As a result I don't remember part and model numbers that I used to. But I think I still recall some things of use.

The D machine was originally designed by the military side of Burroughs, possibly at GVL. I believe it was used on their side as a secure military communications switch, and possibly for some other things. At heart it was a wide-instruction microcoded machine that could operate directly on its memory, but it was intended to run an interpreter in microcode, with the interpreted code being the main functional machine.

In the Medium Systems world we had two uses for the D machine: the DCP and the 9387 disk controller. (Note: this DCP was *NOT* the same as the Large Systems DCP!) The 9387 had the M level in ROM, and the disk firmware at the S level was downloaded from the MCP over the disk I/O channel. There was an LH (Load Host) keyboard command to load the firmware to the controllers, but you could also specify the firmware file name on the CHANNEL card for the controller in the coldstart deck, and the MCP had the firmware files bound into it so that it could load the disk control before it had to access it.

The DCP had both the microcode and the S-level code in RAM, though in separate sections of RAM. The S memory was 16 bits wide, and the M memory was considerably wider, perhaps 24 bits or more; I no longer recall. I seem to recall there was a cassette deck behind the front smoked glass that you used to load the M-level code into the machine. Then the S-level code was also downloaded from the MCP. While the 9387 code was written in Downingtown (by Joe Wilton and another guy whose name I now forget) and distributed as an object file, the DCP S-level code consisted of a kernel of routines also compiled at Downingtown, bundled with interpreted "line procedures" compiled in NDL I and bound with the kernel code by the NDL compiler. This agglomeration was then downloaded to the DCP by the MCP before the DCP could be used. I think additional M-level microcode was also downloaded to the DCP along with the S-level code. There was a core set of M instructions in ROM sufficient to handle the download process, but the additional M instructions were necessary to do a lot of the bit-level manipulation that was not required for simple byte-level transfers.

In the case of this D-machine DCP, there were a lot of I/O adapters as well as the main processor. I seem to recall that there was room for 32 line adapters. There was also one channel adapter to talk to the host computer. I don't recall how much of the line logic was hard-wired in the line adapters, but they had a fairly substantial number of chips, and there were different adapters for different lines and protocols (such as sync vs. async, and high vs. low speed). The DCP was a two-level interpreter: the M machine interpreted the kernel DCP code, and that in turn interpreted the compiled NDL procedures that transferred the data to and from the lines and maintained state.
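The two-level interpretation described above can be mimicked in a toy model. This is purely an illustrative sketch with invented names (real DCP kernel code and NDL procedures looked nothing like this): an outer "microcode" loop executes kernel instructions, and one of those kernel instructions in turn steps through an interpreted line procedure.

```python
# Toy model of two-level interpretation: the M-level loop interprets
# kernel (S-level) instructions, and the RUN_NDL kernel instruction in
# turn interprets a compiled "line procedure". All names are invented.

def run_ndl_procedure(proc, state):
    """Kernel-level interpreter for a compiled NDL line procedure."""
    for op, arg in proc:
        if op == "RECV":
            state["buffer"].append(arg)      # pretend a byte arrived
        elif op == "SET_STATE":
            state["line_state"] = arg

def micro_machine(kernel_program, state):
    """M-level loop interpreting kernel instructions one at a time."""
    for op, arg in kernel_program:
        if op == "RUN_NDL":
            run_ndl_procedure(arg, state)    # second interpretation level
        elif op == "HALT":
            break

state = {"buffer": [], "line_state": "idle"}
line_proc = [("SET_STATE", "receiving"), ("RECV", 0x41), ("RECV", 0x42)]
micro_machine([("RUN_NDL", line_proc), ("HALT", None)], state)
assert state["buffer"] == [0x41, 0x42]
assert state["line_state"] == "receiving"
```

The point of the layering, as on the real DCP, is that the inner procedures (here `line_proc`) can be recompiled and re-downloaded without touching the outer interpreter.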

A bit on how the early Medium Systems and the B6700/B7700 interfaced to peripherals: 

On the B6700 you could divide the central system up into the processors, memory control, memory storage modules (MSMs), the Multiplexor, the I/O Controls, and the DCP. The I/O Controls interfaced to the system through the Multiplexor.

On the B2500/B3500/B4700 you could divide the system up into the processor, Central Control, Memory, and the I/O Controls. While all the central-system components were different, the I/O controls and the stuff outboard of them (exchanges, peripherals, etc.) were exactly the same. (The exception was, as mentioned, the "DCP", which was really two completely different things, designed on different sides of the country for the two systems.)

The I/O controls on these systems came in two types: A and B, or Small and Large. The I/O Control Cabinet could hold five large controls and five small controls, for a total of 10 I/O channels. The large controls were on the left when you looked at the card side of the cabinet, so they were also known as "left-hand" controls; the small controls were on the right. Between the two sets of controls was a vertical stripe of electronics that was the interface to computer memory (actually to Central Control, which then interfaced to memory). The left-hand controls generally had the peripheral connections on the left end, and the right two cards were the interfaces to memory. There were front-edge card connectors on these cards that jumpered with 4"-wide ribbon cables to the front-edge connectors of the memory interface. For the small controls, the memory interface was the left two cards in the control and the peripheral cables were on the right. While you could not physically mount a large control on the small end (it was too long), you could mount a small control on the left side, but the cabling would be backwards. There were left-handed versions of some of the small controls to allow their use on the left side of the I/O cabinet.

The large controls were things like the disk control, the tape control, the single-line datacom control, and a couple of other more oddball things. The small controls were things like the card reader and punch controls, paper tape reader and punch controls, and the SPO control.

In the old disk subsystems, going back to the B5000, you had some sort of I/O control that was the interface between the processor/memory and the outboard side. This control interfaced over a pair of 25-wire cables (25 individually shielded wires, making a cable about 30mm in diameter) to the EU. Or, if an exchange was involved, one or more controls interfaced to the exchange, and the exchange interfaced to up to 10 EUs. The DFE (Disk File Exchange) was a rack of cards about 24 inches wide and 40 inches high, housed in a separate "exchange cabinet" that could hold up to 4(?) exchanges or similar-sized units, such as the FPM (File Protect Memory). The EU had the single host-side interface, and had outboard interfaces to some number of SU (Storage Unit) cabinets, each of which was a physical disk drive.

When disk packs came along, the concepts remained much the same, but the physical hardware changed. The disk pack control was a Type A control. It could cable to the 9387 disk controller (which was the D machine in disguise), and that talked to the disk drives. So you still had the concept (sort of) of an EU and multiple SUs. You could connect multiple channels to a 9387. I no longer recall if this was done with a free-standing exchange (certainly not the same exchange as used with the older disks) or if the 9387 could handle up to four host connections directly. I rather think that it could, but as I say, I no longer recall.

l wilton

Feb 19, 2020, 10:05:00 AM
to retro-B5500
BTW, the D machine was not the only microcoded Burroughs machine. The B1000, or Small Systems, was specifically designed as a bit-addressable-memory microcoded machine, intended to run specific interpreters for specific languages and to switch quickly and seamlessly between interpreters when interrupts occurred. It was MUCH different from a D machine, but from a few yards back the concepts involved in the B1000 and a B700 might look much the same.