Larry__Weiss wrote:
>
> I just stumbled across some pics of the CDC6600:
> http://www.bambi.net/computer_museum/cdc6600_and_console.jpg
> http://online.sfsu.edu/~hl/mmm.html
>
> Just how powerful was that machine? How would it compare to modern-day
> machines, especially PC's?
For its day, incredibly powerful.
Compared to today's PCs, puny in speed, memory and disk storage. You could
probably go to Wal-Mart or wherever and for less than $1000 buy something that
could emulate one.
I'll give what little else I can, off the top of my head. Some of it may apply
to later machines, and not to the 6600. Don't take anything below too
uncritically, unless you get confirmation from someone who really DOES know, or
it's really easy to get right and hard to get wrong. Come to think of it, that
just leaves the 17-bit comment.
Emulating a CDC 6600 on a PC would be an interesting and probably pointless
programming challenge. The CDC 6600 and its compatible successors and
predecessors used 60-bit words (only word-addressable) and had 6-bit characters.
Well, sometimes 6-bit characters. That was what the hardware and the operating
system supported. There were also other schemes for representing more than 63
(or 64) characters: 6-12 character sets, 8-in-12 and maybe others. (I seem to
recall something about 7.5 characters/word, but that may just be a horrible,
horrible dream.) Learning of the one-byte/two-byte characters in Unicode
brought back memories of the 6-12 character set, where 7604 (octal) and 00 both
represent a colon. (Or was it 7402?) There was some operating system support
for 6-12, but it was limited.
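If anyone wants to play with the packing: ten 6-bit characters fit a 60-bit word with plain shifts and masks. A sketch in Python (the specific code values below are made up for illustration; only the 00/7604 colon quirk is the one from memory above):

```python
# Pack a list of 6-bit character codes (10 per 60-bit word) and unpack them.
# Word layout assumed here: first character in the high-order 6 bits,
# short words left-justified and padded with 00.

def pack_word(codes):
    """Pack up to 10 six-bit codes into one 60-bit word (zero-filled)."""
    assert len(codes) <= 10 and all(0 <= c < 64 for c in codes)
    word = 0
    for c in codes:
        word = (word << 6) | c
    word <<= 6 * (10 - len(codes))    # left-justify, pad with 00
    return word

def unpack_word(word):
    """Return the 10 six-bit codes stored in a 60-bit word."""
    return [(word >> (6 * i)) & 0o77 for i in range(9, -1, -1)]

codes = [0o01, 0o02, 0o03]            # three arbitrary 6-bit codes
w = pack_word(codes)
assert unpack_word(w)[:3] == codes
assert unpack_word(w)[3:] == [0] * 7  # trailing padding is 00
```

The ambiguity mentioned above falls straight out of this layout: a trailing 00 can be padding or (in 6-12) the first half of a two-byte character.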
The CPU was I don't know how many orders of magnitude slower. I seem to recall
something around then about memory speeds in the 200 nanosecond range, but that
may have been in a different context.
The address space was limited to 18 bits (17 for users), and I think the
mainframe memory could be ordered (or upgraded to) that size. Later machines in
the series (or a later series compatible with the CDC 6600) could go higher, but
the 17-bit limit on user programs remained.
They had an external kind of storage, Extended Core Storage, which went up to (I
think) 22 or 24 bits of addressing. (Later ECS was semiconductor, IIRC.) It
was used for several purposes: as storage more capacious than memory but faster
than disk; for communication between mainframes in Multimainframe mode (a la DEC's
VAXcluster) to coordinate access to shared disks, and perhaps other things; and as
swapping storage, for timesharing (and batch) jobs that needed to be moved out
of memory for a brief while.
There was no virtual storage or segmented or paged memory; if your process had
any memory at all, your entire address space was in memory.
There were Peripheral Processors which did the I/O. There were 10 or 20 of
them, each of which had its own memory (12 bits of addressing?) and full access
to the central processor's memory. One of them was special (PP 0), and
contained the PP-resident part of the operating system. The rest were mostly or
completely interchangeable. It's possible that some devices were attached to
certain PPs, so only they could access them. Or maybe all could access all; I
forget. If software in a Peripheral Processor encountered a completely
unexpected situation, the convention was to program it to "hang" (spin in a
tight loop), rather than go on. As long as it wasn't PP 0, the computer could
continue, only slightly degraded. Sometimes more than one would get hung up.
Rebooting the machine -- in the dead of night, normally -- would fix it, and
provide a dump for later analysis. [1]
Despite their shortcomings, there were some nice things about the CDC 6600
series machines, and their operating systems. Compared to the dominant
mainframe of the day -- IBM 360s, 370s -- they had wonderful operating systems.
To copy a file, you used the copy command. One-liner. To make a file bigger,
you wrote to it. And so on. None of this IEBGENER/IEBCOPY foolishness. Files
were just a series of bytes. (OK, 6-bit bytes, and there were those EOR marks,
and oddball line terminators, but still...)
I must be getting old. Nostalgia is setting in again. -Eric
[1] The central processor was called the CP, the peripheral processors, PPs.
Where I worked, at the daily change control meeting -- normally chaired by an
IBM type, usually a woman -- the previous day's hardware and software problems
were also reviewed. Some of the female attendees -- and occasionally the chair --
would be nearly smirking when they and the CDC types would discuss a "PP hang"
or say that machine A had to be restarted because it had too many hung PPs.
-ESH
--
E-mail privacy: http://www.theclairefiles.com/Personal/libspriv.html (Perhaps a
bit over-cautious, she is. Perhaps.)
I started digging around Google, and found that this was already asked
back in 1999 and got 82 replies including an estimate that the 6600 was rated at
about 1 MIPS.
http://groups.google.com/groups?&q=%22Comparative+Speed+of+CDC+6600%22
The CDC 6600 was the very first machine that I ever programmed on (1969).
The physicists for sure loved its speed at that time. I sure wish I still
had the sources for those matrix crunching programs to just see how fast
they can be run today.
It's surely amazing how much "idle" computing capacity the world has right now.
- LarryW
It's not the CDC 6600, but this post covers one of its
almost-contemporaries:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=92218.123445MACKINNO%40QUCDN.QueensU.CA
It compares an IBM 360/91's performance to a number of
modern (for 1992) machines running the same Fortran code:
>>IBM RS/6000 550 (FORTRAN 2.2): 0.07 seconds
>>IBM 3081/G (Fortran 1.3): 0.25 seconds
>>Sparcstation 1+ (FORTRAN 1.4): 0.38 seconds
>>486/25 (Lahey F77L-EM/32): 0.48 seconds
>>IBM 360/91 (Fortran H): 0.65 seconds
>>386/33 with 387 (Lahey F77L-EM/32): 0.90 seconds
It also contains some exposition on the subject.
-Mike
--
http://www.mschaef.com
original long ago ... reposting from
http://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe:
rain/rain4
158 3031 4341
Rain 45.64 secs 37.03 secs 36.21 secs
Rain4 43.90 secs 36.61 secs 36.13 secs
also times approx;
145 168-3 91
145 secs. 9.1 secs 6.77 secs
rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in
35.77 secs.
--
Anne & Lynn Wheeler | ly...@garlic.com - http://www.garlic.com/~lynn/
Although the Wheelers refer to RAIN/RAIN4, LINPACK was also
used for relative speed comparisons, and has been ported to
modern platforms and other languages. You should be able to
find any number of different versions all over the net...
-dq
"Douglas H. Quebbeman" wrote:
This raises another question in my mind. In the late sixties and early
seventies, one set of working codes for reactor design was called the
PDQ-(version-number) series. These were used by several of the (then) AEC labs
as well as companies like Combustion Engineering. They were originally in
Fortran, but had evolved in the CDC world into assembly-code hybrids that even
had specialized disk drivers. Actually, then, two questions.
1. Were these unique to CDC machines, or were there, e.g., IBM versions of PDQ-7?
2. Have any of these dusty decks survived?
Joe Yuska
And for a few days after.
> Compared to today's PCs, puny in speed, memory and disk storage. You could
> probably go to Wal-Mart or wherever and for less than $1000 buy something that
> could emulate one.
No doubt about it.
Depends on the operating system. As 8-bit became popular it became
supported.
> The CPU was I don't know how many orders of magnitude slower. I seem to recall
> something around then about memory speeds in the 200 nanosecond range, but that
> may have been in a different context.
No, no, no. A 6600 never in its wildest dreams had a 200ns memory access:
this was the days of real core memory, made of magnetic donuts on wires.
Well, you could get a new word from memory every 100ns, but that word had been
requested 800ns or more earlier. Lots of parallelism going on, what with
the PPs and different parts of the CP all able to make memory requests.
Nearly everything was done in registers: 8 60-bit, 8 18-bit A(ddressing)
registers and another 8 18-bit general purpose registers. Mustn't forget
the "stack" which held enough program space for a tight loop.
Memory was slow, so they tried not to use it too much.
> The address space was limited to 18 bits (17 for users), and I think the
> mainframe memory could be ordered (or upgraded to) that size. Later machines in
> the series (or a later series compatible with the CDC 6600) could go higher, but
> the 17-bit limit on user programs remained.
The 6600 came in 32k, 64k (64k in Microsoft speak, 65k to CDC) and 128/131k
versions. Just remember, this is 60-bit words, not bytes.
Oh yeah, you may get as many as four instructions to the word, which could be
issued as quickly as every 100ns.
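(The four-per-word business: instructions came in 15-bit and 30-bit parcels, and four short ones fill a 60-bit word. A toy packer in Python, assuming, as I recall, that a 30-bit instruction couldn't straddle a word boundary and the slack was filled with no-op parcels:)

```python
# Pack a sequence of instruction widths (15- or 30-bit "parcels") into
# 60-bit words.  Assumption: a 30-bit instruction may not straddle a word
# boundary, so the rest of the word is padded with no-op parcels instead.

def pack_parcels(widths):
    """Return how many 60-bit words the instruction stream occupies."""
    words, used = 1, 0
    for w in widths:
        assert w in (15, 30)
        if used + w > 60:      # won't fit: pad with no-ops, start a new word
            words += 1
            used = 0
        used += w
    return words

assert pack_parcels([15, 15, 15, 15]) == 1  # four short ops fill one word
assert pack_parcels([15, 15, 15, 30]) == 2  # 30-bit op can't use the last slot
```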
> They had an external kind of storage, Extended Core Storage, which went up to (I
> think) 22 or 24 bits of addressing. (Later ECS was semiconductor, IIRC.) It
> was used for several purposes. Storage more capacious than memory but faster
> than disk. Communication between mainframes in Multimainframe mode (a la DEC's
> VAXcluster) to coordinate access to shared disks, and perhaps other things.
> Swapping storage, for timesharing (and batch) jobs that needed to be moved out
> of memory for a brief while.
But then, you could buy an additional CP to go in your 6600, making it a
6700.
> There was no virtual storage or segmented or paged memory; if your process had
> any memory at all, your entire address space was in memory.
Sure there was: they were called overlays, and if you wanted them you wrote
the code to do them.
>"Eric S. Harris" wrote:
>> Perhaps someone in news:comp.sys.cdc has something to add to this?
>> Larry__Weiss wrote:
>> > I just stumbled across some pics of the CDC6600:
>> > http://www.bambi.net/computer_museum/cdc6600_and_console.jpg
>> > http://online.sfsu.edu/~hl/mmm.html
>> > Just how powerful was that machine? How would it compare to modern-day
>> > machines, especially PC's?
>>
>> The CPU was I don't know how many orders of magnitude slower.
>>
>
>I started digging around Google, and found that this was already asked
>back in 1999 and got 82 replies including an estimate that the 6600 was rated at
>about 1 MIPS.
>
<snip>
A more interesting measurement than MIPS for these machines was
megaflops, given their scientific orientation. And that was generally
measured by a standard benchmark that was run at every major
release of the OS or the Fortran compiler. I believe the benchmark
program used was LINPACK, but I can't be sure.
A table of the achieved megaflop rates for various CDC machines
with different release levels of OS and Fortran compiler may be found
at
http://pages.sbcglobal.net/couperusj/Megaflops.html
Jitze
> Compared to today's PCs, puny in speed, memory and disk storage. You could
> probably go to Wal-Mart or wherever and for less than $1000 buy something that
> could emulate one.
WallyWorld is selling an Athlon 850 system for about $300. It can
simulate a floating point instruction mix faster than any 6000 or
Cyber 70 machine ever built. My GS160 Alpha with 730Mhz CPUs can run
the same simulated FP mix at over 13 MFlop. At most, only the 2
fastest 60 bit CPUs that CDC ever made could execute FP instructions
that fast.
> Emulating a CDC 6600 on a PC would be an interesting and probably pointless
> programming challenge.
Some of us would (do) consider it a "labor of love". :)
> Well, sometimes 6-bit characters. That was what the hardware and the operating
> system supported. There were also other schemes for representing more than 63
> (or 64) characters: 6-12 character sets, 8-in-12 and maybe others. (I seem to
> recall something about 7.5 characters/word, but that may just be a horrible,
> horrible dream.) Learning of the one-byte/two-byte characters in Unicode
> brought back memories of the 6-12 character set, where 7604 (octal) and 00 both
> represent a colon. (Or was it 7402?) There was some operating system support
> for 6-12, but it was limited.
Though 6-bit character sets were praised early on as a significant cost
reduction, as the price of systems came down, CDC heard more and more
complaints about 6-bit (not to be confused with 2-bit) character
sets. It was an unfair bias, as many vendors of the day ran or
supported them.
> The CPU was I don't know how many orders of magnitude slower. I seem to recall
> something around then about memory speeds in the 200 nanosecond range, but that
> may have been in a different context.
Unless some chip designer gets really bored, we'll never know just how
fast a 6000 style CPU would be if it were built with modern
electronics. Any EEs care to venture a guess?
> No, no, no. A 6600 never in its wildest dreams had a 200ns memory access:
> this was the days of real core memory, made of magnetic donuts on wires.
>
> Well, you could get a new word from memory every 100ns, but that word had been
> requested 800ns or more earlier. Lots of parallelism going on, what with
> the PPs and different parts of the CP all able to make memory requests.
>
> Nearly everything was done in registers: 8 60-bit, 8 18-bit A(ddressing)
> registers and another 8 18-bit general purpose registers. Mustn't forget
> the "stack" which held enough program space for a tight loop.
> Memory was slow, so they tried not to use it too much.
Memory was 800ns. Though Cybers of that era phased memory into 4
banks so that consecutive memory references didn't conflict. This
gave an effective rate near 200ns.
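(To picture the phasing: a quick sketch in Python of the usual low-order-bits bank interleave. The 4-bank, 800ns figures are the ones from this post; real configurations differed, as noted elsewhere in the thread.)

```python
# A sketch of bank phasing: consecutive addresses map to different banks, so
# back-to-back references don't wait on the same (slow) core bank.
BANKS = 4
CYCLE_NS = 800                  # how long one bank is busy per reference

def bank(addr):
    return addr % BANKS         # low-order address bits select the bank

# Sequential words hit banks 0,1,2,3,0,... so a new reference can start every
# CYCLE_NS / BANKS = 200 ns, as long as the access stride avoids conflicts.
assert [bank(a) for a in range(6)] == [0, 1, 2, 3, 0, 1]
assert CYCLE_NS // BANKS == 200
```

A stride equal to the bank count, of course, hammers one bank and gives you the full 800ns every time.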
CPUs on the 6000s came in two flavors. The monolithic 6400 and the
phased 6600. You could get a system with one CPU of either type, a
system with two 6400 CPUs, or a system with one of each. Two 6600
style CPUs in the same cabinet wasn't available as the memory speed
wouldn't keep up.
IIRC, idle PPs on NOS systems checked their input registers (IR) about
8000 times per second. The IR was the first word of the PP's Message
Buffer, which was 8 words long, so the way that memory was phased, all
IRs wound up in the same memory bank. Of the possible
1,250,000/second that the bank could muster, about 12% of them went to
idle PPs busily searching for something to do.
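(The arithmetic behind that 12%, for anyone checking: 20 PPs times 8000 scans a second, against a bank that cycles every 800ns.)

```python
# The idle-PP figure, checked: 20 PPs each scanning their input register
# about 8000 times a second, against one bank cycling every 800 ns.
refs_per_sec = 1e9 / 800               # 1,250,000 references/sec per bank
idle_refs = 20 * 8000                  # 160,000 idle-scan references/sec
idle_load = idle_refs / refs_per_sec
assert round(refs_per_sec) == 1_250_000
assert abs(idle_load - 0.128) < 1e-9   # about 12% of the bank's capacity
```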
> There were Peripheral Processors which did the I/O. There were 10 or 20 of
> them, each of which had its own memory (12 bits of addressing?) and full access
> to the central processor's memory.
Most 6000s were configured with a full complement of 20 PPs. Certain
pricing options allowed for the purchase of machines with less, but
throughput suffered pretty badly. Many were field upgraded to 20 PPs.
The PP was a 12 bit processor. It had 4095.5 words. (Inside joke.
It had 4096 words, but word 7777 was rarely used as it wasn't directly
addressable.) Math in the PP was 18 bit. This allowed it to perform
address calculations on the full length of central memory.
> One of them was special (PP 0), and
> contained the PP-resident part of the operating system. The rest were mostly or
> completely interchangeable.
Correct. On most CDC operating systems, the PP portion of the system
monitor (MTR) ran in PP00. By convention, the console driver (DSD)
usually ran in PP01. All other PPs were "interchangeable".
> It's possible that some devices were attached to
> certain PPs, so only they could access them.
Devices were connected to channels. PPs could access any channel so
any PP could access any device.
> Despite their shortcomings, there were some nice things about the CDC 6600
> series machines, and their operating systems. Compared to the dominant
> mainframe of the day -- IBM 360s, 370s -- they had wonderful operating
> systems.
They were so much easier to use than VM, MVS, CICS, etc... Depending
on the application, the 6000s outran the IBMs, sometimes the IBMs
outran the 6000s. They were definitely the two biggest horses in the
race.
> To copy a file, you used the copy command. One-liner. To make a file bigger,
> you wrote to it. And so on. None of this IEBGENER/IEBCOPY foolishness.
> Files were just a series of bytes. (OK, 6-bit bytes, and there were those EOR
> marks, and oddball line terminators, but still...)
Even the early DOS operating systems understood that to COPY a file
meant simply that the contents of one file should be duplicated under
a different name. I wonder why it took IBM so long to figure this
out? :)
Though technically the PP could write a partial word to disk, and tape files
could certainly contain something other than an even number of CPU words, when
translated to memory, files were always an exact number of words.
I too miss these old beauties. They certainly had heart!
Kent
Kent Olsen wrote:
Perhaps in some later Cybers. I don't recall. But for 6000 class machines the memory
modules ran at 1 mike (microsecond), with a ten-way interleave, giving a max rate of 100 ns.
Somewhere in here (comp.sys.cdc) someone posted a URL to an authorized PDF of Jim
Thornton's book on the design of the 6600. It has disappeared from my server, but
should be googlable. Just about all is revealed there.
>
>
> CPUs on the 6000s came in two flavors. The monolithic 6400 and the
> phased 6600. You could get a system with one CPU of either type, a
> system with two 6400 CPUs, or a system with one of each. Two 6600
> style CPUs in the same cabinet wasn't available as the memory speed
> wouldn't keep up.
More significant was that you couldn't fit two 6600 CPU's and max memory in the
cruciform cabinetry of the 6600.
There was also a 6200, a de-clocked 6400 set up for lowball marketing purposes.
>
>
> IIRC, idle PPs on NOS systems checked their input registers (IR) about
> 8000 times per second. The IR was the first word of the PP's Message
> Buffer, which was 8 words long, so the way that memory was phased, all
> IRs wound up in the same memory bank. Of the possible
> 1,250,000/second that the bank could muster, about 12% of them went to
> idle PPs busily searching for something to do.
>
>
> > There were Peripheral Processors which did the I/O. There were 10 or 20 of
> > them, each of which had its own memory (12 bits of addressing?) and full access
> > to the central processor's memory.
>
> Most 6000s were configured with a full complement of 20 PPs. Certain
> pricing options allowed for the purchase of machines with less, but
> throughput suffered pretty badly. Many were field upgraded to 20 PPs.
I'll take serious issue with this. Virtually all of the 6000's and Cyber 70's I
worked on during a 17-year period had only 10 ppu. The extra 10-ppu option was
expensive and required a second cabinet. Most of the 20 ppu configurations I was
aware of were connected to exotic type peripherals some of which you could be killed
for knowing about.
I did extensive benchmarking of these machines, and in most cases you would saturate
CPU and memory long before you would run out of PPU.
There were of course some exceptions, like one configuration with 32 tape drives.
Does anyone know when UT Austin retired their CDC-6600 ?
- LarryW
> Even the early DOS operating systems understood that to COPY a file
> meant simply that the contents of one file should be duplicated under
> a different name. I wonder why it took IBM so long to figure this
> out? :)
You might find it interesting to check out
<http://www.conmicro.cx/hercules/>. Hercules is a nearly full
System/390 emulator that runs on a PC. It's reputed to run 360 code
about ten times faster than a 360 (which specific model I forget).
<http://www.corestore.org/hercules.html> has a link to a machine running
Hercules that is accessible online.
--
--John
Reply to jclarke at ae tee tee global dot net
(used to be jclarke at eye bee em dot net)
> At most, only the 2 fastest 60 bit CPUs that CDC ever made could execute FP
> instructions that fast.
Which two, and when were they built? FWIW, I'm very surprised that
*any* machine ever built by CDC could go that fast.
> Most 6000s were configured with a full complement of 20 PPs. Certain
> pricing options allowed for the purchase of machines with less, but
> throughput suffered pretty badly. Many were field upgraded to 20 PPs.
20? I thought the full complement was ten (actually a single PPU with
ten register sets).
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
Southwestern NM Regional Science and Engr Fair: http://www.nmsu.edu/~scifair
note that CMS on VM ... had copyfile command ... effectively inherited
from CTSS (aka some of the CMS people had worked on CTSS) ...
although huge number of parameters were added to the copyfile over
time .... eventually endowing it with all sorts of extra capability
(as opposed to just simply doing a file copy).
In the same bldg (545 tech sq) some other people that had worked on
CTSS were working on Multics. Both CMS and unix trace some common
heritage back to CTSS.
here is a page giving command correspondence between cms, vax, pc-dos,
and unix:
http://www.cc.vt.edu/cc/us/docs/unix/cmd-comp.html
Main memory was available in sizes up to 256K 60-bit words (about 2MB of
8-bit bytes). You could have up to 20 independent Peripheral Processors
(PP), each with 4K 12-bit words (total 128KB 8-bit bytes of PP memory).
Memory cycle time was 1 microsecond per bank, up to 32 banks per machine.
However, addresses could only be issued every 0.1 microsecond, giving a
maximum memory bandwidth of 10,000,000 * 60 / 8 = 75MB/s
Each PP could drive one of up to 24 channels at up to 1 12-bit word per
microsecond (half-duplex) giving an aggregate I/O capacity of
20 * 12 * 1,000,000 / 8 = 30MB/s
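(Both figures check out; here they are as Python, with the same constants as above.)

```python
# Sanity-checking the bandwidth arithmetic quoted above.
word_bits = 60
mem_bw = (1e9 / 100) * word_bits / 8   # one 60-bit word per 100 ns -> bytes/sec
io_bw = 20 * 12 * 1_000_000 / 8        # 20 PPs, one 12-bit word per us each
assert mem_bw == 75e6                  # 75 MB/s central memory
assert io_bw == 30e6                   # 30 MB/s aggregate PP channel I/O
```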
Three sets of CPU registers:
8 "A" registers, A0 through A7. 18 bits each. Used for address
arithmetic and load/stores from/to memory.
8 "B" registers, B0 through B7. 18 bits each. Used for indexing,
loop counts, etc.
8 "X" registers, X0 through X7. 60 bits each. Used for general-purpose
arithmetic (including floating point) and loads/stores from/to memory.
Some representative CPU instructions and timing
SXi Xj (Set Xi to equal Xj.) .3 microseconds
SXi Xj+Xk (60-bit floating point sum Xj and Xk to Xi) .4 microseconds
SXi Xj*Xk (60-bit floating product Xj and Xk to Xi) 1 microsecond
SXi Xj/Xk (60-bit floating divide Xj and Xk to Xi) 2.9 microseconds
NO (No operation, pass) .1 microseconds
As you can see, the machine could execute pass instructions at a rate of
approx 10MIPS. Most instructions on the machine had a .3 microsecond
cycle giving a nominal rating of 3MIPS overall.
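(Those MIPS numbers follow directly from the issue and execute times above:)

```python
# The MIPS figures above, derived from the instruction timings.
pass_rate = 1e9 / 100       # pass instructions: one every 100 ns -> ~10 MIPS
typical_rate = 1e9 / 300    # typical 0.3 us instructions -> ~3.3 MIPS
assert round(pass_rate) == 10_000_000
assert 3_000_000 < typical_rate < 3_500_000
```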
In addition, the machine had multiple function units, each capable of executing
in parallel assuming no register conflict. The units were:
BRANCH
BOOLEAN
SHIFT
ADD (60-bit Floating point adds)
LONG ADD (60-bit Integer adds)
MULTIPLY (60-bit) (duplexed unit, two per machine)
DIVIDE (60-bit)
INCREMENT (18-bit, duplexed, two per machine)
Because of the multiple functional units and register scoreboard,
several operations could be in process simultaneously, further increasing
throughput. Consider the following set of instructions:
SB2 B3+B4 (INCREMENT UNIT) .3 microseconds
LX2 4 (SHIFT UNIT) .3 microseconds
SA0 B5+B6 (INCREMENT UNIT) .3 microseconds
On a serial CPU the total time to execute that sequence would be
.3+.3+.3 microseconds = .9 microseconds. However, since there are
no register conflicts between instructions (no instruction uses a
register used by any other instruction) and because each instruction
can be dispatched to a separate functional unit (remember there are
two increment units), the actual time to execute is
.3 + .1 + .1 = .5 microseconds (instructions can be issued at
a maximum rate of one per .1 microseconds). That would be a rate
of 6MIPS (three instructions executed in .5 microseconds).
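(The same accounting in Python, serial vs. overlapped, using the timings from the example above:)

```python
# Serial vs. overlapped timing for the three-instruction sequence above.
times = [0.3, 0.3, 0.3]            # microseconds, each run to completion
serial = sum(times)                # 0.9 us if executed one at a time
overlapped = 0.3 + 0.1 + 0.1       # first completes; others issue 0.1 us apart
assert abs(serial - 0.9) < 1e-9
assert abs(overlapped - 0.5) < 1e-9
assert round(3 / overlapped) == 6  # three instructions in 0.5 us -> 6 MIPS
```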
Commodity hardware (meaning things you or I could afford to buy and
take home) did not reach these levels of performance until the early-to-mid
1990s. The 6600 out-performed even most minicomputers (including top-of-the-line
VAXes) throughout the 1980s.
6600: Design begun 1960. First shown, running, to the public
in September of 1963, with production deliveries in fall of 1964.
Fastest computer in the world from 1964 to 1969, when it was
superseded by its upward-compatible big brother the 7600. The 7600
was the fastest computer in the world until introduction of the Cray-1
in 1975.
The 6600 architecture is one of the longest-lived on the planet, exceeded
I believe only by the IBM 360. Several Cyber 170 systems exist today running
legacy applications. Those Cyber 170s are code-compatible with the original
6600. That gives a 38-year lifespan for the architecture and instruction
set.
greg
> Main memory was available in sizes up to 256K 60-bit words (about 2MB of
> 8-bit bytes).
Other 6600 "firsts" included the use of a video display operator's console
in place of the old "flashing lights" console.
greg
linpack numbers (including 6600):
http://ap01.physik.uni-greifswald.de/~ftp/bench/linpack.html
bunch of stuff from above
Computer N=100(Mflops)
------------------------------------ --- -------------
Cray T916 (1 proc. 2.2 ns) 522
Hitachi S-3800/180(1 proc 2 ns) 408
Cray-2/4-256 (1 proc. 4.1 ns) 38
IBM RISC Sys/6000-580 (62.5MHz) 38
IBM ES/9000-520 (1 proc. 9 ns) 38
SGI CHALLENGE/Onyx (6.6ns, 2 proc) 38
DEC 4000-610 Alpha AXP(160 MHz) 36
NEC SX-1 36
FPS 510S MCP707 (7 proc. 25 ns) 33
CDC Cyber 2000V 32
Convex C-3430 (3 proc.) 32
NEC SX-1E 32
SGI Indigo2 (R4400/200MHz) 32
Alliant FX/2800-200 (14 proc) 31
IBM RISC Sys/6000-970 (50 MHz) 31
IBM ES/9000-511 VF(1 proc 11ns) 30
DEC 3000-500 Alpha AXP(150 MHz) 30
Alliant FX/2800-200 (12 proc) 29
HP 9000/715 (75 MHz) 29
Sun Sparc 20 90 MHz, (1 proc) 29
Alliant FX/2800 210 (1 proc) 25
ETA 10-P (1 proc. 24 ns) 27
Convex C-3420 (2 proc.) 27
Cray-1S (12.5 ns) 27
DEC 2000-300 Alpha AXP 6.7 ns 26
IBM RISC Sys/6000-950 (42 MHz) 26
SGI CHALLENGE/Onyx (6.6ns, 1 proc) 26
Alliant FX/2800-200 (8 proc) 25
NAS AS/EX 60 VPF 25
HP 9000/750 (66 MHz) 24
IBM ES/9000-340 VF (14.5 ns) 23
Meiko CS2 (1 proc) 24
Fujitsu M1800/20 23
DEC VAX 9000 410VP(1 proc 16 ns) 22
IBM ES/9000-320 VF (1 proc 15 ns) 22
IBM RISC Sys/6000-570 (50 MHz) 22
Multiflow TRACE 28/300 22
Convex C-3220 (2 proc.) 22
Alliant FX/2800-200 (6 proc) 21
Siemens VP400-EX (7 ns) 21
IBM ES/9221-211 (16 ns) 21
FPS Model 522 20
Fujitsu VP-400 20
IBM RISC Sys/6000-530H(33 MHz) 20
Siemens VP200-EX (7 ns) 20
Amdahl 1400 19
Convex C-3410 (1 proc.) 19
IBM ES/9000 Model 260 VF (15 ns) 19
IBM RISC Sys/6000-550L(42 MHz) 19
Cray S-MP/11 (1 proc. 30 ns) 18
Fujitsu VP-200 18
HP 9000/720 (50 MHz) 18
IBM ES/9221-201 (16 ns) 18
NAS AS/EX 50 VPF 18
SGI 4D/480(8 proc) 40MHz 18
Siemens VP100-EX (7 ns) 18
Sun 670MP Ross Hypersparc(55Mhz) 18
Alliant FX/2800-200 (4 proc) 17
Amdahl 1100 17
CDC CYBER 205 (4-pipe) 17
CDC CYBER 205 (2-pipe) 17
Convex C-3210 (1 proc.) 17
Convex C-210 (1 proc.) 17
Cray XMS (55 ns) 17
Hitachi S-810/20 17
IBM ES/9000 Model 210 VF (15 ns) 17
Siemens VP50-EX (7 ns) 17
Multiflow TRACE 14/300 17
Hitachi S-810/10 16
IBM 3090/180J VF (1 proc, 14.5 ns) 16
Fujitsu VP-100 16
Amdahl 500 16
Hitachi M680H/vector 16
SGI Crimson(1 proc 50 MHz R4000) 16
FPS Model 511 15
Hitachi M680H 15
IBM RISC Sys/6000-930 (25 MHz) 15
Kendall Square (1 proc) 15
NAS AS/EX 60 15
SGI 4D/440(4 proc) 40MHz 15
Siemens H120F 15
Cydrome CYDRA 5 14
Fujitsu VP-50 14
IBM ES/9000 Model 190 VF(15 ns) 14
IBM POWERPC 250 (66 MHz) 13
IBM 3090/180E VF 13
SGI 4D/340(4 proc) 33MHz 13
CDC CYBER 990E 12
Cray-1S (12.5 ns, 1983 run) 12
Gateway 2000 P5-100XL 12
IBM RISC Sys/6000-520H(25 MHz) 12
SGI Indigo 4000 50MHz 12
Stardent 3040 12
CDC 4680InfoServer (60 MHz) 11
Cray S-MP/MCP101 (1 proc. 25 ns) 11
FPS 510S MCP101 (1 proc. 25 ns) 11
IBM ES/9000 Model 340 11
Meiko Comp. Surface (1 proc) 11
Gateway 2000 P5-90(90 MHz Pentium) 11
SGI Power Series 50MHz R4000 11
Stardent 3020 11
Sperry 1100/90 ext w/ISP 11
Multiflow TRACE 7/300 11
DEC VAX 6000/410 (1 proc) 1.2
ELXSI 6420 1.2
Gateway 2000/Micronics 486DX/33 1.2
Gateway Pentium (66MHz) 1.2
IBM ES/9000 Model 120 1.2
IBM 370/168 Fast Mult 1.2
IBM 4381 90E 1.2
IBM 4381-13 1.2
MIPS M/800 (12.5MHz) 1.2
Prime P6350 1.2
Siemens 7580-E 1.2
Amdahl 470 V/6 1.1
Compaq Deskpro 486/33l-120 w/487 1.1
SUN 4/260 1.1
ES1066 (1 proc. 80 ns Russian) 1.0
CDC CYBER 180-840 .99
Solbourne .98
IBM 4381-22 .97
IBM 4381 MG2 .96
ICL 3980 w/FPU .93
IBM-486 33MHz .94
Siemens 7860E .92
Concurrent 3280XP .87
MIPS M800 w/R2010 FP .87
Gould PN 9005 .87
VAXstation 3100-76 .85
IBM 370/165 Fast Mult .77
Prime P9955II .72
DEC VAX 8530 .73
HP 9000 Series 850 .71
HP/Apollo DN4500 (68030 + FPA) .60
Mentor Graphics Computer .60
MIPS M/500 ( 8.3MHz) .60
Data General MV/20000 .59
IBM 9377-80 .58
Sperry 1100/80 w/SAM .58
CDC CYBER 930-31 .58
Russian PS-2100 .57
Gateway 486DX-2 (66MHz) .56
Harris H1200 .56
HP/Apollo DN4500 (68030) .55
Harris HCX-9 .50
Pyramid 9810 .50
HP 9000 Series 840 .49
DEC VAX 8600 .48
Harris HCX-7 w/fpp .48
CDC 6600 .48
IBM 4381-21 .47
SUN-3/260 + FPA .46
CDC CYBER 170-835 .44
HP 9000 Series 840 .43
IBM RT 135 .42
Harris H1000 .41
microVAX 3200/3500/3600 .41
Apple Macintosh IIfx .41
Apollo DN5xxT FPX .40
microVAX 3200/3500/3600 .40
IBM 9370-60 .40
Sun-3/160 + FPA .40
Prime P9755 .40
Ridge 3200 Model 90 .39
IBM 4381-11 .39
Gould 32/9705 mult acc .39
NORSK DATA ND-570/2 .38
Sperry 1100/80 .38
Apple Mac IIfx .37
CDC CYBER 930-11 .37
Sequent Symmetry (386 w/fpa) .37
CONCEPT 32/8750 .36
Celerity C1230 .36
IBM RT PC 6150/115 fpa2 .36
IBM 9373-30 .36
CDC 6600 .36
IBM 370/158 .22
IBM PS/2-70 (16 MHz) .12
IBM AT w/80287 .012
IBM PC w/8087 .012
IBM PC w/8087 .0069
Apple Mac II .0064
Atari ST .0051
Apple Macintosh .0038
don't know about campus ... but balcones research had a cray and my
wife and I managed to donate a bunch of HYPERchannel equipment to them
for interconnecting various stuff.
thornton after working on 6600 left cdc and founded NSC and built
HYPERChannels.
random past stuff
http://www.garlic.com/~lynn/99.html#119 Computer, supercomputers & related
http://www.garlic.com/~lynn/2001.html#19 Disk caching and file systems. Disk history...people forget
http://www.garlic.com/~lynn/2001.html#20 Disk caching and file systems. Disk history...people forget
Anne & Lynn Wheeler wrote:
> Larry__Weiss <l...@airmail.net> writes:
> >
> > Does anyone know when UT Austin retired their CDC-6600 ?
> >
>
> don't know about campus ... but balcones research had a cray and my
> wife and I managed to donate a bunch of HYPERchannel equipment to them
> for interconnecting various stuff.
>
> thornton after working on 6600 left cdc and founded NSC and built
> HYPERChannels.
In between the 6600 and his departure, he did the bulk of the work on the
6400 CPU. He then ran a research group somewhat in competition with Chippewa
Falls, where they did the initial work on the STAR pipeline project that
eventually turned into the CYBER 200 series.
Joe Yuska
>Kent Olsen wrote:
>> > There were Peripheral Processors which did the I/O. There were 10 or 20 of
>> > them, each of which had its own memory (12 bits of addressing?) and full
>> > access to the central processor's memory.
>>
>> Most 6000s were configured with a full complement of 20 PPs. Certain
>> pricing options allowed for the purchase of machines with less, but
>> throughput suffered pretty badly. Many were field upgraded to 20 PPs.
>
>I'll take serious issue with this. Virtually all of the 6000's and Cyber
>70's I worked on during a 17-year period had only 10 ppu. The extra 10-ppu
>option was expensive and required a second cabinet. Most of the 20 ppu
>configurations I was aware of were connected to exotic type peripherals,
>some of which you could be killed for knowing about.
And I take issue with that.
Of the dozen or so systems I worked on, only the 6400 and one Cyber 73
had just 10 PPs. All of the rest were either ordered with more, or upgraded
shortly after delivery. The other Cyber 73 had 17 PPs, the Cyber 74 had
14 and then 17, and all the rest were 170s or later, all with at
least 14 PPs. None had the full complement, though.
>> The PP was a 12 bit processor. It had 4095.5 words. (Inside joke.
>> It had 4096 words, but word 7777 was rarely used as it wasn't directly
>> addressable.)
It's addressable via instructions which do not involve the 12-bit
incrementer... Limits usefulness a bit.
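The 7777 quirk falls out of one's-complement arithmetic, where 7777 (octal) is negative zero. Here is a toy Python model of a 12-bit end-around-carry adder (my own sketch for illustration, not CDC code), showing why 7777 behaves like a second zero in any address formed through the incrementer:

```python
def add12(a, b):
    """12-bit one's-complement add with end-around carry (toy model)."""
    s = a + b
    if s > 0o7777:            # carry out of bit 11 wraps back into bit 0
        s = (s & 0o7777) + 1
    return s

# 7777 is "negative zero": adding it to any nonzero value returns that
# value unchanged, so the incrementer can't usefully distinguish an
# address of 7777 from 0000.
print(oct(add12(0o7777, 0o0123)))   # -> 0o123
print(oct(add12(0o7000, 0o1000)))   # -> 0o1  (end-around carry)
```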
--
Jeff Woolsey {woolsey,jlw}@{jlw,jxh}.com
"I didn't get a 'Harrumph!' out of _that_ guy." -Gov Le Petomaine
"Delete! Delete! OK!" -Dr. Bronner on disk space management
"A toy robot!!!!" -unlucky Japanese scientist
Oh, I just love nitpicking on my favorite subject:
>Some representative CPU instructions and timing
>
> SXi Xj (Set Xi to equal Xj.) .3 microseconds
Well, Set Xi to the lower 18 bits of Xj sign-extended.
> SXi Xj+Xk (60-bit floating point sum Xj and Xk to Xi) .4 microseconds
> SXi Xj*Xk (60-bit floating product Xj and Xk to Xi) 1 microsecond
> SXi Xj/Xk (60-bit floating divide Xj and Xk to Xi) 2.9 microseconds
The rest of these you can get away with through the miracle of OPDEF
and make them the same as the FX and/or RX instructions (pick one).
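The sign extension that correction describes is easy to model. This is my own illustrative Python, assuming the machine's one's-complement convention where bit 17 is the sign of the 18-bit quantity:

```python
MASK18 = 0o777777              # the low 18 bits
MASK60 = (1 << 60) - 1         # a 60-bit X register

def set_x_from_18(v):
    """Copy an 18-bit quantity into a 60-bit register, sign-extended."""
    v &= MASK18
    if v & 0o400000:           # bit 17 set: the value is negative
        v |= MASK60 & ~MASK18  # fill the upper 42 bits with ones
    return v

print(oct(set_x_from_18(0o000005)))   # -> 0o5 (positive, upper bits zero)
print(oct(set_x_from_18(0o777772)))   # negative: upper 42 bits all ones
```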
>The 6600 architecture is one of the longest-lived on the planet, exceeded
>I believe only by the IBM 360. Several Cyber 170 systems exist today running
>legacy applications. Those Cyber 170s are code-compatible with the original
>6600. That gives a 38-year lifespan
So far.
>for the architecture and instruction
>set.
>Other 6600 "firsts" included the use of a video display operator's console
"Video" is a smidge inaccurate. It's a very glorified oscilloscope
(Tek scopes of the day had similar calligraphic characters for
readouts).
> >Other 6600 "firsts" included the use of a video display operator's console
>
> "Video" is a smidge inaccurate. It's a very glorified oscilloscope
> (Tek scopes of the day had similar calligraphic characters for
> readouts).
At the university I went to in the early 80's there was a Tek terminal
on the VAX. It was a pain because when your text output got to the
bottom of the screen it would start again from the first line and
overprint (without erasing) whatever was there previously. So you had
to hit some sort of "clear screen" key all the time. Also, it had some
sort of very long time-constant phosphor, because the text (and
graphics) would fade away after a few minutes.
-- Bruce
The 6000 Series Introduction & Peripheral Processor Training Manual
shows space for the second set of 10 PPs inside the 6600 mainframe
in:
Wing 3 Chassis 9 between (Core) Banks 11 & 12;
Wing 3 Chassis 10 between (Core) Banks 15 & 16;
Wing 3 Chassis 12 between the two display controllers;
Wing 1 Chassis 4 between (Core) Banks 5 & 6;
Wing 4 Chassis 13 between (Core) Banks 21 & 22;
Wing 4 Chassis 14 between (Core) Banks 25 & 26;
Wing 4 Chassis 15 between (Core) Banks 31 & 32;
Wing 4 Chassis 16 between (Core) Banks 35 & 36.
Perhaps those eyes-only systems to which you refer had
even more exotic hardware inside the cruciform mainframe.
Some other eyes-only sites used CDC 1700s, with the 6600 running
the Time-Critical OS Monitor as the operating system...
...or so I hear...
;)
-dq
The 2000 and 990.
When the 990 came out it was the fastest scalar CPU in the world.
(The scalar portion of the Cray and 205 CPUs notwithstanding.) It
could also perform I/O with anyone. The Mormon church bought a pair
of them to build their genealogy database after the 990 beat a
comparable 390/3090 in a database benchmark.
The 2000 was the fastest commercial machine that CDC ever built. Its
CPU performance was about twice the 990's.
> > Most 6000s were configured with a full complement of 20 PPs. Certain
> > pricing options allowed for the purchase of machines with less, but
> > throughput suffered pretty badly. Many were field upgraded to 20 PPs.
>
> 20? I thought the full complement was ten (actually a single PPU with
> ten register sets).
There seems to be some disagreement on the board about this.
A bank of up to 10 PPs was part of an "I/O subsystem" that consisted
of the PPs and channels. If you had one bank you had up to 10 PPs and
the bank's full complement of channels. If you had two banks you got
the second set of channels and up to 10 additional PPs.
IIRC, 6000s came with 7, 10, 14, 17, or 20 PPs. I seem to remember
that a 7 PP system was delivered, but I don't recall the details (does
anyone else?). As the 6000s became more popular 10 PPs weren't
enough. I worked on at least a dozen sites with 6000s or Cyber 70s
(basically, the same thing with slightly different packaging) and
recall only one of them with less than 17 PPs. Of course, most of
these customers were military or education. Commercial accounts may
have behaved differently due to costs.
Kent
The video monitors on the CDC-6600 were very functional. I remember seeing
chessboards displayed (UT Austin had their 6600 console on public display
behind a glass wall in the computation center).
- LarryW
That was probably a TEK 4014 "storage tube" based terminal.
The CDC-6600's console displays were not terminals of that sort.
They were refreshed calligraphic CRT's. Very nice to have at that time.
Does anyone have a picture of a CDC-6600 console with an image displayed?
- LarryW
>>
> That was probably a TEK 4014 "storage tube" based terminal.
> The CDC-6600's console displays were not terminals of that sort.
> They were refreshed calligraphic CRT's. Very nice to have at that time.
> Does anyone have a picture of a CDC-6600 console with an image displayed?
I have a home movie of "pac" running. Not a 6600, maybe a 6400, but more
likely a single tube machine.
I had forgotten about that internal storage. My main familiarity with
extra PPU configurations involved a 66?? that contained 10 ppu, 12
channels and some amount of central memory. My demonstrably faulty
memory tells me it connected through the ECS coupler, but...
I know that there were internal CDC documents as well as VIM documents
that gave configurations for the installed base of all those machines
that could be talked about. This would be a good time for a query about
old documents such as these.
Has anyone saved the old VIM stuff?
Joe Yuska
I don't know if the character generation is the same but I believe this is
the same idea:
http://www.cathodecorner.com/
Lee Courtney
Engineering Manager - Development Environments and Tools
MontaVista Software
1237 East Arques Avenue
Sunnyvale, California 94085
(408) 328-9238 voice
(408) 328-9204 fax
"Powering the Embedded Revolution"
For the time, the console was outstanding. While most others had
blinkenlights and typewriters, the 6000's had two tubes that displayed
in realtime more than you ever wanted to know about system operation. A
good systems type could see instantly how busy the system was, what
resources were critical, what jobs were sucking up these resources, and
even get a good idea about how to retune.
Another display program, DIS, was probably the best program debugger
available bar none at the time. Again, you could follow program
execution in realtime, effect program control in just about any way
imaginable.
One of the earliest (some say second) display programs written was BAT,
a baseball game. Many more followed, including classic spacewar and
others.
The chessboards were probably from CHESS 3.0 or later, done by some people
at Northwestern; it was the champion of the first computer chess championship.
No, sorry, but that's not at all like the images that the CDC-6600's console
could display. Read the other posts in this thread about the variety of
graphics applications that were written for display on the 6600's console
displays.
I hope someone posts a pointer to an actual photograph of a working console.
The most similar thing to a CDC-6600 console display that everyone has seen
(at least in the movies) is the older single-color, round air-traffic
controller display.
- LarryW
Those are Tektronix model 40xx series and weren't really meant
for extended use as text terminals. They were "storage" display
devices which should have had the screen reflooded (erased) every
20 minutes or so for longevity. I wish I knew where I could find
a 4014 in pristine condition..
Anyway, not the same as the calligraphic scopes being talked about.
>Perhaps in some later Cybers. I don't recall. But for 6000 class machines
>the memory modules ran at 1 mike, with a ten-way interleave, giving a max
>rate of 100 ns.
>
The CDC 6000 series used 32-way interleave, 1 microsecond major cycle (memory
cycle time) and 100 ns minor cycle (instruction execution rate).
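The arithmetic behind those figures is simple: with a 1 microsecond bank-busy cycle and N independent banks, sequential references can start every 1000/N nanoseconds. A quick sanity check of my own, using both the ten-way figure quoted above and the 32-way figure:

```python
major_cycle_ns = 1000   # one core bank is busy for 1 microsecond per access

for banks in (10, 32):
    # consecutive addresses rotate through the banks, so a new reference
    # can start as soon as the next bank in sequence is free
    print(banks, "banks ->", major_cycle_ns / banks, "ns between references")
```

With ten banks the memory exactly matches the 100 ns minor cycle; more interleave mainly helps when multiple requesters (CPU, PPs, ECS) contend for the banks.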
>I'll take serious issue with this. Virtually all of the 6000's and Cyber
>70's I worked on during a 17-year period had only 10 ppu. The extra 10-ppu
>option was expensive and required a second cabinet. Most of the 20 ppu
>configurations I was aware of were connected to exotic type peripherals,
>some of which you could be killed for knowing about.
>
>I did extensive benchmarking of these machines, and in most cases you would
>saturate CPU and memory long before you would run out of PPU.
>
>There were of course some exceptions, like one configuration with 32 tape
>drives.
>
I would be surprised if there was a 10 PPU add on option, at least for the
6500. Michigan State would have probably bought it, if available. When the
Cyber 170/750 came in we could only use 10 of the 20 PP's, because our
customized OS needed to be updated. When the additional 10 PPs were usable,
it made a significant difference. One reason may have been because the OS
dedicated (I think) 4 PPs to certain tasks (monitor, Argus (terminal support),
Console, and something else).
- Tim
Sure it is. The characters were drawn via analog circuitry steering the
electron beam. Certain o'scopes (such as the Tek 7904) draw characters the
same way.
The CDC 'scopes could draw characters in either 64 chars/line mode, 32 chars/line,
or 16 chars/line. There was also a 'dot' mode for doing graphics. The console was
directly connected to one of the I/O channels.
DSD, the Dynamic System Display PP program, directly positioned the beam and
wrote characters by sending X/Y coordinates and text down the channel. DSD
would have to resend the screen many times per second to keep the phosphors
bright and avoid flickering.
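The refresh idea can be sketched in a few lines. This is a hypothetical illustration only: the display list, the "POS"/"TXT" word encoding, and the function names below are invented, not the real 6612 channel protocol.

```python
import time

# Invented example screen contents: (x, y, text) entries to be redrawn.
display_list = [
    (0o0100, 0o0700, "JOB  STATUS"),
    (0o0100, 0o0650, "CP   00.512 SEC"),
]

def refresh(channel_write, frames, fps=50):
    """Resend the whole screen every frame to keep the phosphor lit."""
    for _ in range(frames):
        for x, y, text in display_list:
            channel_write(("POS", x, y))   # steer the beam to (x, y)
            channel_write(("TXT", text))   # stroke out the characters
        time.sleep(1.0 / fps)              # nothing persists between frames

sent = []
refresh(sent.append, frames=2, fps=1000)
print(len(sent))   # -> 8: two entries, two channel words each, two frames
```

The key point is the outer loop: unlike a raster frame buffer, a calligraphic display holds no image at all, so the PP must redraw everything continuously or the screen goes dark.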
It should also be noted that the '026' PP program was an early example of
a full screen text editor. (Source code dates from 1967.) Combined with
the DIS 'shell', it was a very powerful programming/debugging environment.
The unfortunate thing was that end users on typical low-speed async terminals
never got to experience some of the special joys of using these systems.
It wasn't until the early '80s that full screen editors became widely available
on NOS.
Walt
-...-
Walt Spector
(w6ws at earthlink dot net)
Yeah, but Jack didn't write the LINPACK package until the late 1970s.
And the benchmark wasn't until the early 80s. For the comparison to be
meaningful the 6600 had to be taken in 1964.
I think LLNL had decommissioned its last 6600 before 1978 when
Jack published his ACM SIGNUM Paper although I had run on a 6400 in 1977
so 6600s were clearly still running in the late 1970s.
Be warned: Lynn and the other IBM folk (I like Lynn) have a tendency to
quote 32-bit benchmarks against 60- and 64-bit CDC and Cray benchmarks,
and this is why Bailey's first rule of benchmarking is the way it is.
Not answering specifically for reactor codes, I can assure you that AEC-era
lab codes exist and are running on ASCI machines, just no longer in
Fortran or CVC (to my surprise some one actually sat down and rewrote
them in C). I'll know a bit more next week when I go over to LLNL.
Gawd, we have to live with the MFLOPS albatross to this day.
It's a terrible holdover from the era when floating point operations
took significantly longer than a clock cycle. Not even Knuth's
empirical study of Fortran programs could show managers the importance
of other aspects of system balance like memory bandwidth.
And it's worse as people moved with 2-D codes to 3-D codes and higher Ds.
I appreciate the machine, but the political measurement of MFLOPS is
possibly one of the most harmful in computing done in the 60s.
>A table of the achieved megaflop rates for various CDC machines
>with different release levels of OS and Fortran compiler may be found
>at
>
>http://pages.sbcglobal.net/couperusj/Megaflops.html
What was a Cyber 2000?
Unfortunately, the FTN compiler got too old for its time.
Maybe we can dredge Peglar or McCalpin into this.
The Scope Clock (posted elsewhere) is close, but the characters are far
too precise, and they're too big (biggest on console is 16 chars per line).
While these are 170s, the concept is identical.
The last four photos on this page are representative. Small (64/line) and
medium (32/line) characters are displayed.
http://berlin.ccc.de/~hans/vcfe2/index_4.htm
GIF! GIF! er... MPEG! MPEG!
I think this depended on the OS and the management of the site.
It was still a batch versus interactive world back then.
I'd prefer an IBM SPF site to a batch-oriented CDC shop, just as I would
have likely preferred a Livermore CDC machine over an IBM batch shop.
In article <a46c04a2.02070...@posting.google.com>,
Kent Olsen <ke...@nettally.com> wrote:
>They were so much easier to use than VM, MVS, CICS, etc... Depending
>on the application, the 6000s outran the IBMs, sometimes the IBMs
>outran the 6000s. They were definitely the two biggest horses in the
>race.
Those are mostly 370 terms.
You have to go back to the MVT era on 360s.
>> To copy a file, you used the copy command. One-liner. To make a file bigger,
>> you wrote to it. And so on. None of this IEBGENER/IEBCOPY foolishness.
>> Files were just a series of bytes. (OK, 6-bit bytes, and there were those EOR
>> marks, and oddball line terminators, but still...)
>
>Even the early DOS operating systems understood that to COPY a file
>meant simply that the contents of one file should be duplicated under
>a different name. I wonder why it took IBM so long to figure this
>out? :)
IBM had a DOS OS on 360s.
Disk was expensive back then. In IBM circles, there was great pride in
knowing exactly how many bytes fit on a 2314 (or 2311, etc.) or 3330
disk track. PC DOS and CDC systems had internal fragmentation, even though
you could get better machine I/O efficiency reading each track or cylinder.
It was the same logic, of that era, which made efficient use of punch
cards like reading every single Hollerith field.
Glad that era is over.
Thank God I don't remember how many bytes that was, but I might have it
in a notebook of that era at home.
I was trying to distinguish what the consoles displayed from video. Video
is a moving raster-scan image. The consoles are definitely not raster-scan.
(These days, raster scan can emulate vectors pretty well; the MAME arcade-game
emulator does it.)
Didn't PDP-1 do that?
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
Southwestern NM Regional Science and Engr Fair: http://www.nmsu.edu/~scifair
> > Which two, and when were they built? FWIW, I'm very surprised that
> > *any* machine ever built by CDC could go that fast.
>
> The 2000 and 990.
Wow -- I'd thought they'd pretty much abandoned that market by 1980...
> GIF! GIF! er... MPEG! MPEG!
I'll start looking for it tonight....
Jitze can correct me if I'm wrong, but I thought the 2000
was the final design iteration of the 6000 Series architecture.
Not many built, I'm guessing...
Jitze?
-dq
OK, I've never seen the CDC-6600 console except in pictures, but a
"calligraphic" display is one in which the graphics and characters are
displayed by moving the electron beam around the face of the screen in
a manner similar to a plotter. Maintaining a steady image requires
continuously repeating the pattern of movement to refresh the screen.
(The Tektronix displays eliminated the continuous-refresh requirement
by using a storage tube that stored the image electrostatically on the
face of the screen itself, although the image would gradually
deteriorate over time and had to be redrawn occasionally.)
IBM had a similar device, used in early CAD applications, called the
2250 Vector Graphic Display. It had a pretty good sized screen with a
square aspect ratio, and a small buffer (about 4K, IIRC) for storing
"graphic orders"... a kind of binary command language for producing
lines, arcs, characters, and such. The CRT had a long-persistence
phosphor, like a radar tube, so that it didn't flicker too badly when
it was displaying a large amount of graphic and text data. OS/360
also supported the device as an operator's console, but only the more
"upscale" customers used one of these instead of the usual 1052
Selectric typewriter console.
The earlier support for the 2250 as a console, in OS/360, was actually
cooler than the "device independent" console software later built into
the system to support other display-type devices. One of the neat
features was a "Help" facility that could be selected via light-pen.
It displayed syntax diagrams graphically for the various console
operator commands.
The basic technology, I gather, was drawn from IBM's work on the
displays for the SAGE computer, which was also the ancestor of the
systems used in Air Traffic Control. Both SAGE and the 2250 were
built at IBM's Kingston, NY plant.
--
Russ Holsclaw
One interesting historical point... The "6000 Introduction and
Peripheral Processor Training Manual" has a little propaganda
at the beginning... says that the AEC had been needing a machine
to crunch those "glowing" numbers, and attempts to do so kept
falling short. All of those designs had been based on the use
of the germanium transistor, and it had been pushed to the limits
of its switching speed. CDC (read: Cray?) thought an innovative
design might get around that limitation, namely, the emergent
parallelism found in the use of the PPs for I/O and the manifold
functional units...
Then, just in time, came the silicon transistor, which the manual
describes as "frosting on the cake".
-dq
> I would be surprised if there was a 10 PPU add on option, at least for the
> 6500. Michigan State would have probably bought it, if available. When the
> Cyber 170/750 came in we could only use 10 of the 20 PP's, because our
> customized OS needed to be updated. When the additional 10 PPs were usable,
> it made a significant difference. One reason may have been because the OS
> dedicated (I think) 4 PPs to certain tasks (monitor, Argus (terminal support),
> Console, and something else).
You *would* post this on a day I'm not carrying the 6000 Series
Hardware refman with me, so I can't quote the option number, but
yeah, it was a "standard option". If you were running the latest
OS from CDC, they had the mods to make it work. If not, you had
to find a site who was running something close to the same PSR
level as you were, and get the code from them. We got ours from
a Navy site, the name of which escapes me...
-dq
As Jeff posted over in folklore, try this link:
http://berlin.ccc.de/~hans/vcfe2/index_4.htm
It's not the dual-scope 6612 (DD60), but the later CC545...
I've got a halftone pic of one scope on a 6612 showing a plot
of some function from a FORTRAN program, will dig it up tonight...
-dq
Yes, it did. Does.
-dq
I don't have proof anymore but I believe the RAIN4 numbers were for 32
bit and the "RAIN" numbers were for "double precision" 64bit.
"fast double precision" was introduced for 168-3 (not on initial 168-1
machines) ... and so the 9.1 secs should be for RAIN ... as was the
6.77 secs for the 91.
the interesting numbers are the 3031 and 158 numbers. The processor
engine in the 3031 and 158 were the same; however in the case of the
158 .... there were "integrated channels" ... aka there was two sets
of microcode running on the same 158 processor engine .... the
microcode that implemented the 370 instruction set ... and the
microcode that implemented the I/O support ... and the processor
engine basically had to time-slice between the two sets of microcode.
For the 3031, there were two "158" processor engines ... one processor
engine dedicated to the 370 function (i.e. the 3031) and a second
"158" processor engine (i.e. the "channel director") that implemented
all the I/O function outboard.
The dates for some of the machines (note 4341 and 3031 were about the same time):
CDC 6600 63-08 64-09 LARGE SCIENTIFIC PROCESSOR
IBM S/360-67 65-08 66-06 10 MOD 65+DAT; 1ST IBM VIRTUAL MEMORY
IBM S/360-91 66-01 67-11 22 VERY LARGE CPU; PIPELINED
AMH=AMDAHL 70-10 AMDAHL CORP. STARTS BUSINESS
IBM S/370 ARCH. 70-06 71-02 08 EXTENDED (REL. MINOR) VERSION OF S/360
IBM S/370-145 70-09 71-08 11 MEDIUM S/370 - BIPOLAR MEMORY - VS READY
IBM S/370-195 71-07 73-05 22 V. LARGE S/370 VERS. OF 360-195, FEW SOLD
Intel, Hoff 71 Invention of microprocessor
Intel DRAM 73 4Kbit DRAM Chip
IBM 168-3 75-03 76-06 15 IMPROVED MOD 168
IBM 3031 77-10 78-03 05 LARGE S/370+EF INSTRUCTIONS
and to repeat the numbers for rain/rain4:
158 3031 4341
Rain 45.64 secs 37.03 secs 36.21 secs
Rain4 43.90 secs 36.61 secs 36.13 secs
also times approx;
145 168-3 91
145 secs. 9.1 secs 6.77 secs
rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in
35.77 secs.
--
Anne & Lynn Wheeler | ly...@garlic.com - http://www.garlic.com/~lynn/
Sure! They certainly were not raster-scan based. The price of memory
had to come down many orders of magnitude before that technology was
feasible.
And I suppose you are correct that a person not acquainted with this older
gear would take the wrong meaning from the simple word "video". That's
why I'm eager to see images of anyone's photographs (or movies!) of that
'scope in action.
- LarryW
My first thought would have been an ETA architecture renaming.
>Jitze?
Actually a far more interesting machine to me was the Cyberplus or AFP.
I had a manual (alas tossed it before working with the Museum).
I also spoke with Bill Ragsdale who worked on the one in Monterey.
I need to try to find one.
there were a number of 2250 "models" ... a 2250m1 direct channel
attach with its own controller, a 2250m4 .... which came with its own
1130 and some others.
in the late '60s, somebody at the science center ported spacewars from
pdp to the 1130/2250m4 (my kids played it in the mid '70s).
lincoln labs had one or more 2250m1 attached to 360/67 and somebody
there wrote fortran graphics package for CMS to drive the screen.
the university i was at also had 2250m1 .... and I hacked the CMS
editor with the 2250m1 support code from lincoln labs to generate a
full screen editor ... circa fall '68.
The DOE, the DTRA, and various other three-letter agencies (lurking in the
group) STILL need such machines, and attempts still fall short. Something
about computational complexity, even in the face of Moore's law, which
sounds so impressive to people. 8^)
>All of those designs had been based on the use
>of the germanium transistor, and it had been pushed to the limits
>of its switching speed. CDC (read: Cray?) throught an innovative
>design might get around that limitation, namely, the emergent
>parallelism found in the use of the PPs for I/O and the manifold
>functional units...
>
>Then, just in time, came the silicon transistor, which the manual
>describes as "frosting on the cake".
Cute:
The golden days of the discrete component era (I can say that because I
am more in software).
Parallelism is on average at best an O(n) solution to performance problems.
There are interesting technologies on the horizon. Hope some pan out.
4 sounds like a good guess from that era.
Gee, I tend to think of Double Precision as meaning 128 bits...... ;^)
But then I am seeing this in the cdc group, noticing the cross post.
>"fast double precision" was introduced for 168-3 (not on initial 168-1
>machines) ... and so the 9.1 secs should be for RAIN ... as was the
>6.77 secos for the 91.
It's unfortunate that few, if any, 370/168s are running in an open area.
Just as Kahan researched interesting floating point behavior, I'd be
curious about certain speed implications. I talk on odd days with
Alan Karp about this.
Too much marketeering. It's unfortunate. It would be fun to try to
deduce these behaviors, but I've learned all too well that even the guys
who built these machines weren't certain.
>rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in
>35.77 secs.
Probably LLL, but LRL has gone bye-bye by way of terminology.
It's their 50th this year anyways.
Interesting web pages.
There's a typo on:
http://pages.sbcglobal.net/couperusj/215Moffett.html
They were P-3s, not P-37s.
My building is just to the left.
Also cropped to the left was the old Cray office on Ross drive.
I had seen the .36 number for the 6600 but never the .48 one.
I'm not sure I really believe either. Years ago I got the
chance to run some old FORTRAN code on a VAX 8600 -- code that was
brought over from the 6600. The 8600 ran it about half as
fast as the 6600 had.
Also, IIRC, the 7600 was roughly 5x the 6600 in floating point
performance and the Cray-1 was about 2x a 7600. Working backwards
from the Cray-1S numbers, the 6600 should have come in around
1.2Mflops.
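For what it's worth, that chain of ratios lines up with the table's 1983-run Cray-1S entry (12 Mflops) rather than the 27 Mflops entry, which supports the compiler-technology point. A quick sketch of the arithmetic:

```python
cray_1s = 12.0        # Mflops, "Cray-1S (12.5 ns, 1983 run)" from the table
m7600 = cray_1s / 2   # the Cray-1 was about 2x a 7600
m6600 = m7600 / 5     # the 7600 was roughly 5x a 6600

print(m6600)          # prints 1.2 (Mflops)
```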
Are we seeing a difference in compiler technology? The early
6600 FORTRAN compilers (RUN) were pretty poor while the
last ones (FTN4, etc.) kicked ass.
greg
Quantum computers? I think the same powers that keep down fusion
research that doesn't involve tokamaks and laser confinement
systems might quash that, too.
-dq
The 1 microsecond was a 'nominal' timing spec. on the 6600, 'read' cycle was
circa 800ns, 'write' was more like 1.7 microsec.
The 'stunt box' coordinated buffering/sequencing/serialization of actual
memory access, and CPU references. It was capable of starting to process a
reference every 100ns, thus matching 'scoreboard' instruction dispatch rate.
Memory was _5-way_ ported. Two CPU paths (one for each fetch/store functional
unit path), and, as I recall, two PP paths (one for each barrel), and one for
ECS moves.
I believe the 32-way interleave was used only in a 'max' configuration, and
that smaller configs used 16-way.
"Douglas H. Quebbeman" <dqueb...@ixnayamspayacm.org> wrote in message
news:3d21fe34$1...@news.iglou.com... <Jitze can correct me if I'm wrong, but
I thought the 2000 was the final design iteration of the 6000 Series
architecture. Not many built, I'm guessing...>
I believe the 2000's were the final iteration of the CYBER 180 Series
Architecture, but with no 170 State. Ran NOS/VE exclusively.
Ken Hunter
I believe Manager had a dedicated PP (MAN?), and 1SP was pretty close to
dedicated.
Ken Hunter
Yeah, I remember encountering that same thing on a Tek terminal hooked to
the CYBER 170/750 at Michigan State. Also in the early '80s.
Ken Hunter
That 7.5 characters/word would have been when using actual 8 bit ASCII
characters, packed as tightly into 60-bit words as possible. Probably not
something which was frequently done.
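A minimal sketch of what that 7.5-characters-per-word packing might have looked like, with a host `uint64_t` standing in for each 60-bit CDC word (only the low 60 bits used). The bit ordering here is an assumption for illustration, not taken from any CDC manual:

```c
#include <stdint.h>

/* Pack 15 eight-bit characters into a pair of 60-bit words
 * (120 bits total) -- the "7.5 characters per word" layout.
 * Illustrative reconstruction only: character 7 straddles the
 * word boundary, 4 bits in the first word, 4 in the second. */
void pack_7_5(const unsigned char c[15], uint64_t w[2])
{
    uint64_t bits[2] = {0, 0};
    int bitpos = 0;                 /* bit offset from start of word pair */
    for (int i = 0; i < 15; i++) {
        int word = bitpos / 60;     /* which 60-bit word          */
        int off  = bitpos % 60;     /* offset within that word    */
        int room = 60 - off;        /* bits left in this word     */
        if (room >= 8) {
            bits[word] |= (uint64_t)c[i] << (room - 8);
        } else {                    /* character straddles the boundary */
            bits[word]     |= (uint64_t)c[i] >> (8 - room);
            bits[word + 1] |= ((uint64_t)c[i] & ((1u << (8 - room)) - 1))
                              << (60 - (8 - room));
        }
        bitpos += 8;
    }
    w[0] = bits[0];
    w[1] = bits[1];
}
```

The straddle case is what made the format awkward to handle in PP code, and presumably why it was reserved for 'stranger' tape I/O rather than general use.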
"Eric S. Harris" <eric_ha...@mindspring.com> wrote in message
news:3D1F56EE...@mindspring.com... <The address space was limited to
18 bits (17 for users), and I think the mainframe memory could be ordered
(or upgraded to) that size. Later machines in the series (or a later series
compatible with the CDC 6600) could go higher, but the 17-bit limit on user
programs remained.>
Actually, there was an add on product for NOS which raised the user limit up
to 18-bits as well called Memory Usage Enhancement. Still available,
actually, though it's owned by General Dynamics rather than Syntegra. Still
used by certain DoD sites.
Ken Hunter
Read was destructive (required a write-back) and thus slower than write.
greg
I heard Jeff Ullman pan them a couple of months ago.
I don't know enough about them.
Personally, I hold hope for optical systems. A friend built a system at
Bell Labs which could count to 15 using photons, so the idea has merit.
I've also seen optical correlators used for FFTs on radar.
>I think the same powers that keep down fusion
>research that doesn't involve tokamaks and laser confinement
>systems might quash that, too.
Well I'm going over to see the big lasers at LLNL next week.
There is uncontrolled fusion. That has been shown to work because of 6600s.
Storage tubes.
While popular with folks around Portland (I'm in Portland for a week of
meetings in 2 weeks), Tektronix was the bane of my existence on the
Culler-Fried system. It went out with the punched card.
Please edit subject line appropriately.
The 7.5 chars/word mode was quite frequently used when dealing with
9-track 'stranger' tapes.
CDC definitely believed in diversity. When using ASCII with NOS, you had
your choice of:
1.) 8-in-12 - For printers and plotters
2.) Packed 7.5 chars to a word - For 9-track 'stranger' tapes
3.) 6/12 display code - For tty I/O
4.) 6-bit display code - translated to ASCII in various hardware devices
It was a mess.
It really helps to consider the 6600 and follow-ons as having 12-bit
bytes, packing two 6-bit characters per byte. (Some older
documentation even states this explicitly.) This is due to the
fact that PPs had 12-bit memory, the I/O channels were 12-bits wide,
and even the central memory was composed of 12-bit memory modules
'glued' together.
If some retrocomputing person ever bothers to write a C compiler for
the CDC instruction set, a C 'char' would almost certainly be 12 bits.
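Under that 12-bit-byte view, carving up a 60-bit word is simple shift-and-mask work. A hypothetical sketch (byte 0 taken as most significant -- an assumption for illustration):

```c
#include <stdint.h>

/* Treat a 60-bit CDC word (held in the low bits of a uint64_t)
 * as five 12-bit "bytes", each holding two 6-bit display-code
 * characters -- the view the PPs and the I/O channels had.
 * Field ordering is assumed, most-significant first. */
uint16_t byte12(uint64_t word, int i)      /* i = 0..4 */
{
    return (uint16_t)((word >> (48 - 12 * i)) & 07777);
}

uint8_t char6(uint64_t word, int i)        /* i = 0..9 */
{
    return (uint8_t)((word >> (54 - 6 * i)) & 077);
}
```

Octal masks (07777, 077) feel right here; CDC documentation and dumps were octal through and through.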
Walt
-...-
Walt Spector
(w6ws at earthlink dot net)
Yes, optical computers do have promise, especially in low power consumption.
> >I think the same powers that keep down fusion
> >research that doesn't involve tokamaks and laser confinement
> >systems might quash that, too.
>
> Well I'm going over to see the big lasers at LLNL next week.
>
> There is uncontrolled fusion. That has been shown to work because of
6600s.
Heh... I was referring to Philo T. Farnsworth's Fusor. I've seen photos
of controlled fusion reactions; the only problem is they stop when the
fuel runs out. No one's figured out how to continually reintroduce
fuel and keep the reaction going... yet.
-dq
The main reason I was looking for the sources for the codes from the
sixties and the seventies was that there were some neat adaptations to
the 6600 architecture including hand coded algorithms that took
advantage of the functional units. In addition there were specialized
disk drivers for the 808/6638 disks that used multiple ppu's
"ping-ponging" to avoid disk interleaving. I thought they would be cool
to examine.
Joe Yuska
IIRC the DEC lines had similar scopes with built in graphics
processors accessing a list of display instructions in main
memory, at least on the 11 (GT-40) and 15 (GT-15?).
Programming used short (relative) and long (absolute) vector
commands, with characters drawn as subroutines of short vector
commands called from the main display list.
Command lists had to be kept short enough to draw within the
video refresh interval, which was set by the power frequency
(50/60Hz), to avoid flicker.
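The absolute/relative vector scheme can be sketched as a toy display-list walker. The command encoding below is invented for the sketch, not the actual GT-40 instruction format:

```c
/* Toy display-list interpreter in the style described above:
 * "long" (absolute) vectors set the beam position, "short"
 * (relative) vectors move it.  A real display would also draw;
 * here we only track where the beam ends up. */
typedef enum { V_ABS, V_REL, V_END } vop;
typedef struct { vop op; int x, y; } vcmd;

void run_list(const vcmd *list, int *bx, int *by)
{
    for (; list->op != V_END; list++) {
        if (list->op == V_ABS) { *bx = list->x; *by = list->y; }
        else                   { *bx += list->x; *by += list->y; }
    }
}
```

Characters drawn as subroutines of short vectors fit naturally in this model: a character is just a canned list of V_REL commands replayed at the current beam position.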
--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada
Brian....@CSi.com (Brian dot Inglis at SystematicSw dot ab dot ca)
fake address use address above to reply
>Disk was expensive back then. In IBM circles, there was great pride
>in knowing exactly how many bytes fit on a 2314 (or 2311, etc.) or 3330
>disk track. PC DOS and CDC systems had internal fragmentation and
>you could even get better machine I/O efficiency reading each track
>or cylinder.
>
>It was the same logic, of that era, which made efficient use of punch
>cards like reading every single Hollerith field.
>
>Glad that era is over.
>
>Thank God I don't remember how many bytes that was, but I might have it
>in a note book of that era at home.
3625 for the 2311, 7294 for the 2314, and (IIRC) 13,030 for the 3330.
What's really scary is that I still remember it in hex for the 2311
and 2314: 0e29 and 1c7e respectively.
I once wrote a little program to help calculate optimum block sizes
for a given record size. (Let's see, gaps were 101 bytes if no key,
145 if there was a key, plus some forgotten percentage of the sum of
record and key sizes... well, I suppose there are worse things that
I could lie awake thinking about...)
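The flavor of such a calculator, using the gap figures and track capacities quoted above. The real CKD formulas also charged a percentage of the block and key sizes (the forgotten term), which this sketch omits:

```c
/* Rough blocking helper: given a device's track capacity and an
 * assumed fixed inter-block gap, how many logical records of a
 * given size fit per track at a given blocking factor?
 * Simplified: omits the percentage-of-block-size term that the
 * real 2311/2314/3330 capacity formulas included. */
int records_per_track(int track_cap, int gap, int recl, int blkfact)
{
    int blksize   = recl * blkfact;        /* bytes of data per block */
    int per_block = blksize + gap;         /* block plus its gap      */
    int blocks    = track_cap / per_block; /* whole blocks that fit   */
    return blocks * blkfact;
}
```

Even this crude version shows why blocking mattered: 80-byte card images on a 2314 roughly double per track when blocked 10-up instead of written unblocked.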
--
cgi...@sky.bus.com (Charlie Gibbs)
Remove the first period after the "at" sign to reply.
I don't read top-posted messages. If you want me to see your reply,
appropriately trim the quoted text and put your reply below it.
> original long ago ... reposting from
> http://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe:
>
> rain/rain4
> 158 3031 4341
Why do I see so many notes about the 4341, but not the 4381? I programmed
a 4381 in 1986-1989 in college.
Was the '81 much faster than the '41? I ask because the '81 I used
felt a lot faster than what I've been told about the CDC 6600, but
then again, maybe it was just I/O speed.
> You might find it interesting to check out
><http://www.conmicro.cx/hercules/>. Hercules is a nearly full
> System/390 emulator that runs on a PC. It's reputed to run 360 code
> about ten times faster than a 360 (which specific model I forget).
><http://www.corestore.org/hercules.html> has a link to a machine running
> Hercules that is accessible online.
Does anyone know if it is possible to get MVS/XA to run on an emulator,
and to get a copy of that OS for "personal education"?
I used to use MVS/XA on a 4381, and I'd like to see it again, just for
old times' sake if nothing else.
> For the 3031, there were two "158" processor engines ... one processor
> engine dedicated to the 370 function (i.e. the 3031) and a second
> "158" processor engine (i.e. the "channel director") that implemented
> all the I/O function outboard.
How did the two stay in sync? I mean, I have vague and fuzzy
recollections of doing assembler on a 4381 and since the programs are
a mix of I/O and calculation, I assume this is very important.
>
>What was a Cyber 2000?
>Unfortunately, the FTN compiler got too old for its time.
>
The Cyber 2000 was indeed the last, biggest, and baddest
of the Mohicans. The top end of the dual-state machines
had been the 990 (also known as Theta during its development)
but this was still a water-cooled machine. Then in the final
spasm of CDC as a manufacturer of knuckle-dragging
machines, they came out with the 2000 which was air-
cooled and had an optional vector unit - but only ran
in 180 (NOS/VE) state. The irony of this machine is
that the last few sold were not used for Fortran number
crunching at all, but filled specialised heavy-duty computing
niches in other domains completely. One was used to maintain
the mother of all databases (both very large for its day and involving
lots of very complex logic) and another site used theirs as the core
engine for the mother of all financial transaction systems.
The overall muscle that these machines could bring to bear
on these problems (not just CPU power, but e.g. bandwidth
and address space) made both of them extremely difficult
to replace at their sites, one of which I understand is still
trucking to this day.
Jitze
>In article <3d21fe34$1...@news.iglou.com>,
>Douglas H. Quebbeman <do...@ixnayamspay.com> wrote:
>>"Eugene Miya" <eug...@cse.ucsc.edu> wrote in message =
>>news:3d21f188$1...@news.ucsc.edu...
>>> What was a Cyber 2000?
>>> Unfortunately, the FTN compiler got too old for its time.
>>
>>Jitze can correct me if I'm wrong, but I thought the 2000
>>was the final design iteration of the 6000 Series architecture.
>>Not many built, I'm guessing...
>
>My first thought would have been a ETA architecture renaming.
>
No - a real honest-to-gosh NOS/VE 180 computer. In fact the
last in this line - but air-cooled with an optional vector unit and
running only in 64-bit NOS/VE mode. See my reply in the original
thread.
>
>>Jitze?
>
>Actually a far more interesting machine to me was the Cyberplus or AFP.
>I had a manual (alas tossed it before working with the Museum).
>I also spoke with Bill Ragsdale who worked on the one in Monterey.
>I need to try to find one.
>
The AFP (Advanced Flexible Processor) was the earlier name
when it first emerged from "eyes only" status. Marketing then
attached the name Cyberplus to it when it became commercially
available. Not many were sold as I recall - I know one went
to Europe to do hairy computations for foreign currency
exchange-rate arbitrage. Another went to the UK to a purveyor of
fine automobiles for the gentry, to do hydrodynamic computation for
turbines and airfoils and suchlike.
Unlike the Cyber range, it never enjoyed the prestige offered
by having a Cobol compiler available for it. (Inside joke)
Jitze
> 3625 for the 2311, 7294 for the 2314, and (IIRC) 13,030 for the 3330.
> What's really scary is that I still remember it in hex for the 2311
> and 2314: 0e29 and 1c7e respectively.
What *I* find scary is that I still remember $20, $60 and $4C
(JSR/RTS/JMP), $A9 and $AD and $A5 (LDA with immediate, absolute or zero
page), $18 and $38 (clear and set carry) and bunches of other opcodes.
Also special addresses such as $3D0 (warm restart), $FDED (output
character routine), $C000 and $C010 (keyboard data and strobe
registers), $C051 (switch to text mode).
The scary part is of course that I haven't touched an Apple ][ in nearly
twenty years, since I switched to 68k based machines (4e71 ... arrrrghh).
I'm a high level language programmer ... really I am. In fact I prefer
Lisp-family languages. None of that assembly language stuff spoken
*here*, by choice...
-- Bruce
Did NOS drop the really strange *other* 8-in-12 format that
was for TELEX, that had a special code (4000B?) in the first
word, then the 8-in-12 in the rest of the block? With Kronos,
if you had *any* interruption in the stream to the terminal,
you'd end up with gobbledegook because when it resumed, it
didn't have the prefix code to tell the controller (6671?)
what was coming...
-dq
The 2250 had its own command buffer, rather than using main memory...
it was, after all, intended for mainframe attachment. Perhaps the 1130
model Lynn mentioned may have been different in that respect. I never
saw one.
Like the DEC displays you mentioned, its graphic command set included
both absolute and relative commands, much like most plotters.
Character display commands required only a character string (in
EBCDIC, of course) and the short strokes required to draw the
characters were wired-in to the device, in some sort of ROM, I assume.
I believe there were two text sizes available, IIRC. There was a
"branch" command in the graphic language which was required to make
the program loop that was an essential part of screen refresh. If the
display got complex, it would take a substantial fraction of a second
to run through the whole command loop. The long-persistence (P7)
Radar-tube phosphor kept it from being too obnoxious.
Come to think of it, ISTR that there was also an unbuffered model
available. It got its graphic commands directly via the channel
interface. It required a loop in the channel program to refresh the
screen. Naturally, it put a bit of a load on the channel to support
this mode of operation, so the buffered model was preferred, although
more expensive (core memory, after all). Am I remembering this
correctly?
X and Y coordinates were expressed as 12-bit values, called "raster
units", which allowed the screen to be addressed as a 4096x4096 grid.
One unit didn't move the beam very far, so it was quite precise. There
was a special "slew feature" that was required if you wanted it to be
able to draw straight lines at any angles other than horizontal,
vertical or 45-degree diagonals. If you were only using it for an op
console, of course, you didn't need that feature.
There was a light pen, too. When you pressed the button, it would set
a register with the address of the graphics command that was executing
when it detected a light-spike. It was the job of the software to
figure out what that meant in terms of what you were pointing at. IBM
provided a Graphics Subroutine Package (GSP) to interface with the
device. GSP itself pretty much operated at the EXCP level of I/O
interface. That's "Execute Channel Program" for all you
non-mainframers out there. You had to have a DD statement in your JCL
to allocate the display, so your program could access it. Beyond
that, not much native OS support, unless you were using it as the
operator's console.
There was a model with a built-in printer to print out the display. I
never saw one, but I was told once that it used a wet photographic
process and included a second, small, hidden CRT just for the printing
function.
--
Russ
these particular notes (that i happened to have laying around) came
from some work that i was doing for some endicott 4341 performance
engineers. they wanted a benchmark run between 3031 and 4341 (this is
pre-4381 ... aka after the endicott engineers produced the 4341
... they later went on to produce the 4381). The endicott performance
engineers were having trouble getting machine time to do the benchmark
(also at the time, rain/rain4 were one of the few widely run
benchmarks).
Basically the processor hardware engineers got the first machine built
and the disk engineering/product-test labs got the second machine
built, aka in addition to developing new disk drives they validated
existing disks against new processors as they became available. The
processors in the disk engineering lab had been running "stand-alone"
applications (FRIEND, couple others) for the testing .... problem was
that the testcell disks under development tended to sometimes deviate
from normal operational characteristics (MTBF for a standard MVS when
operating a single testcell was on the order of 15 minutes).
As something of a hobby, i rewrote the I/O supervisor to make it
absolute bullet proof, aka no kind of i/o glitches could make the
system crash. As a result it was installed in all the "test"
processors in the disk engineering and product test labs .... and they
were able to do concurrent, simultaneous testing of 6-12 testcells
(instead of scheduling stand-alone time for one testcell at a time) on
each processor (as needed).
I then got the responsibility of doing system support on all those
machines and periodically would get blamed when things didn't work
correctly and so had to get involved in debugging their hardware (as
part of proving that the software wasn't at fault). One such
situation was the weekend they replaced the 3830 control unit for a
16-drive string of 3350s (production timesharing) with a "new" 3880
control unit and performance went into the can on that machine.
Fortunately this was six months before first customer ship of the 3880
controller so there was time to make some hardware adjustments (I
make this joke at one point of working 1st shift at research, 2nd
shift in the disk labs, and 3rd shift down at STL, and also couple
times a month supporting the operating system for the HONE complex in
palo alto).
In any case, at that particular point there were two 4341s in
existence, one in endicott and one in san jose disk engineering. Since I
supported the operating system for san jose disk ... and since while
the machines might be i/o intensive ... the workload rarely exceeded 5
percent cpu utilization. They had 145, 158, 3031, 3033, 4341,
etc. machines that I could worry about and had some freedom in doing
other types of things with.
So i ran the rain/rain4 benchmarks for the endicott performance
engineers and got 4341 times (aka they couldn't get time on the
machine in endicott because it was booked solidly for other things),
3031 times, and 158 times. They previously had collected numbers for
the 168-3 and 91 times for rain/rain4 ... and of course rain had been
run on 6600 (numbers they sent to me along with the benchmarks to
run).
There may have been other benchmark runs made by other people ... but
I didn't do the runs and didn't have the data sitting around
conveniently. I posted some numbers that I had conveniently available.
misc. disk engineer related posts:
http://www.garlic.com/~lynn/subtopic.html#disk
misc. hone related posts:
http://www.garlic.com/~lynn/subtopic.html#hone
random post about working 4-shift work weeks (24hrs, 7days):
http://www.garlic.com/~lynn/2001h.html#29 checking some myths
--
Anne & Lynn Wheeler | ly...@garlic.com - http://www.garlic.com/~lynn/
basically instruction processors and I/O processors were architected
to be asynchronous .... effectively in much the same way that machines
with multiple instruction processors (SMP) are typically architected
so that the multiple instruction processors operate asynchronously.
The instructions for the 360/370 i/o processors were, in fact, called
"channel programs". You could write a channel program ... and signal
one of the asynchronous "channel processors" to begin asynchronous
execution of that channel program. The channel program could cause
asynchronous interrupts back to the instruction processor signaling
various kinds of progress/events.
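The channel program idea can be illustrated with the published System/360 CCW layout: an 8-bit command code, a 24-bit data address, flag bits (chain data, chain command, suppress-length-indication, etc.), and a 16-bit byte count, assembled into a 64-bit doubleword. The CPU builds a chain of these in storage and starts the channel with SIO:

```c
#include <stdint.h>

/* Build a System/360-style Channel Command Word.  Layout per the
 * published 360 format: command code in bits 0-7, data address in
 * bits 8-31, flags starting at bit 32, count in bits 48-63
 * (big-endian bit numbering, bit 0 most significant). */
#define CCW_CD  0x80    /* chain data             */
#define CCW_CC  0x40    /* chain command          */
#define CCW_SLI 0x20    /* suppress length error  */

uint64_t make_ccw(uint8_t cmd, uint32_t addr, uint8_t flags, uint16_t count)
{
    return ((uint64_t)cmd   << 56)
         | ((uint64_t)(addr & 0xFFFFFF) << 32)   /* 24-bit address */
         | ((uint64_t)flags << 24)
         | (uint64_t)count;
}
```

Setting CCW_CC on one CCW makes the channel fetch and execute the next doubleword when this one completes -- the chaining that let a whole I/O sequence run without bothering the instruction processor until the final interrupt.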
On some of the machines, the i/o processors were real, honest to
goodness independent asynchronous processors. On other machines, a
common microcode engine was used to emulate both the instruction
processor and multiple i/o (channel) processors. Machines where a
common processor engine was used to emulate multiple processors (cpus,
channels, etc) were typically described as having "integrated"
channels.
158s, 135, 145, 148, 4341, etc ... were "integrated" channel machines
(aka the native microcode engine had microcode for both emulating 370
processing and microcode for performing the channel process function
and executing "channel programs"). 168 machines had outboard channels
(independent hardware boxes that implement the processing of channel
programs). Channel processors and instruction processors had common
access to the same real storage (in much the same way that multiple
instruction processors have common access to the same real storage).
For the 303x line of machines .... they took a 158 integrated channel
machine .... and eliminated the 370 instruction emulation microcode
... creating a dedicated channel program processing machine called a
"channel director". The "channel director" was then a common component
used for 3031, 3032, and 3033 machines ... aka they were all "outboard
channel" machines (having dedicated hardware processing units for
executing channel programs) ... as opposed to "integrated channel"
machines.
A 3031 was then a 158 with just the 370 instruction emulation
microcode and reconfigured for "outboard channel" operation rather
than "integrated channel" operation. A 3032 was then a 168 that was
reconfigured to use "channel director" for outboard channels (rather
than the 168 outboard channel hardware boxes).
HONE was the system infrastructure that supported the people in the
field, salesmen, marketing people, branch office people, etc.
At one time the US HONE system in Palo Alto had grown into the largest
single-system-image complex in the world. At one time, I knew it had
something over 40,000 defined "userids".
The US HONE system was also cloned for a number of country and/or
regional centers around the world.
Also, in the early '80s, the Palo Alto complex was extended with
redundant centers in Dallas and Boulder for "disaster survivability"
(my wife and I later coined the terms disaster survivability and
geographic survivability when we were doing HA/CMP) ... online
workload was spread across the three datacenters, but if one failed
the remaining two could pick up.
Nearly all of the application delivery to branch & field people were
written in APL ... running under CMS. One of the most important were
the "configurator" applications. Starting with the 370/125 (& 115), it
was no longer possible for a salesman to manually fill out a mainframe
machine order .... they all had to be done interacting with HONE
configurator.
random ha/cmp refs:
http://www.garlic.com/~lynn/subtopic.html#hacmp
also some smp related postings:
http://www.garlic.com/~lynn/subtopic.html#smp
oh yes, i did do a post referencing somebody's linpack table URL ... and
extracted several of the entries for the posting (including a number of
4381 entries ... besides 6600). Note the original table at the referenced
URL includes information about compiler and options used .... i scrubbed
that info ... trying to reduce the size of the posting ... people wanting
to see the full information should go to the referenced URL
the 4381 linpack entries from that posting
IBM 4381 90E 1.2
IBM 4381-13 1.2
IBM 4381-22 .97
IBM 4381 MG2 .96
IBM 4381-21 .47
IBM 4381-11 .39
ref linpack posting
http://www.garlic.com/~lynn/2002i.html#12
Well, you're both right. The 2000 was a "real" 180 machine, but like
its baby cousin the 930, its engineering, electronics, and design came
from lessons learned at ETA.
> >
> >>Jitze?
> >
> >Actually a far more interesting machine to me was the Cyberplus or AFP.
> >I had a manual (alas tossed it before working with the Museum).
> >I also spoke with Bill Ragsdale who worked on the one in Monterey.
> >I need to try to find one.
> >
>
> The AFP (Advanced Flexible Processor) was the earlier name
> when it first emerged from "eyes only" status. Marketing then
> attached the name Cyberplus to it when it became commercially
> available. Not many were sold as I recall - I know one went
> to Europe to do hairy computations for foreign currency
> exchange-rate arbitrage. Another went to the UK to a purveyor of
> fine automobiles for the gentry, to do hydrodynamic computation for
> turbines and airfoils and suchlike.
>
> Unlike the Cyber range, it never enjoyed the prestige offered
> by having a Cobol compiler available for it. (Inside joke)
>
> Jitze
I was lucky enough to work on both the Cyberplus and 205 projects --
neither of which ever had a real COBOL compiler.
One day I light-heartedly suggested a COBOL compiler for the 205.
Which drew a rather terse, "you don't understand vectors".
To which I responded, "what do you think payroll is?"
Not that a 205 would have been the right system for running a
company's HR, but it would have made a nice "addon".
:)
WRT to the 930, I doubt that either the S0 Team in Toronto or
the ETA folks in St Paul would agree with that. There was remarkably
little interchange between ETA and the rest of the company, by
design. In fact, some, if not all, non-ETA CDC employees had to
sign special non-disclosure agreements to have any kind of
technical discussions at all. Whether that was a good design
has been debated without resolution for many years.
--Ned
> There may have been other benchmark runs made by other people ... but
> I didn't do the runs and didn't have the data sitting around
> conveniently. I posted some numbers that I had conveniently available.
Mostly I just notice that I see a lot more references to the 4341 instead
of the 4381, and I mean in general, not just in your posts.
Was the '81 a less popular machine?
I remember the one I used in Richmond, VA, and there was one in city
hall in Newport News, VA. But most of what I saw was 3xxx or old 370
machines, not counting things like the baby-mainframes and the AS/400s.
Oh, yeah. Damn. Nightmares again. (Thank you so much ;-)
> CDC definitely believed in diversity. When using ASCII with NOS, you had
> your choice of:
>
> 1.) 8-in-12 - For printers and plotters
>
> 2.) Packed 7.5 chars to a word - For 9-track 'stranger' tapes
>
> 3.) 6/12 display code - For tty I/O
>
> 4.) 6-bit display code - translated to ASCII in various hardware devices
>
> It was a mess.
>
> It really helps to consider the 6600 and followons as having 12-bit
> bytes. Then packing two 6-bit characters per byte. (Some older
> documentation even states this explicitly.) This due to the
> fact that PPs had 12-bit memory, the I/O channels were 12-bits wide,
> and even the central memory was composed of 12-bit memory modules
> 'glued' together.
When the flight simulators at McAir (McDonnell Aircraft Company) were migrating
from a pair of CDC something-or-anothers (750s?) to a bunch of networked Gould
SELs, back in 1985, they had a quandary: what to do about the home-brew hardware
controllers?
The SELs were 16-bit machines, as were the specialized I/O widgie-frammises that
drove the simulators. The Cyber would do some bit-fiddling to line the 16-bit
data up in 12-bit words (with padding, IIRC) and all was OK, though a little
strange. A reasonable sort of strange, under the circumstances.
The obvious -- to me -- approach would have been to modify the electronics to
accept 16-bit data from the SELs, but that's not what was done. (Because
hardware is hard, software is easy?) Instead, the SELs would input and output
the 16-bit data 12 bits at a time, just like the hardware expected, with padding
and aligning done in software in the SEL.
Guess who wrote the memory-to-memory translation code? In Fortran, with shift
and mask operations.
It was horrible, yet satisfying. The code had to handle any start address, and
any length of data. That meant the padding had to be accounted for differently,
depending on where in the widgie-frammis's memory the data would reside. I
had a false start or two before I had code that would work correctly, was fairly
clear and reasonably-sized. Much collapsing of special cases. It had a certain
elegance, considering.
I wish I'd kept a copy. For one thing, I don't recall just what the
requirements were, much less exactly how I resolved them. Though I suppose I
could "reverse engineer" it from my recollections. (Will I? Sure. Right after
my javelin-catching class.)
For another, it's something I'm inordinately proud of, and would like to drag
out and look over, now and again. My precioussss.
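Roughly the shape of that shift-and-mask job, sketched in C rather than Fortran. The actual padding layout isn't recorded above, so the split assumed here -- each 16-bit SEL word travelling as two 12-bit frames, high 4 bits zero-padded into one frame and the low 12 bits in the next -- is a guess for illustration:

```c
#include <stdint.h>

/* Split 16-bit words into 12-bit frames for a 12-bit-wide
 * interface, and join them back.  Padding scheme is assumed,
 * not the actual McAir format. */
void split16to12(const uint16_t *in, int n, uint16_t *out)
{
    for (int i = 0; i < n; i++) {
        out[2 * i]     = (uint16_t)(in[i] >> 12);     /* high 4 bits, padded */
        out[2 * i + 1] = (uint16_t)(in[i] & 0x0FFF);  /* low 12 bits         */
    }
}

void join12to16(const uint16_t *in, int n, uint16_t *out)
{
    for (int i = 0; i < n; i++)
        out[i] = (uint16_t)((in[2 * i] << 12) | (in[2 * i + 1] & 0x0FFF));
}
```

The hard part the post describes -- arbitrary start addresses and lengths shifting where the padding falls -- is exactly what this fixed-alignment sketch dodges, which is why the real code needed its special-case collapsing.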
> If some retrocomputing person ever bothers to write a C compiler for
> the CDC instruction set, a C 'char' would almost certainly be 12 bits.
>
> Walt
> -...-
> Walt Spector
> (w6ws at earthlink dot net)
It may just be a mis-memory, but I seem to recall something about a C compiler
for CDC machines, written back then. (From a university in Texas?) Maybe I'm
just thinking of Pascal, which there most definitely was. Though not from
Texas. -Eric S.
4341 was possibly one of the best price/performance machines for its
time. slightly related
http://www.garlic.com/~lynn/2001m.html#15 departmentl servers
>The irony of this machine is
>that the last few sold were not used for Fortran number
>crunching at all, but filled specialised heavy-duty computing
>niches in other domains completely. One was used to maintain
>the mother of all databases (both very large for its day and involving
>lots of very complex logic) and another site used theirs as the core
>engine for the mother of all financial transaction systems.
We used our Cyber 2000 at DPT running banking applications mostly
written in Cobol and interacting with IM/DM (BasisPlus) database.
Remember?
Regards
Peter
[snip]
Ah, I remember that now. In some ways I regret changing track away from
the IBM systems. It's just that the local colleges didn't really let
you learn much about them.
Everything we did on the mainframe was basically:
* load up a big chunk of records into a block
* process the block
* repeat
It always seemed to me that we should have started another I/O operation
just before processing the current block, so at least in theory the next
would be ready when you finished, or at least would be ready faster.
Did IBM compilers like COBOL and FORTRAN do anything like this behind
the scenes to increase throughput?
What was the proper way to interleave I/O and processing on the IBM,
assuming your language would let you, or you were using assembler?
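The classic answer on MVS was, IIRC, BSAM READ/CHECK with two DECBs; QSAM's multiple buffers (BUFNO) did much the same read-ahead for you behind the scenes. The same double-buffering pattern can be sketched with POSIX aio standing in for the channel:

```c
#include <aio.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define BLK 4096

/* Double buffering: start the read for block N+1 before processing
 * block N, so I/O and computation overlap.  aio_read plays the role
 * of BSAM READ, aio_suspend/aio_return the role of CHECK.  The
 * "processing" here just sums the bytes. */
long process_file(const char *path)
{
    static char buf[2][BLK];
    struct aiocb cb[2];
    long total = 0;
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;

    memset(cb, 0, sizeof cb);
    for (int i = 0; i < 2; i++) {
        cb[i].aio_fildes = fd;
        cb[i].aio_buf    = buf[i];
        cb[i].aio_nbytes = BLK;
    }

    cb[0].aio_offset = 0;
    aio_read(&cb[0]);                       /* prime the first read   */
    for (int cur = 0, blkno = 0; ; cur ^= 1, blkno++) {
        const struct aiocb *list[1] = { &cb[cur] };
        aio_suspend(list, 1, NULL);         /* CHECK: wait for block N */
        ssize_t n = aio_return(&cb[cur]);
        if (n <= 0) break;                  /* EOF or error            */
        cb[cur ^ 1].aio_offset = (off_t)(blkno + 1) * BLK;
        aio_read(&cb[cur ^ 1]);             /* READ: start block N+1   */
        for (ssize_t i = 0; i < n; i++)     /* "process" block N       */
            total += (unsigned char)buf[cur][i];
    }
    close(fd);
    return total;
}
```

With only one buffer the process alternates strictly between waiting and computing; with two, the read for the next block proceeds while the current one is being chewed on, which is the overlap the question is after.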
Most UNIX systems try to do this sort of thing automatically, although
there are a lot of things you can do yourself to optimize I/O and
processing to work together. The UNIX process model is good but I/O is
almost totally divorced from a process. Throughput overall can be very
good, but a single program with a huge I/O queue can hold up processes
with small amounts of I/O for some time.
That's one thing I really miss about systems like MVS/XA: the ability
to control I/O at the process level when you need to. I think Solaris
can do this on high-end systems now, but it's not available in all of
the UNIX world yet, most especially not in the free systems.
the talk referenced in the above posting about 11,000+ vax machines
... was given in 1983 (spring?)
note as per:
http://www.garlic.com/~lynn/2002f.html#0
the total world-wide vax ships as of the end of 1982 was 14,508 and the
total as of the end of 1983 was 25,070.
from:
http://www.isham-research.com/chrono.html
4341 announced 1/79 and fcs 11/79
4381 announced 9/83 and fcs 1q/84
workstation and PCs were starting to come on strong in the
departmental server market by the time 4381s started shipping in
quantity.
also per:
http://www.garlic.com/~lynn/2002f.html#0
while the total number of vax shipments kept climbing thru '87
... they were micro-vax. combined 11/750 & 11/780 world wide shipments
thru 1984 was 35,540 ... and then dropped to a combined total of 7,600
for 1985 and 1660 for 1986.
--
+-------------------------------------------------------------+
| Charles and Francis Richmond <rich...@plano.net> |
+-------------------------------------------------------------+
Hi Ned,
The S0 was an 800 series machine. The 815/825 line.
Kent
>
>I was lucky enough to work on both the Cyberplus and 205 projects --
>neither of which ever had a real COBOL compiler.
>
>One day I light-heartedly suggested a COBOL compiler for the 205.
>Which drew a rather terse, "you don't understand vectors".
>
>To which I responded, "what do you think payroll is?"
>
>Not that a 205 would have been the right system for running a
>company's HR, but it would have made a nice "addon".
>
But we won the battle on the 7600 which indeed had a Cobol
compiler. Despite the alleged "compatibility" of the 7600 with
its lower brethren, a special version of the compiler had to be
built.
Was it used? Oh my... when the PSRs (bug reports) started
to roll in, we finally got some insight into the kinds of things
you might want to do with the fastest Cobol in the west...
Payroll? Naah. Some of the most interesting military-industrial
applications that I ever saw. At least those we were allowed
to see... Trying to analyze a dump 3rd hand via a human
interlocutor (because one didn't have the necessary clearances)
was great sport.
Jitze
As was typical with CDC, little was ever engineered that wasn't horribly
under-engineered or massive overkill.
Address space in the NOS/VE systems was one of the latter. An address space
(segment) was 2^31 bytes (2 GB). Each task could have up to 4095 segments.
Total address space for the task was 2^31 * 2^12 bytes, though some of the
segments were reserved.
And of course, a job step could have multiple tasks, and the ankle bone
connects to the shin bone...
And just to show how technology has come full circle, today's high end
RS6000 and Z900 mainframes are freon cooled. They look like refrigerators
that have had their liners removed.
Kent
Ah, McAuto. I turned down a job with their systems group back in '78 when
the interview took a rather odd twist. It appeared that their offer wasn't
a "job offer", but rather a "volleyball scholarship" to play in their
intramural team.
But I loved what they were doing there. I still wish that I could have
"flown" one of the simulators. :)
Kent
http://www.brouhaha.com/~eric/retrocomputing/dec/gt40/
has some more information. There were numerous similar
display systems from around 1970 through 1985. Evans &
Sutherland's "Picture System 1" was among the high-end.
Near the end of that time frame, vector-to-raster
conversion started being used to convert compatible
graphics instructions to frame buffers for raster
display, which (being driven by the TV market) eventually
became the cheapest useful display technology; Megatek
was one of the big names in that approach.
If anybody knows of a working VS60 (VT48 + VR48), I'd be
happy to take it off your hands, refurbish it, and attach
it as second bus master (shared RAM) to one of my Unibus
PDP-11s. Heck, I'd settle for extensive documentation so
we could add it to the SIMH software simulation package.