B5500 TSMCP/CANDE had at least login names
in 1968 or so. I forget if it had passwords?
Did it change with the 6700?
Did B5K have anything?
Anyone out there old enough to know and
still young enough to remember?
--
The first ten amendments to the constitution
are what make this a country worth fighting
for. Ignorance and apathy are what make it
so difficult to defend.
Jack Barone
You do not preserve freedom by destroying freedom.
Eric Green
>B5500 TSMCP/CANDE had at least login names
>in 1968 or so. I forget if it had passwords?
>Did it change with the 6700?
The 5500 Cande did, in fact, have passwords. Interestingly, since they were
stored in plain text in a known portion of the user's work space, whose title
was based on the usercode, it didn't take the proverbial rocket scientist to
steal usercode/password pairs from running users!
Art
When I first used a B6700 around the MCP mark 2.8 or 2.9 era (25 years
ago?), I'm reasonably sure usercode and encrypted passwords were maintained
in the USERDATA file.
bok wrote:
--
---------------------------------
Gordon DeGrandis - Brussels Belgium
Email address is SPAM protected please modify before sending
The problem with Cande was/is that it wants to rewrite the whole workfile
when it does updates/saves. We actually had a guy who wrote a Cande-
lookalike on the 5500 then 6700 which would do the simple fixes in-line.
I.e. rewrite records that were edited if possible rather than rewrite
the whole file for a simple change in one line. (Did a bunch of other
things faster too...)
>also used Swapper, anyone remember Swapper?
Ye gads what a horrible thing THAT was... I wrote a program that
would display the position of up to 70 or so tasks on a dumb terminal.
I.e. you could watch as tasks moved from disk to memory and back out again.
All because memory was so expensive and you could only get to 6 megs
on those early machines (and 6 megs of core was as big as a house!)
--
----------------------------------------------------------------------------
|Bob Rahe, Delaware Tech&Comm Coll. / |
|Computer Center, Dover, Delaware / |
|Internet: b...@dtcc.edu (RWR50) / |
----------------------------------------------------------------------------
> In article <3C7BF2F6...@brutele.be>,
> Gordon DeGrandis <gordon.degrand...@brutele.be> wrote:
>>also used Swapper, anyone remember Swapper?
>
>
> Ye gads what a horrible thing THAT was... I wrote a program that
> would display the position of up to 70 or so tasks on a dumb terminal.
> I.e. you could watch as tasks moved from disk to memory and back out again.
>
> All because memory was so expensive and you could only get to 6 megs
> on those early machines (and 6 megs of core was as big as a house!)
>
There was money in that sort of tool; when I was at the University
of Denver, we sold a program called SwapperSpy (OK, so bouncy case
was invented later) that displayed swapped task status on the SPO
(Can I still say SPO? Do I have to say ODT?). A few sites bought
a copy. We made enough to cover the cost of the flyer. It was fun.
We used to bring Swapper down every evening so we could run batch
jobs, but eventually we decided to run everything swapped. I patched
Controller to ignore SW- (or whatever it was) so that no one could,
even with the best of intentions, stop Swapper.
Six meg. You actually had six meg?
Louis Krupp
I recall knocking up some very basic in-place file editors for both A/V
series (large / medium systems in those days) for my own use. Didn't
everyone have one in those days?
One of the Universities in NZ (Otago) had a B6700 with only 64Kwords of
memory. After struggling to get CANDE/swapper to run in that amount of memory
they wrote their own MCS called SCREAM. SCREAM (Simple Country Remote Entry
Access Method or something similar) provided editing capabilities similar to
CANDE and a host of other functionality.
> All because memory was so expensive and you could only get to 6 megs
> on those early machines (and 6 megs of core was as big as a house!)
Yes, 6 MB (2 ^ 20 words) was the pre ASD architectural limit and that
amount of memory was a luxury in those days ;-)
Well, if RAM had remained expensive and disk speeds had improved much
more rapidly ... then Swapper would be seen as prescient. It's due to
20/20 hindsight that we know it turned out the other way around.
Or if the B6700 had had a bulk RAM based swap device like the CDC 6400
... part of the problems with Swapper were just the limitations imposed
by not-present stacks, part of the problems were with the conflicting
memory model it imposed, but the big problem was just that swapping was
too slow. If you could put today's disk throughput on a 30-year-old CPU
and memory, Swapper would be considerably more valuable ... albeit
still a kludge.
> All because memory was so expensive and you could only get to 6 megs
> on those early machines (and 6 megs of core was as big as a house!)
The cost of core memory was the limitation that led to Swapper. It had
nothing to do with the 6MB limit, which was perceived as nearly
infinite in those days. Only in Swapper's late days was it seen as
helping with the address space limit.
Seems to me that one meg of core fit in a cabinet that was about 2'
wide, 6' long, and 6' high. That would place six meg in the bedroom
category rather than the house category. But I'll grant you the
hyperbole ;-). In any case, a complete B6700 based on that memory and 3
gig of disk would have needed a BIG house to hold it ...
Edward Reid
"Keith Landry" <klan...@bellsouth.net> wrote in message news:3C7C0731...@bellsouth.net...
Actual *core* memory was much larger than that. An "A" size cabinet ( 2' x
6' x 6' ) only held 3 16K word modules or about 288KB. You could not
configure a 6700 with more than 3 MB of core due to the fact that the cables
could be no longer than 100 feet and, because of cooling problems, you
couldn't pack enough cabinets in a space reachable by 100 ft cables. Prior
to the newer memory technologies, the only system to exceed the 3 MB limit
was the one at B&O - C&O where they "double-decked" it by installing half
the memory on the floor above and feeding the cables down through the
ceiling.
I've monitored this group for years (as I'm sure many Unisys employees do)
but when I was working I never felt comfortable in participating because of
a fear that my comments would be taken as an "official" response. Now that
I've been off on disability leave for quite a while, I don't have to worry
about that. I suspect that many current employees have the same fears that I
had and quietly lurk for the same reason.
John Keiser ( semi-retired, Unisys MCP geek since 1971)
P.S. If you think SWAPPER was a bear to use, you should have tried being the
guy stuck with maintaining it and fixing all the bugs. I carried that
albatross for years. ;-)
Remember Swapper day? Every Thursday the production systems would be
run with Swapper and that was the day that we would get nothing done.
Some people wore ties on those days since everyone knows that people
with ties don't actually do any work.
--
Jeff J. Wilson [ jeff....@unisys.com ]
...speaking only for myself
Vanguard of the 13er generation.
>Six meg. You actually had six meg?
Nah, but that would have been the limit. If we'd had another
machine room! 8-)
(And one problem with shutting off swapper and restarting later was that
it had to get contiguous memory. If anything got a hunk of savecore
while swapper was down it might not be able to restart without a H/L.)
>> Ye gads what a horrible thing THAT was...
>Well, if RAM had remained expensive and disk speeds had improved much
>more rapidly ... then Swapper would be seen as prescient. It's due to
>20/20 hindsight that we know it turned out the other way around.
Nah, it was a kludge even then! 8-) Followed closely by the pre-ASD
type memory model... forget the name right now...
...
>The cost of core memory was the limitation that led to Swapper. It had
>nothing to do with the 6MB limit, which was perceived as nearly
>infinite in those days. Only in Swapper's late days was it seen as
>helping with the address space limit.
Well, I don't know. It certainly did help with the 6M limit because
one thing swapper did was swap out ALL of a prog. Including the save
core it used, like the stack etc. If you had lots of things running
the save core could mount up leaving you with almost nothing to run
progs with...
We were a commercial economics consulting company (Data
Resources, Inc, DRI, now part of somebody else) supporting
multiple users running large-scale (multi-megaword) econometrics programs
on 5500s, later 6700s, later 7700s, .... currently Asomethings.
The week before we installed Swapper on our 6700 we
were able to give adequate response (i.e. good enough so
the customers didn't drop off) to 15-20 users.
The week following installing Swapper this jumped to
>45. No other hardware or software changes.
In a paper early in 1971 in the Journal of the ACM*,
Aho, Denning, and Ullman proved that demand paging was more
effective than any other VM strategy, as long as the
cost of loading n pages 1 at a time was <= the
cost of loading n pages n at a time. They forgot that
with contemporary hardware (and most hardware since
then) it was almost always cheaper to load n at a time rather than 1 at a time.
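The cost asymmetry described above can be put in rough numbers. This is a sketch only; the timing figures are assumed, ballpark values for a period moving-arm disk, not measurements from any specific device.

```python
# Rough arithmetic for the cost comparison above. All timings are
# assumed, illustrative values, not from any real device.

avg_access_ms = 30.0     # seek + rotational latency per I/O (assumed)
xfer_ms_per_page = 2.0   # transfer time per page (assumed)
n = 8                    # pages to load

one_at_a_time = n * (avg_access_ms + xfer_ms_per_page)  # n separate I/Os
n_at_a_time = avg_access_ms + n * xfer_ms_per_page      # one block I/O

print(one_at_a_time)  # 256.0 ms
print(n_at_a_time)    # 46.0 ms
```

With the access overhead paid once instead of n times, the block load is several times cheaper, so the premise of the theorem fails on real hardware.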
Why did all y'all hate swapper?
JKA
*Aho, A V, Denning, P J, and Ullman, J D,
Principles of Optimal Page Replacement,
JACM 18, pp 80-93, 1971
--
If we want to not become {insert big behemoth company here}
maybe we should stop hiring our managers and VPs from there.
J Berringer
Bob Rahe wrote:
>
> Nah, it was a kludge even then! 8-) Followed closely by the pre-ASD
> type memory model... forget the name right now...
Anywhere I can get information on pre-ASD and ASD memory models?
Thanks
JKA
there are different kinds of swappers. the original term applied to
implementations that would do total application roll-out/roll-in in
contiguous sections of real storage. there were then some partial
enhancements that used paging hardware to eliminate the requirement
for contiguous sections of real storage (but still consisted of
complete application roll-out/roll-in).
some demand paging systems had various kinds of optimized block page
out/in. many of them weren't referred to as swappers because of the
earlier definition/use of the term. The block page out/in was coupled
into the standard demand paging in various ways ... aka for entities
that weren't members of a standard block page in/out set.
block page out/in implementations may or may not have also implemented
contiguous allocation for all members of a specific page group/set.
There are a couple of issues:
* w/o contiguous allocation,
1) a block page out/in still reduces latency and
2) also tends to throw a group of pages against the page device driver
which can be organized for optimal device operation (as opposed to
treating the requests as random sequential, one at a time).
* contiguous allocation
can further improve block page out/in I/O efficiency over
non-contiguous operation.
========================================================
"big pages" was an attempt to maximize both. For page out operation,
clusters of pages were grouped in full track units, where members of a
track cluster tended to be pages in use together (not contiguous or
sequential) ... somewhat related to members of working set. A
suspended process could have all of its resident pages re-arranged
into multiple track clusters and all queued simultaneously for write
operation. When a task was re-activated, fetch requests for some
subset of the pages previously paged out was queued (instead of
waiting for individual demand page faults). Subsequent demand page
faults would not only fetch the specific page, but all pages in the
same track cluster.
A tricky part was when real storage was fully committed and a demand
page fault occurred: there was a trade-off decision regarding
attempting to build a single "big page" (track cluster) on the fly, or
to select individual pages for page out. If individual pages are
selected for replacement, then there become two classes of pages on
secondary storage, singlet pages and track cluster pages (which
potentially also need a different allocation strategy).
Other optimization issues:
* simultaneous scheduling of write I/O for all track clusters on task
suspension or placing them on a pending queue and only performing the
writes as required
* dynamic allocation of disk location of a track cluster at the moment
of the write operation to the first available place closest to the
current disk arm location
* simultaneous scheduling of read I/O for all track clusters on task
activation, only demand page fault members of track clusters (a demand
page fault of any member of a track cluster is the equivalent of a
demand page fault for all members of the same track cluster), or some
hybrid of the two.
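The suspension-time grouping described above can be sketched in a few lines. This is a minimal illustration only: the cluster size and function names are invented, and real implementations grouped by working-set affinity rather than simple order.

```python
# Sketch of "big page" grouping: on task suspension, resident pages
# are packed into track-sized clusters, one write queued per cluster.
# PAGES_PER_TRACK is an assumed value, not real device geometry.

PAGES_PER_TRACK = 9

def build_clusters(resident_pages):
    """Group a suspended task's resident pages into track clusters.
    Members of a cluster are pages in use together; they need not be
    contiguous or sequential in the virtual address space."""
    return [resident_pages[i:i + PAGES_PER_TRACK]
            for i in range(0, len(resident_pages), PAGES_PER_TRACK)]

def fault_fetch(page, clusters):
    """A demand fault on any member fetches the whole track cluster."""
    for cluster in clusters:
        if page in cluster:
            return cluster
    return [page]  # singlet page, not part of any cluster
```

Queuing all cluster writes simultaneously (rather than one page at a time) is what lets the page device driver reorder them for optimal device operation.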
misc. big page refs:
http://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
--
Anne & Lynn Wheeler | ly...@garlic.com - http://www.garlic.com/~lynn/
I had sort of invented my own concept of working set and page
replacement algorithm when I was an undergraduate in '68 and
implemented it for cp/67 kernel (about the same time as denning's acm
paper in '68 on working set).
Later at the Cambridge Science Center, I made several additional
enhancements.
In the early '70s, the grenoble science center took essentially the
same cp/67 kernel and implemented a "straight" working set
implementation ... very close to the '68 acm paper. grenoble
published an acm paper on their effort cacm16, apr73. The
grenoble & cambridge machines, workload mix, and configurations
were similar except
the grenoble 67 was a 1m machine (154 4k pageable pages after fixed
kernel requirements)
the cambridge 67 was a 768k machine (104 4k pageable pages after fixed
kernel requirements)
the grenoble had 30-35 users
the cambridge was a similar workload mix but twice the number of users
70-75 (except there was probably somewhat more cms\apl use on the
cambridge machine ... making the avg. of the various kinds of
transaction/workload types somewhat more processor intensive).
both machines provided subsecond response for the 90th percentile of
trivial interactive transactions ... however, cambridge response was
slighter better than the grenoble response (even with twice the
users).
The differences, early '70s:

                 grenoble        cambridge
machine          360/67          360/67
# users          30-35           70-75
real store       1mbyte          768k
p'pages          154 4k          104 4k
replacement      local LRU       global "clock" LRU
thrashing        working-set     dynamic adaptive
priority         cpu aging       dynamic adaptive
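For readers who haven't met it, the global "clock" replacement in the cambridge column works roughly as sketched below. This is a minimal illustration with invented names, not the actual cp/67 code.

```python
# Minimal sketch of global "clock" page replacement: a hand sweeps
# the frame table, clearing reference bits and taking the first
# frame whose bit is already clear (a "second chance" scheme).

def clock_select(ref_bits, hand):
    """ref_bits: per-frame reference bits (mutated in place).
    Returns (victim_frame_index, new_hand_position)."""
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = False            # give it a second chance
            hand = (hand + 1) % len(ref_bits)
        else:
            return hand, (hand + 1) % len(ref_bits)
```

Because the hand position persists across calls, frequently referenced frames keep getting their bits reset before the hand returns, approximating global LRU at far lower bookkeeping cost.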
misc. refs
L. Belady, A Study of Replacement Algorithms for a Virtual Storage
Computer, IBM Systems Journal, v5n2, 1966
L. Belady, The IBM History of Memory Management Technology, IBM
Journal of R&D, v25n5
R. Carr, Virtual Memory Management, Stanford University,
STAN-CS-81-873 (1981)
R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm
for Virtual Memory Management, ACM SIGOPS, v15n5, 1981
P. Denning, Working sets past and present, IEEE Trans Softw Eng, SE6,
jan80
J. Rodriguez-Rosell, The design, implementation, and evaluation of a
working set dispatcher, cacm16, apr73
also with respect to vs/repack and program restructure mentioned in
a related recent thread in these newsgroup (parts of this technology
was also used in conjunction with some other modeling work to look
at page size issues):
D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM
Systems Journal, v10n3, 1971
random refs:
http://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/93.html#7 HELP: Algorithm for Working Sets (Virtual Memory)
http://www.garlic.com/~lynn/94.html#01 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#1 Multitasking question
http://www.garlic.com/~lynn/94.html#2 Schedulers
http://www.garlic.com/~lynn/94.html#4 Schedulers
http://www.garlic.com/~lynn/94.html#14 lru, clock, random & dynamic adaptive ... addenda
http://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
http://www.garlic.com/~lynn/96.html#0a Cache
http://www.garlic.com/~lynn/96.html#0b Hypothetical performance question
http://www.garlic.com/~lynn/98.html#54 qn on virtual page replacement
http://www.garlic.com/~lynn/99.html#18 Old Computers
http://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
http://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2000f.html#34 Optimal replacement Algorithm
http://www.garlic.com/~lynn/2000f.html#36 Optimal replacement Algorithm
http://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
http://www.garlic.com/~lynn/2001h.html#26 TECO Critique
http://www.garlic.com/~lynn/2001l.html#6 mainframe question
http://www.garlic.com/~lynn/subtopic.html#wsclock
J Ahlstrom wrote:
> Anywhere I can get information on pre-ASD and ASD memory models?
I have a B5900 reference manual and some other antiques. If you're
anywhere near Boulder, Colorado, you can come look at them.
Speaking of antiques, I know where you might be able to get a 2.1
MCP listing (that may help explain the memory model).
At one time, it was possible to download manuals as PDF files. I
don't know if it still is.
Louis Krupp
>Actual *core* memory was much larger than that. An "A" size cabinet ( 2' x
>6' x 6' ) only held 3 16K word modules or about 288KB.
The B2700 memory wasn't even that dense. I believe we had 240,000 digits
(120,000 bytes) of ram in a memory cabinet.
--
RB |\ © Randall Bart
aa |/ ad...@RandallBart.spam.com Bart...@att.spam.net
nr |\ Please reply without spam 1-917-715-0831
dt ||\ Here I am: http://RandallBart.com/ I LOVE YOU
a |/ He Won't Get Far: http://www.callahanonline.com/calhat9.htm
l |\ DOT-HS-808-065 MSMSMSMSMSMSMS=6/28/107 Joel 3:9-10
l |/ Terrorism: http://www.markfiore.com/animation/adterror.html
> Nah, it was a kludge even then! 8-) Followed closely by the pre-ASD
>type memory model... forget the name right now...
Pre-ASD memory was called ASN. ASD stands for "Actual segment descriptor",
ASN stands for "Address space number".
SWAPPER and the controls it offered allowed me to put together an
interesting MCP tutorial on thrashing for a CUBE back in the early 80's.
SWAPPER had settings for both the maximum real memory allowed for an
individual subspace and the amount of real memory a subspace would start
with. By setting both to the same value I was able to eliminate problems
that were associated with deciding when to increase a subspace size ( we
were never very good at that ). By varying that value and rerunning the same
program many times on an otherwise idle machine, I was able to demonstrate
exactly what performance could be expected for various ratios of virtual
memory to real memory on Burroughs large systems using the ASN memory model.
I ran a number of different programs ( mostly compilers because they were
handy ) through the process and also tried different input data for the same
program. Then I plotted graphs of real memory used versus elapsed time. The
results were fascinating. Each program had its own distinct curve, even with
different input data, but all were very similar. The most interesting point
was that each curve had a fairly sharp "knee" where decreasing real memory
by only a few thousand words would cause a program to transition from
efficient running to thrashing. Further small reductions resulted in huge
increases in elapsed time. In the other direction, increases in real memory
resulted in only minor improvements in elapsed time. It was a fun project
and the folks who attended the MCP Tutorial that year at CUBE really seemed
to like the presentation.
John Keiser (semi-retired Unisys MCP geek)
I did much the same thing when I was an undergraduate in '68, rewriting
much of the cp/67 dispatching and paging subsystem.
In cp/67 it was possible to fix/pin/lock virtual memory pages in real
storage. I used the lock command to lock specific virtual pages of an
idle process .... leaving a specific amount of pageable, unlocked pages
available for other tasks. I then ran a large number of
different tasks.
I included a simple example of that in a presentation I made at the
SHARE user group meeting in aug '68.
I also used the technique when evaluating different paging techniques
that I was developing, as well as modifications/improvements to the
code executing in the virtual address space.
much of the presentation was previously posted
http://www.garlic.com/~lynn/94.html#18
MODIFIED CP/67
OS run with one other user. The other user was not active, was just
available to control amount of core used by OS. The following table
gives core available to OS, execution time and execution time ratio
for the 25 FORTG compiles.
CORE (pages)    OS with HASP       OS w/o HASP
104             1.35 (435 sec)
 94             1.37 (445 sec)
 74             1.38 (450 sec)     1.49 (480 sec)
 64             1.89 (610 sec)     1.49 (480 sec)
 54             2.32 (750 sec)     1.81 (585 sec)
 44             4.53 (1450 sec)    1.96 (630 sec)
> At one time, it was possible to download manuals as PDF files. I
> don't know if it still is.
>
> Louis Krupp
>
It still is !
Leif
> J Ahlstrom wrote:
>> Anywhere I can get information on pre-ASD and ASD memory models?
> I have a B5900 reference manual and some other antiques. If you're
> anywhere near Boulder, Colorado, you can come look at them.
> Speaking of antiques, I know where you might be able to get a 2.1
> MCP listing (that may help explain the memory model).
How many Ilene's tall is the MCP listing?
--Lee
Between the original, straightforward 20-bit address space (where
descriptors contained an actual address rather than an ASD number) and
the modern ASD architecture was this kludge which, as someone else
remembered, was called ASN, for Address Space Number. I don't know of
any extant reference, but here's the gist.
It's worth noting that it was universally understood that the ASN
architecture was nothing but a stopgap that would be superseded by a
better way of extending memory ASAP. It just took a while to evolve the
hardware. (The software required changes too, but I'm pretty sure the
hardware was the bottleneck. After all, the software engineers spent a
lot of time coding for the ASN kludge.)
You take the 6-meg address space and chop it in half. One half is
global, accessible to all processes and processors at all times. The
other half is local, fully addressable by processes assigned to it but
totally invisible to processes in other local boxes. Each local box was
assigned a unique ASN, hence the name of the architecture. Once
initiated, a process could not alter its ASN and was thus stuck with
the resources of whatever local box it was initiated in.
This meant that all necessarily global code and data had to be in the
global box -- much of the MCP, most MCSs, buffers and code for
databases shared by more than one ASN. As a result, a lot of time and
agony went into figuring out what to keep in what ASN, and in
particular figuring out which databases (and all their users!) could
run in a single ASN so that the buffers could use local rather than
global memory.
The user interface to the ASN was via a SUBSYSTEM attribute -- long
since deimplemented with alacrity at the first possible moment.
Subsystems had names rather than numbers, and could include multiple
ASNs, including optionally the global ASN.
There were two main variants of the ASN architecture. The earlier, used
with the B6800 and others, tied the local memory to a specific CPU --
the hardware of each CPU was still only capable of addressing 6MB.
Peripheral devices might be local to a CPU, and thus processes
requiring those devices had to be assigned to subsystems with access to
the devices. The ASN -- the number itself -- was hard-coded in the CPU.
The later variant decoupled the ASN from the CPU and peripherals. At
this stage the hardware at a low level was capable of addressing much
more memory, and all peripherals were shared by all CPUs. An ASN was
simply a window into the memory. CPUs could service any ASN, including
multiple CPUs in a single ASN, and there could be many more ASNs than
CPUs -- all impossible with the first variant, when the CPU and the
memory were inextricably coupled. The ASN -- the number itself -- was
simply a CPU register set when a CPU was assigned to a task. This
allowed total memory sizes to expand dramatically. However, global
memory was still 3MB, and this limitation became even more aggravating
(some would say intolerable) at this point. But at least one could play
the games with a lot more local ASNs. For example, install more memory,
add an ASN, and run that new database and all its applications there --
a lot easier and cheaper than a whole new CPU.
At this stage, in effect memory was addressed as follows: if the
high-order address bit is zero, leave it. If it's one, extend the
address by replacing the high-order bit with the current ASN, resulting
in more address bits to send to the physical memory.
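That last addressing rule can be made concrete. The 20-bit process address space is from the text above; the ASN width and the exact physical layout in this sketch are assumptions for illustration.

```python
# Illustrative sketch of the ASN address extension described above.
# High bit 0: global half, address used as-is.
# High bit 1: local half, the high bit is replaced by the ASN,
#             yielding more address bits for the physical memory.

ADDR_BITS = 20               # 2^20 words, the 6 MB space
HALF = 1 << (ADDR_BITS - 1)  # high-order bit splits global/local

def extend(addr, asn):
    """Map a 20-bit process address to an extended physical address."""
    if addr < HALF:
        return addr                          # global half, unchanged
    return (asn << (ADDR_BITS - 1)) | (addr - HALF)
```

With this layout the global half occupies one fixed region of physical memory, while each ASN's local half lands in its own disjoint window.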
OB Swapper: well, I didn't use Swapper enough to remember exactly how
it worked in the ASN architecture. I think it eventually evolved to the
point that swap spaces could be running in multiple ASNs and tasks
could migrate from one to another. But I may be misremembering that.
Well, I started out to give a one-paragraph intro to ASN and kept
running off at my fingers ... there were other aspects, but I think
this covers most of the important ones.
Edward Reid
OK, obviously I'm disremembering which wings of the DHSMV B6700 had
memory ... hmm, also I think by the time we got it up to one meg, we
were using thin film memory rather than core.
Edward Reid
If you had the money to max out your system on memory -- absolute limit
of 6 meg due to the address space, and someone mentioned that some
systems were limited to less -- then indeed Swapper helped with this
limit.
My point was that very few sites in the early 1970s hit the hardware
limit, because the memory was too expensive. Their memory bottleneck --
their critical path, in a sense -- was driven by the cost, not by the
architecture. By the late 1970s this was changing rapidly, and the
architectural limit was definitely a serious issue for a great many
sites.
Edward Reid
It all depended on what kind of mix you were trying to handle. For the
ideal case for which Swapper was designed -- a pure interactive load
with programs waiting long times between inputs -- Swapper could work
very well. When you strayed from this path, though, it didn't work so
well. If for any reason you swapped out processes that needed to run
again very soon, you just incurred a lot of disk I/O and a long delay
without saving a significant amount of memory.
Single-user programs run through CANDE were great Swapper candidates.
Transaction processors that handled very few transactions (say one per
minute) were great candidates. Busy TPs were poor candidates for
swapping, and batch programs were terrible candidates.
But Swapper wasn't very flexible, and for practical purposes you had to
decide at halt/load time how much memory to allocate to Swapper. This
meant that at some times you would have to run batch tasks or busy TPs
in Swapper, since dynamically shrinking and expanding the swap space
was not really feasible. Thus you got into situations where you were
running stuff in Swapper that didn't really belong there, just because
that's where the available memory was.
And of course at a lot of sites, someone locked in to the promise of
Swapper and tried to use it even though their mix wasn't a good fit.
IIRC, the documentation at that time was a bit too vague on how to
determine what would really run well in Swapper. (The documentation in
general is far more extensive today than it was in 1975.)
> running large scale (multi-megaword) econometrics programs
> on 5500s, later 6700s, later 7700s, .... currently Asomethings.
Probably NXsomethings ;-)
Edward Reid
Edward Reid wrote:
>
>
> It all depended on what kind of mix you were trying to handle. For the
> ideal case for which Swapper was designed -- a pure interactive load
> with programs waiting long times between inputs -- Swapper could work
> very well. When you strayed from this path, though, it didn't work so
> well. If for any reason you swapped out processes that needed to run
> again very soon, you just incurred a lot of disk I/O and a long delay
> without saving a significant amount of memory.
>
> Edward Reid
Of course.
Don't use a tool for something it isn't appropriate for.
We had exactly the large, single-user programs with long compute
time (10s or seconds to 1s of minutes) separated by comparable
waits for user input that it was designed for.
>Actual *core* memory was much larger than that. An "A" size cabinet ( 2' x
>6' x 6' ) only held 3 16K word modules or about 288KB. You could not
>configure a 6700 with more than 3 MB of core due to the fact that the cables
>could be no longer than 100 feet and, because of cooling problems, you
>couldn't pack enough cabinets in a space reachable by 100 ft cables. Prior
...
And the way the system worked you had cable driver cards that pushed the
power down the cable at speed. And boy was it spectacular when one of
them failed! Luckily they were at the top of the cabinet so the sparks
and smoke didn't damage anything else! Fourth of July time... 8-))
--
I don't know anything about Swapper, but it sounds vaguely similar
to an algorithm I used for executable code segments (which could
be reloaded from disk and never needed saving).
Use of a segment set a bit. At timing intervals each segment's
MRU (most recently used) word was shifted right, with the MSB set
first if the use bit was on. This was used to select a segment to
discard by a simple scan for the smallest MRU field. There were no
more than 31 segments allowed, and the 32nd was a segment map.
When a segment was needed, if space was available just load it.
If not, select a discard and compact the memory (this was a very
simple minded algorithm). Repeat until space available at the
top.
The result was that heavily used segments migrated towards lower
memory, thus minimizing the copying in the compact step.
Something like self-organizing lists.
To use it efficiently you had to partition the code segments to
minimize inter-segment calls. But it would adapt quite nicely to
a common subroutines segment called often by other segments. The
code involved was non-position sensitive in that all jumps were
self-relative, and intersegment calls were table driven in terms
of segment/offset. Local calls went through the same table with
self-relative locations. I could run a compiler in 10% or less of
its total code space.
The segment map held things such as segment size, memory location,
offset in code file, loaded bit, the MRU field, etc.
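The aging and victim-selection parts of the scheme described above might look like the following. A minimal sketch with invented names and an assumed word width; the compaction and segment-map machinery are omitted.

```python
# Sketch of the MRU-aging segment replacement described above:
# use sets a bit, a periodic tick shifts each segment's MRU word
# right (setting the MSB if the segment was used), and the discard
# victim is the segment with the smallest MRU field.

WORD_BITS = 16  # assumed width of the MRU word

class Segment:
    def __init__(self, name):
        self.name = name
        self.mru = 0       # aging shift register
        self.used = False  # set on each reference

def tick(segments):
    """Timing interval: age every segment's MRU word."""
    for s in segments:
        s.mru >>= 1
        if s.used:
            s.mru |= 1 << (WORD_BITS - 1)
            s.used = False

def pick_discard(segments):
    """Simple scan for the smallest MRU field."""
    return min(segments, key=lambda s: s.mru)
```

A segment referenced on every tick keeps a large MRU value; one untouched for several ticks ages toward zero and becomes the discard candidate, much like the self-organizing lists mentioned above.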
--
Chuck F (cbfal...@yahoo.com) (cbfal...@XXXXworldnet.att.net)
Available for consulting/temporary embedded and systems.
(Remove "XXXX" from reply address. yahoo works unmodified)
mailto:u...@ftc.gov (for spambots to harvest)
...
>You take the 6-meg address space and chop it in half. One half is
>global, accessible to all processes and processors at all times. The
>other half is local, fully addressable by processes assigned to it but
>totally invisible to processes in other local boxes. Each local box was
...
Is that right? I seem to remember you could split it up in variable
ways, i.e. it didn't have to be a 3/3 split. No? Or maybe it was just
that you didn't have to actually HAVE 3Mb in global...???
--
early TSS/360 was even worse than this. When an interactive task was
re-activated, TSS/360 would copy its page set from 2311 disk to 2301
fixed-head "drum". When that was done, it would start executing the
interactive task. When the interactive task suspended, all of its
pages would be copied from the 2301 back to the 2311.
when i was rewriting much of the cp/67 code in the '60s, I believed
that everything needed to be dynamic adaptive, and you didn't do
anything that you didn't need to do. With things like process
suspension (like interactive waiting for event or scheduling decision
for contention for real memory), if dynamic adaptive indicated high
enough real storage contention, the suspension code would gather all the
task's pages and queue them for (effectively) block page out ... but
possibly wouldn't actually start the writes (unless real storage
contention was at a higher level, since there would likely be some
probability of reclaim). If dynamic adaptive indicated a much lower
level of real storage contention, it would do even less ... so there
was a very high probability of re-use/reclaim.
In very late 70s (probably '79), somebody from the MVS group (that had
just gotten a big award) contacted me about the advisability of
changing VM/370 to correspond to his change to MVS; which was at task
suspension don't actually write out all pages ... but queue them on
the possibility that the writes wouldn't actually have to be done,
because the pages could be reclaimed. My reply was that I never could
figure out why anybody would do it any other way ... and that was the
way that I had always done since I first became involved w/computers
as an undergraduate.
This was not too long after another MVS gaffe was fixed. When POK was
first working on putting virtual memory support into OS/370
(aos2,svs,vs2), several of us from cambridge got to go to POK and talk
to various groups. One of the groups was the POK performance modeling
people that were modeling page replacement algorithms. One of the
things that their (micro) model had uncovered was that if you select
non-changed pages for replacement before changed pages (because you
first had to write changed pages, non-changed pages you had some
chance of re-using the copy that was already on disk and so could
avoid the write) you did less work. I argued that page replacement
algorithms were primarily based on approximating LRU-type methodology
... and choosing non-changed pages ahead of changed ones violated any
reasonable approximation of the algorithm. In any case, VS2/SVS was shipped
with that implementation. Well into the MVS cycle, somebody discovered
that the page replacement algorithm was choosing shared, high-use,
resident, linklib program pages before simple application data pages
for replacements (even tho simple application data pages had much
lower use than high-use shared resident linklib a.k.a. system pages).
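The failure mode described above shows up even in a two-page toy example: a clean-first policy evicts a hot, clean, shared code page ahead of a cold, dirty data page, while a plain LRU approximation does the opposite. The page records and field names below are mine, not the SVS implementation.

```python
# Toy model of why "replace non-changed pages first" fights LRU.

def evict_clean_first(pages):
    """SVS-style: clean (non-changed) pages go before dirty ones;
    recency is only a tie-breaker, so LRU order is broken."""
    return min(pages, key=lambda p: (p["dirty"], p["last_use"]))

def evict_lru(pages):
    """Plain LRU approximation: oldest reference goes first."""
    return min(pages, key=lambda p: p["last_use"])

pages = [
    {"name": "linklib code", "dirty": False, "last_use": 99},  # hot, shared, clean
    {"name": "app data",     "dirty": True,  "last_use": 1},   # cold, private, dirty
]
```

Clean-first picks the high-use linklib page for replacement; LRU picks the cold application data page, which is what any reasonable approximation should do.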
>How many Ilene's tall is the MCP listing?
>
The current MCP is just under 1.25 Million lines of code. This does not
include utilities, MCS/COMS (terminal session handling code), or networking.
I *think* it includes device drivers (sort of), file system, and some parts
of the kernel (NX machines actually put a lot of interrupt and CPU scheduling
in separate, specialized, processors that run their own code). I assume
memory management is part of the MCP.
- Tim
The Ilene reference has to do with a female programmer on the 7700 MCP
team. Folks would measure the height of the MCP listing versus Ilene's,
and convert inches to "Ilenes." Does anyone else remember this?
--Lee
> Tim McCaffrey <t...@spamfilter.asns.tr.unisys.com> wrote:
>
>>In article <a5ldot$jn$1...@tyrol.bertagnolli.net>, l...@tyrol.bertagnolli.net
>>says...
>>>How many Ilene's tall is the MCP listing?
> The Ilene reference has to do with a female programmer on the 7700 MCP
> team. Folks would measure the height of the MCP listing versus Ilene's,
> and convert inches to "Ilenes." Does anyone else remember this?
I think I remember Ilene (or was it Eileen?) Berger (?) who ran the
group responsible for a lot of the software (including GEMCOS). She
walked up to the podium once at CUBE, peeked over, and said, "I *am*
standing up!".
Louis
For the edification of those not quite as old and crusty as those who
lived through these times, I would like to point out that "memory was
too expensive" in this context is "a buck a bit" (and a dollar could
buy something back then!!).
>
>Edward Reid
>
--
Peter Ingham
Lower Hutt
New Zealand
Hans
J Ahlstrom <jahl...@cisco.com> wrote in message
news:3C7D26ED...@cisco.com...
That would be Eileen Boerger who was a software manager in Mission Viejo
when the standard was created and was, shall we say, vertically challenged.
Unfortunately we may never know how big the MCP has gotten because:
a) The standard was lured away by another corporation years ago and nobody
thought to make a copy.
and
b) It's been years since anyone had the patience to try to run off an entire
MCP listing on the lineprinter
John Keiser
If I had to guess, I would say that it is at least 3 standard Eileens and
possibly 4.
The only constraints on the split were the physical sizes of the memory
modules themselves. Of course, if you wanted to get any useful work done, a
3/3 split was strongly advised, at least on the B6800.
John Keiser
I helped field test the first 6800 Global Memory system back when I was with
the big B. You are right, it didn't have to be a 3MB/3MB split. You could
put 2MB in Global and then put 4MB in each local processor. It depended on
what your processing environment needed. We mostly ran tightly coupled
(generally referred to as tightly crippled). Biggest issue was that each box
including the GM had its own clock source. That meant any memory access to
GM required clock syncing overhead. In those days access to local memory
(IC by now) was 3-4 clocks. Access to GM was 7+ clocks (note lack of
specified upper limit). We quickly learned to keep things out of GM if you
wanted any performance.
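Taking the quoted figures at face value (3-4 clocks for local memory, 7+ for GM), a weighted average shows why keeping things out of GM mattered so much. The numbers are the post's, the averaging is mine, and this ignores the unbounded upper tail on GM access.

```python
# Back-of-the-envelope cost of mixing local and Global Memory references.

def avg_access_clocks(gm_fraction, local=3.5, gm=7.0):
    """Weighted average clocks per memory access, given the fraction of
    references that go to Global Memory (local ~3.5 clocks, GM ~7)."""
    return (1 - gm_fraction) * local + gm_fraction * gm
```

Even sending half your references to GM inflates the average access time by 50%, before clock-sync overhead variability is counted.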
Chris Bremer
My recollection is that she left the company somewhere in the very late
1980's or early 1990's after a large project she had championed (and used to
underpin several others architecturally) failed to meet either performance
or market acceptance expectations.
Her physical stature was indeed striking! And I had heard of this unit of
measure, and its association with her, at the time.
The unit of measure, in that case, would be the "Eileen".
-Chuck Stevens
"Louis Krupp" <lkr...@NOSPAMPLEASE.pssw.com> wrote in message
news:3C7E806C...@NOSPAMPLEASE.pssw.com...
<<history snipped>>
it seemed like a vast majority of operating system, control program,
kernel state of the art in the 60s, 70s and even into the 80s appeared
to assume system reaching some relatively static steady state (also an
issue observed with tcp slow-start in the late '80s).
i had reached a working hypothesis that the people typically
responsible for kernel programming spent almost all of their time
dealing with binary yes/no, on/off, true/false situations resulting in
a fairly entrenched mind set. dynamic adaptive was a significant
paradigm shift which was more characteristic of the OR crowd
implementing fortran and apl models.
To dynamically adapt programming style ... even within a span of a
couple machine instructions ... didn't seem to be a common occurrence. In
fact, some number of people complained that they couldn't understand
how some of the resource manager was able to work ... there was a
sequence of a few machine instructions flowing along a traditional
kernel programming paradigm dealing with true/false states .... and
then all of a sudden the machine instruction programming paradigm
completely changed. In some cases I had replaced a couple thousand
instructions implementing n-way state comparisons with some values
that were calculated someplace else, a sorted queue insert, and some
simple value compares and possibly a FIFO or LIFO pop off the top of a
queue (although I do confess to having also periodically rewritten common
threads thru the kernel ... not only significantly reducing the
aggregate pathlength but also sometimes making certain effects
automagically occur as a side-effect of the order that other things
were done, my joke about doing zero pathlength implementations)
boyd, performance envelopes, and ability to rapidly adapt:
http://www.garlic.com/~lynn/subtopic.html#boyd
recent dynamic adaptive related thread (check for the "feed-back" joke
now 25 years old):
http://www.garlic.com/~lynn/2002c.html#11 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002c.html#12 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002c.html#13 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002c.html#16 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?
scheduler posts
http://www.garlic.com/~lynn/subtopic.html#fairshare
virtual memory posts
http://www.garlic.com/~lynn/subtopic.html#wsclock
> If I had to guess, I would say that it is at least 3 standard Eileens and
> possibly 4.
Now that you mention it, yes, I think the term I heard was "standard
Eileen", not just "Eileen".
> a) The standard was lured away by another corporation years ago and nobody
> thought to make a copy.
Don't understand why a manager recently (but before the "package") retired
from Tredy (?) doesn't count. Neither vertical dimension nor corporate
clout were that much different, IIRC.
-Chuck Stevens
"Skip Ingham" <Skip....@unisys.com> wrote in message
news:a5itmn$25s6$1...@si05.rsvl.unisys.com...
B1000 Cande did/does a lot of things the current large system cande doesn't
do. Still miss PAGE LIT command. Skip
"Keith Landry" <klan...@bellsouth.net> wrote in message
news:3C7C0731...@bellsouth.net...
The B1000 CANDE did in-line saves.
Bob Rahe wrote:
In article <3C7BF2F6...@brutele.be>,Gordon DeGrandis
<gordon.degrand...@brutele.be> wrote:
That is the way I remember it. We even implemented a chargecode system in
CandE so we could charge back usage. CandE got better when the file
subsystem was revamped with the PAST and FAST. Otherwise CandE was really
slow, especially when it was also running production stuff as well. We
The problem with Cande was/is that it wants to rewrite the whole workfile
when it does updates/saves. We actually had a guy who wrote a Cande-
lookalike on the 5500 then 6700 which would do the simple fixes in-line,
i.e. rewrite records that were edited if possible rather than rewrite
the whole file for a simple change in one line. (Did a bunch of other
things faster too...)
also used Swapper, anyone remember Swapper?
Ye gads what a horrible thing THAT was... I wrote a program that would
display the position of up to 70 or so tasks on a dumb terminal, i.e. you
could watch as tasks moved from disk to memory and back out again. All
because memory was so expensive and you could only get to 6 megs on those
early machines (and 6 megs of core was as big as a house!)
Skip
"David Galvin" <dga...@nospam.allmerica.com> wrote in message
news:h3xf8.24294$0C1.2...@newsread1.prod.itd.earthlink.net...
I've done a quick calculation of a straight listing, without compiler
formatics, of 1,244,256 lines. At 60 lines per page, 3100 pages per box
of paper, and 11 inches per box, we get ((1244256/60)/3100)*11 = 73.6
inches. Not even two Eileens.
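For what it's worth, the stack-height arithmetic with the stated count of 1,244,256 lines works out like this (the per-page, per-box, and per-inch constants are the ones from the post):

```python
# Height of a straight MCP listing, using the figures quoted above.
lines = 1_244_256
pages = lines / 60            # 60 lines per page
boxes = pages / 3100          # 3100 pages per box of paper
inches = boxes * 11           # 11 inches of paper per box
```

That comes to roughly 73.6 inches of fanfold paper.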
An older MCP (41.253) printed with more modern technology (a page
printer with about 100 lines on each side of the page) takes up about
four feet of linear space in binders in the office down the hall.
--
Tom Herbertson
Unisys (Net2: 656 6427) Mail Stop 320, Mission Viejo CA 92691-2792 USA
Voice: +1 949 380 6427 mailto:tom.her...@unisys.com (office)
FAX: +1 949 380 6560 or mailto:herbe...@cox.net (home)
- My opinions are my own; I do not speak for Unisys or anyone else -
Yes, but you must include XREFs. I remember the ritual of moving
the official 34 listing (with XREFs) from the old computer room to
Eileen's office. As I recall it was more in the 600-700K line range,
and it was more than one standard Eileen in height, even then.
> Page lit .target.
> Command would only offer a page of lines that the target appeared in. You
> then could change or leave alone any line and transmit the page back, fanning
> the lines back into the source file.
>
> Skip
> "David Galvin" <dga...@nospam.allmerica.com> wrote in message
> news:h3xf8.24294$0C1.2...@newsread1.prod.itd.earthlink.net...
>
>>What's PAGE LIT? Is (was) it the same as the current FIND LIT :P command?
I'm not sure I fully understand the description, but you might want to
try either
]SEARCH /target/
or
]+S /target/
in Editor and see if it does what you want. The first is much like ]FIND
in that it shows all of the lines with the target. SPCFY on any and you
go there for editing. Enter ]SEARCH again to go back or ]+SEARCH for
more. The difference between ]+S and ]+F is that the +S takes you to the
next page containing the target, but F takes you to the next line
containing the target and offers that line for editing.
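As a rough model of that distinction (this is not the real Editor code; the page size and function names are mine): F-style stepping finds the next *line* containing the target, while S-style stepping finds the next *page* containing it.

```python
# Toy model of the ]+F (next line) vs ]+S (next page) distinction.

def next_line_with(lines, target, start):
    """]+F-style: index of the next line containing target, from start."""
    for i in range(start, len(lines)):
        if target in lines[i]:
            return i
    return None

def next_page_with(lines, target, start_page, page_size=3):
    """]+S-style: number of the next page with any line containing target."""
    npages = (len(lines) + page_size - 1) // page_size
    for p in range(start_page, npages):
        chunk = lines[p * page_size:(p + 1) * page_size]
        if any(target in ln for ln in chunk):
            return p
    return None
```

With a hit on every page, S-stepping visits each page once, while F-stepping stops at every individual matching line.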
> Skip Ingham wrote:
>
>> Page lit .target.
>> Command would only offer a page of lines that the target appeared in. You
>> then could change or leave alone any line and transmit the page back
>> fanning
>> the lines back into the source file.
>>
>> Skip
>> "David Galvin" <dga...@nospam.allmerica.com> wrote in message
>> news:h3xf8.24294$0C1.2...@newsread1.prod.itd.earthlink.net...
>>
>>> What's PAGE LIT? Is (was) it the same as the current FIND LIT :P
>>> command?
>>
>
> I'm not sure I fully understand the description, but you might want to
> try either
> ]SEARCH /target/
> or
> ]+S /target/
> in Editor and see if it does what you want. The first is much like ]FIND
> in that it shows all of the lines with the target. SPCFY on any and you
> go there for editing. Enter ]SEARCH again to go back or ]+SEARCH for
> more. The difference between ]+S and ]+F is that the +S takes you to the
> next page containing the target, but F takes you to the next line
> containing the target and offers that line for editing.
>
>
Oh, and MORE continues the list from where you left off, rather than
starting the find or search from the beginning.
Eileen Boerger.
Yeah, she had a good sense of humor.
On Thu, 28 Feb 2002 17:51:09 -0500, Tom Herbertson wrote
> I've done a quick calculation of a straight listing, without compiler
> formatics, of 1,244,256 lines. At 60 lines per page, 3100 pages per box
> of paper, and 11 inches per box, we get ((1244256/60)/3100)*11 = 73.6
> inches. Not even two Eileens.
Heck, back in the mid-70s I was printing the MCP to microfiche. Just a
small box of fiche, maybe a quarter the size of a card box. Of course,
the MCP was also only about 150,000 lines in those days.
A few years later I was printing it on a page printer (Xerox 9700) in
portrait format, 132 columns and 132 lines, duplex. Yep, small print,
but my eyes were younger then. Pre-punched paper. I think it took five
3" binders, or maybe they were 4". That included the xref.
Even Eileen wasn't that short.
It really did depend a lot on the weight of the paper. Classic fan-fold
computer paper varied a lot more in weight than modern copy paper does.
Edward Reid
> That would be Eileen Boerger who was a software manager in Mission Viejo
> when the standard was created and was, shall we say, vertically challenged.
FYI... http://www.prodx.com/pages/company/staff_boerger.htm
Louis
-Chuck Stevens
"Louis Krupp" <lkr...@NOSPAMPLEASE.pssw.com> wrote in message
news:3C7FB0F...@NOSPAMPLEASE.pssw.com...
They say there that
"Prior to this she held a variety of management positions at Mentor
Graphics Corporation, including Vice President, Intellectual Property
Partnerships, Vice President of the Design Flows Strategic Business Unit
and Vice President of Corporate Engineering."
Nothing about Unisys or Burroughs before that. And it really is the same
person.
>The Ilene reference has to do with a female programmer on the 7700 MCP
>team. Folks would measure the height of the MCP listing versus Ilene's,
>and convert inches to "Ilenes." Does anyone else remember this?
That's an Eileen, which is slightly less than a Smoot. I worked for Eileen
(she was my boss's boss's boss). The partitions at the Lake Forest facility
were 5' for some cubicles and 6' for other cubicles. It was considered a
status symbol to have tall partitions, which led to some grumbling. Eileen
said "they're all tall to me".
--
RB |\ © Randall Bart
aa |/ ad...@RandallBart.spam.com Bart...@att.spam.net
nr |\ Please reply without spam 1-917-715-0831
dt ||\ Here I am: http://RandallBart.com/ I LOVE YOU
a |/ He Won't Get Far: http://www.callahanonline.com/calhat9.htm
l |\ DOT-HS-808-065 MSMSMSMSMSMSMS=6/28/107 Joel 3:9-10
l |/ Terrorism: http://www.markfiore.com/animation/adterror.html
>It's worth noting that it was universally understood that the ASN
>architecture was nothing but a stopgap that would be superseded by a
>better way of extending memory ASAP. It just took a while to evolve the
>hardware. (The software required changes too, but I'm pretty sure the
>hardware was the bottleneck. After all, the software engineers spent a
>lot of time coding for the ASN kludge.)
I hadn't heard before that ASN was seen as a stopgap. I think it was over six
years between the introduction of ASN and ASD. The term "ASN memory" was a
retronym. The B7800 broke the 6MB limit with address spaces. Address spaces
were identified by number. The words "address space number" were abbreviated
ASN, so pretty soon address spaces were called ASNs. But it wasn't called
the ASN memory model until there was another memory model (ASD) to contrast
it with.
The similarity of the names ASD and ASN was unfortunate. Calling the MCP
which supported ASD MCP/AS was just stupid.
The library attribute SHARING has the value DONTCARE. This is now a synonym
for PUBLIC, but in the days of ASN memory, DONTCARE meant share by address
space. If SHARING is PRIVATE, each program gets its own copy of the
library. If SHARING is PUBLIC, every calling program shares the same copy,
but in order for programs in different ASNs to see it, it must be in global
memory. Global memory was usually a scarce resource. By specifying
DONTCARE, you could keep the library out of global memory without having one
copy for each calling program.
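The three SHARING values described above differ simply in how many copies of a library exist across address spaces. A hedged illustration (the counting function and data shape are mine; only the per-value semantics come from the post):

```python
# Illustration of SHARING semantics under the ASN memory model (not MCP code).

def library_copies(sharing, callers_by_asn):
    """How many copies of a library exist, given callers grouped by
    address space, e.g. {1: ["A", "B"], 2: ["C"]}."""
    if sharing == "PRIVATE":                       # one copy per calling program
        return sum(len(c) for c in callers_by_asn.values())
    if sharing == "PUBLIC":                        # one shared copy, but it must
        return 1                                   # live in (scarce) global memory
    if sharing == "DONTCARE":                      # one copy per address space,
        return len(callers_by_asn)                 # none of it in global memory
    raise ValueError(sharing)
```

DONTCARE thus lands between the two extremes: fewer copies than PRIVATE, without PUBLIC's demand on global memory.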
Credit for the memory address space expansion effort goes all the way
back to the B6800 and Global Memory. This was the first MCP System to
allow use of a physical memory space in excess of 6MB. This
architecture bound a cpu/mpx to a physical local memory with a common
shared global space. It was this system, and MCP 3.1, that brought us
terms like tightly-coupled (later referred to as thoroughly crippled),
groups (which were logical address spaces) and a host of task
attributes having to do with address visibility.
This architecture was indeed a stop gap and everybody knew it. There
was a proposal that actually made it into field documentation that
described a hierarchical global memory that allowed a 10x 6800 to be
built using 3 GMM cabinets .... fortunately this never saw the light
of day.
The 7800 mimicked the 6800 GMM design without the need for the GMM
hardware; the boys in Tredyffrin figured out how to use the existing
7800 hardware to deliver capability that matched, and extended, the
6800 design. Specifically the 7800 implementation extended the design
to allow multiple CPM/IOM to be bound to a common local address space.
The 6900 and 5900 both used the 6800 design and required GMM hardware.
One of the most spectacular things I recall was watching a 3x5900 with
GMMII halt-load ... I swear it took 30 minutes, the whole time the
ODTs were spewing maintenance messages in rotation.
The A9 was next, this machine was the first A Series and the first
hardware implementation to support emode Beta, the instruction and
stackformat set required for ASD memory. When released the ASD MCP
was not yet ready, so, it went out in ASN mode where the A9 CPU/IO
could float from address space to address space. The ASD MCP would
come later. You have to remember that this all happened at the same
time that the MCP evolved from ESPOL to NEWP ... Indeed exciting times
in Mission & Tredyffrin.
The 7900 was the stopgap of all stopgaps. It was a turn on the 7800
CPU and an adapt of the DLP IO started on the medium systems and
extended to the 5900, 6900 and A9. While it was an improvement over
the 7800 in many ways, it was a beast to administer. It was an ASN
mode system only but implemented the floating address space like the
A9. Later the code and data address spaces were split in an attempt
to expand the 6MB address window to 12MB. It helped some and
frustrated others.
The A15 hit the streets using the 7900's IO subsystem but with a new
processor and memory that was designed for ASD addressing as well. It
shipped with an ASN MCP but everybody who bought one bought it for ASD
addressability which was delivered some months after the initial
shipment. Again, the newification effort probably took a larger toll
on timing than the ASN MCP vs ASD MCP did.
There were other, lesser systems in the stable during these times, but
it was the 6800, A9 and A15 that moved the architecture, all without
once requiring that a user recompile his code. The A16 was the system
that really finished the evolution and formed the architectural basis
for a whole host of systems from the A11 to the 6820/30.
As for MCP vs. MCP/AS .... you need to put it in context as well.
MCP/AS was the Burroughs response to IBM's XA Architecture. As is
often the case, marketing needs hype, and in this case a response to
our clients' questions after their IBM rep had come to visit.
Jim, sorry, but that's not right. The MCP had already been converted
to NEWP for some time by the time I joined Burroughs, Mission Viejo
plant, in '83. ESPOL was only history by then. The A9 didn't come
along until after a bunch of us were moved to the new Lake Forest
plant (which was actually in Irvine) several years later.
<snip>
--
Jeff J. Wilson [ jeff....@unisys.com ]
...speaking only for myself
Vanguard of the 13er generation.
I'm pretty sure that Jim just meant the NEWP conversion occurred during
the change in memory architecture generally. That's definitely true, as
I'm pretty sure the ESPOL MCP ran on the B6800.
Edward Reid
-Chuck Stevens
"Jeff J. Wilson" <jeff....@unisys.com> wrote in message
news:3C83BEA9...@unisys.com...
<<snippage> >
> The 7900 was the stopgap of all stopgaps.
Let us not forget one of its predecessors, the so-called "B7000". This
was the name given to a system that had a combination of B7700 and B7800
CPMs.
> It ran the 31TC MCP.
I forgot to mention the year: circa 1981.
>
> I'm pretty sure that Jim just meant the NEWP conversion occurred during
> the change in memory architecture generally. That's definitely true, as
> I'm pretty sure the ESPOL MCP ran on the B6800.
I worked at the first site, Ford Supply Staff, that had a Global(tm)
Memory 6800 without being a test site (I think the test site had been
Marathon Oil). It ran the 31TC MCP. This version of the MCP symbolic
included both ESPOL and Newp constructs, the Newp constructs set off by
some punctuation that I've now forgotten. It was the beginning
both of Newp and the converged MCP symbolic. In previous releases, there
had been both Mission Viejo (B6000) and Tredyffrin (B7000) symbolics of
many software items; while patches were regularly exchanged between the
facilities, there were different features in the two, so not all patches
went into both. There was some controversy over which features should
remain in the common symbolic. I recall people really missing the
in-line fix from B7000 CANDE. It was a feature where you'd type "FIX
<sequence number>" and nothing else. The line would then be typed at
your terminal, and you typed characters underneath it where you wanted
to insert (I think a period) and delete (I think a slash). It was very
much like APL in-line editing. It didn't work very well, though, on
terminals that moved their print head to the right to let you see the
characters; it was hard to tell what character you were going to type below.
Oh, and despite the fact that Ford was not a test site, we did soon
require a visit by implementor Darrell High of the Mission Viejo plant
to work out some problems.
Well, I know people were dissatisfied with the global memory model and
were trying to come up with a replacement when I joined up. During my
interview one of the managers described GMM and the problems with it
and asked me what I would do about it. That was a tough interview.
Jeff is on point, by the time the A9 hit the streets the newpification
exercise was complete, but Ed captures the spirit of my point, the
architecture changes, change in implementation language and common
symbolic efforts happened at about the same time.
It went well beyond proposal. It was an integral part of the initial design.
It might have seen the light of day but when the project made the transition
to detail design and implementation a new software project leader was
assigned. He took one look at the amount of stuff that would have to go in
the topmost global and the 15+ clock *minimum* access time to get to it and
declared: " This will never fly. I'm not doing it. If you want more than 4
processors, get someone else to do it." There were no volunteers. That's
what killed it.
John Keiser
Dan
On 27 Feb 2002 17:53:44 GMT, b...@hobbes.dtcc.edu (Bob Rahe) wrote:
>In article <01HW.B8A252020...@news-east.usenetserver.com>,
>Edward Reid <edw...@paleo.org> wrote:
>>On Tue, 26 Feb 2002 16:15:43 -0500, Bob Rahe wrote
>>>> also used Swapper, anyone remeber Swapper?
>
>>> Ye gads what a horrible thing THAT was...
>
>>Well, if RAM had remained expensive and disk speeds had improved much
>>more rapidly ... then Swapper would be seen as prescient. It's due to
>>20/20 hindsight that we know it turned out the other way around.
>
> Nah, it was a kludge even then! 8-) Followed closely by the pre-ASD
>type memory model... forget the name right now...
>
>...
>
>>The cost of core memory was the limitation that led to Swapper. It had
>>nothing to do with the 6MB limit, which was perceived as nearly
>>infinite in those days. Only in Swapper's late days was it seen as
>>helping with the address space limit.
>
> Well, I don't know. It certainly did help with the 6M limit because
>one thing swapper did was swap out ALL of a prog. Including the save
>core it used, like the stack etc. If you had lots of things running
>the save core could mount up leaving you with almost nothing to run
>progs with...
There certainly have been lots of attempts to try to fix
and work around this problem.
Does anyone have documentation on the various
features and mechanisms used by
Univac for 1100 - 2200
Burroughs for A Series
IBM for 370 et seq
any other architectures/series that lasted long
enough to face the problem
to overcome their limitations?
For an article for Annals of Computer History
on this topic, I would like documentation
and engineering notes, names of people involved, ...
impact on operating systems, ...
Thank you
John K Ahlstrom
--
"C++ is more of a rube-goldberg type thing full of high-voltages,
large chain-driven gears, sharp edges, exploding widgets, and spots to
get your fingers crushed. And because of it's complexity many (if not
most) of it's users don't know how it works, and can't tell ahead of
time what's going to cause them to loose an arm." -- Grant Edwards
I suspect George Gray might have some of this information - my
recollection of addressing in the 1100/2200s that preceded me is a
little shaky. I'm not sure I can come up with documentation and
engineering notes, but I was (and am) involved in OS adapts for new
addressing enhancements from the 2200/900 onwards ...
> For an article for Annals of Computer History
> on this topic, I would like documentation
> and engineering notes, names of people involved, ...
> impact on operating systems, ...
>
Sounds like an interesting article. I guess I better keep my
subscription current until it appears.
Regards,
David W. Schroth
There have been two mechanisms used for this. Multi-banking in the 1970s,
and the newest one, adding new addressing modes with additional base
registers. I can explain the first well (I went through it), though others
would be better on the second. You need some background on how addressing
was done on these machines to understand the issues though.
> Burroughs for A Series
There has been a recent thread (in comp.sys.unisys) on just this topic.
> IBM for 370 et seq
Others are much better at this than I am, but see the terms XA and ESA.
> any other architectures/series that lasted long
> enough to face the problem
> to overcome their limitations?
>
> For an article for Annals of Computer History
> on this topic, I would like documentation
> and engineering notes, names of people involved, ...
> impact on operating systems, ...
I have some old manuals, but they don't address the topic as a "cure" for
running out of bits. They just explain how the "new" thing works.
As this is a long discussion, why don't you e-mail me privately.
--
- Stephen Fuld
e-mail address disguised to prevent spam
PDP-11 hit it pretty early into its life cycle. One is left
to wonder what DEC was thinking. You will find gobs of materials
about it.
BESM-6 used fixed-size 24-bit instructions. Something like 13 or 14
bits were available for addressing; fortunately, only 48-bit words
were addressed, so decent-sized data sets were accessible well into
the 1970s. Eventually, a fixed-size segmentation similar to the
one used in PDP-11 was introduced. The route of Intelish segment
registers was not taken. In late 80s, a fully 64 bit successor
to BESM-6 was introduced under the name of Elbrus-1KB, which had
a compatibility mode for old 24/48 bit binaries. BESM-6 was
introduced in 1966 and retired in 1992 (or so). Novosibirsk
people ported UNIX v6 to it right before it was retired, using
Johnson's pcc and simulated byte addressing with 6 byte words.
EMT overlays could be used too :)
The French line of Mitra was a 16-bit machine with an accumulator
architecture. Mitra-15 (1975?) had 64KB of core memory. The next
(popular) model was Mitra-225 (1981?) with segment registers,
offset by 4 bits, so 1MB of physical memory was addressable.
Unfortunately, only two usable bases existed, and code and data
had to be matching to get a manageable execution model, which produced
an equivalent of a PDP-11 without I+D. However, multi-user performance
was greatly improved. A gentleman by the name of Mark Venguerov
rewrote K&R C in the assembly of Mitra and added a code generator,
and so we were able to port UNIX utilities (I tried to port whole
UNIX too, but graduated before I was able to finish it :).
We had a passable Usenet server on it, but could only accept
12 bit compressed batches. Decompression of 16 bit compressed
batches required 402K of application data space, and I found
no good way to do it. Later, French came up with Mitra-725,
which had actual MMU underlying the whole addressing of 225.
(anyone feels it similar to 8086, 286 and 386?)
That MMU addressed up to 4MB of RAM, was paged, etc.
The page size was a characteristic 256 bytes. The first thing
I did for my stillborn UNIX port was to define a larger page of 1K
in software, including the 225 variant of it. There really was no
good reason to maintain such a small page size, except compatibility
with the retargeted French OS MTM-2. I/O went through the MMU too, which
was an annoying restriction. The 725 did not have time to take
over from the 225 before the whole line was wiped out by PCs.
HP 3000 was extended way past its initial design, but I did
not work with it. I seem to recall that there was a two-stage
extension, the first was very much in PDP-11 style. A group
of Russian maniacs ported UNIX v7 to it using Johnson's pcc
again, they also had trouble with Usenet feeds. This is all
I know about it :) Next extension supported paged memory,
but I do not remember its address size. Ask HP people.
Norsk Data Nord 500 was a weird mini with something like 19
address bits, and segments. It was extended to support a power
of two pointers and paged memory, or something like that.
It was reasonably popular in Europe and USSR, but fell completely
off the Internet horizon, and you'll never find any decent
materials about it. The whole thing, hardware and software,
was made in Norway! An accomplishment of similar magnitude
today would be if, say, the Swiss built a commercially successful
line of Linux workstations based on Clipper or Axil Antrax-100,
with their own Fortran compiler.
One box which was NOT extended, while I would expect it to be,
was DG Eclipse (wasn't it also called MV?). Probably it had
enough address bits when it was designed (it was a VAX
contemporary after all).
-- Pete
When the 11 was designed (design completed in 1969 IIRC), a 16-bit address
space *was* a major extension over the 12-bit address space that was selling
like hot-cakes on the PDP-8. Early 11 OSs ran fine with 8 KB of physical
memory. And of course if you needed *serious* computing rather than the
kind of point-of-use computing the 8 and 11 were designed for, they'd
happily sell you a 10 (which DG of course couldn't - so if you think *DEC*
was short-sighted back then, what was DG thinking when it designed the
Nova?).
It's true that the *physical* address space got quickly extended to 18 bits
and then to 22 bits as the 11's wild popularity moved it into far more
environments than DEC may have envisioned. But the memory management
hardware had no trouble handling this extension - and I believe the 'hardest
mistake' refers to virtual address space, not physical address space.
The 11 grew to support 4 MB of physical memory, and this was plenty until
long after the VAX existed to satisfy larger needs. But, even though a
single application could use most of this 4 MB, the need to 'overlay' it to
do so was a real pain - though overlaying was a significantly more
powerful/flexible mechanism than the segmentation that the 16-bit PC used a
dozen years after the 11 appeared, and as long as application developers
were able to put in the effort to use it 11s competed very successfully with
VAXes in areas where more than 4 MB of physical memory wasn't useful (many
of which areas persisted well into the '90s).
- bill
If it were me doing the research, I would order the "principles of arch." manuals
for the 360, 370, XA, and whatever they call the 64-bit version.
I would then find some back issues of the IBM J. R&D at the library. And lastly
I would take a look at Lynn Wheeler's web page (he posts here regularly). I have
noticed a few other old beemers posting occasionally, like Julian Thomas JTdidit.
--
Del Cecchi
cec...@us.ibm.com
Personal Opinions Only
> On Fri, 08 Mar 2002 09:38:49 -0800, J Ahlstrom <jahl...@cisco.com> wrote:
> > Does anyone have documentation on the various
> > features and mechanisms used by
> > Univac for 1100 - 2200
> > Burroughs for A Series
> > IBM for 370 et seq
> > any other architectures/series that lasted long
> > enough to face the problem
> > to overcome their limitations?
>
> PDP-11 hit it pretty early into its life cycle. One is left
> to wonder what DEC was thinking. You will find gobs of materials
> about it.
I seem to remember reading something from when they were working on
the VAX that implied DEC was wondering what they'd been thinking,
too...
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
Southwestern NM Regional Science and Engr Fair: http://www.nmsu.edu/~scifair
My memory may be failing here, but I thought that the Eagle was
essentially the Eclipse extended. Wasn't that what Tracy Kidder's "Soul
of a New Machine" was about?
--Mirian
The BBN C-30 IMP had perhaps the most god-awful solution.
They started with a mid-60s Honeywell 16-bit machine (I think) as
the Arpanet IMP, then moved to a microcoded TTL replacement when it
got EOL'ed, as it was cheaper than re-writing code. Eventually they
bolted another 4 bits onto (almost) everything, giving them a 20-bit
machine which could now address a million words of memory.
Unfortunately they had an instruction encoding which only allowed
expression of 9 bits of immediate address or data, so you had to
reserve the bottom of every 512-word page for address and data
constants.
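The constant-pool workaround can be modeled in a few lines: with only a 9-bit immediate field, a full 20-bit address has to live in a reserved slot at the bottom of the current 512-word page, and the instruction carries just the slot index. This is a toy sketch; the pool size and helper names are invented.

```python
# Toy model of per-page constant pools on a 20-bit-address machine whose
# instructions carry only a 9-bit immediate field. Details are invented.
PAGE_WORDS = 512
POOL_SLOTS = 32           # assume the bottom 32 words of a page are reserved

memory = [0] * (1 << 20)  # a million words

def set_constant(page: int, slot: int, value: int) -> int:
    """Store a full 20-bit constant in the page's pool; return the
    small index an instruction's 9-bit field would use to reach it."""
    assert 0 <= slot < POOL_SLOTS and 0 <= value < (1 << 20)
    memory[page * PAGE_WORDS + slot] = value
    return slot           # fits easily in 9 bits

def load_via_pool(page: int, imm9: int) -> int:
    """What an instruction does: treat the small immediate as a
    page-relative pointer to the real 20-bit constant."""
    assert 0 <= imm9 < (1 << 9)
    return memory[page * PAGE_WORDS + imm9]

imm = set_constant(page=3, slot=5, value=0xABCDE)
print(hex(load_via_pool(3, imm)))   # 0xabcde
```

The cost, as the post notes, is that every page loses its bottom slots to pool storage whether it needs them or not.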
They also had a machine called the C-70, which was the same hardware
with microcode that made it look sort of like a 20-bit PDP-11, with
10-bit bytes. It ran some ancient variety of Unix, and was a bitch to
port code to.
--
.....................................................................
Peter Desnoyers (781) 457-1165 pdesn...@chinook.com
Chinook Communications (617) 661-1979 p...@fred.cambridge.ma.us
100 Hayden Ave, Lexington MA 02421
--
+-------------------------------------------------------------+
| Charles and Francis Richmond <rich...@plano.net> |
+-------------------------------------------------------------+
Not really. You could put up to 32K 12-bit words on a PDP-8, using a
bank switching scheme that sort of foreshadowed x86 segments. The
original 16-bit PDP-11 only offered half a bit more than that: 64K
8-bit bytes vs. 32K 12-bit words.
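The "half a bit more" claim checks out with a little arithmetic comparing total addressable storage:

```python
import math

# Maxed-out bank-switched PDP-8 vs. the original PDP-11 address space.
pdp8_bits  = 32 * 1024 * 12   # 32K words x 12 bits
pdp11_bits = 64 * 1024 * 8    # 64K bytes x 8 bits

extra = math.log2(pdp11_bits / pdp8_bits)
print(round(extra, 2))        # about 0.42 "address bits" more capacity
```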
Gordon Bell was a strong advocate of lashing together lots of
computers to do large jobs, so I imagine that 64K bytes seemed plenty
for what would be one of many computers in a larger system. Since
then we've learned more about how hard it is to partition programs to
use lots of little computers, and how hard it is to program anything
that doesn't look like one CPU executing a single stream of
instructions.
> Early 11 OSs ran fine with 8 KB of physical memory.
Well, yeah, so did CP/M which was similarly capable.
> if you think *DEC* was short-sighted back then, what was DG thinking
> when it designed the Nova?).
Ed DeCastro designed the PDP-8, and designed a 16-bit follow-on that
DEC declined to build, so he quit and started DG to build it. At the
time, the competition between the Nova and the PDP-11 was based more
on price and OEM agreements than on architecture.
--
John R. Levine, IECC, POB 727, Trumansburg NY 14886 +1 607 387 6869
jo...@iecc.com, Village Trustee and Sewer Commissioner, http://iecc.com/johnl,
Member, Provisional board, Coalition Against Unsolicited Commercial E-mail
Yes, really: you're confusing physical address space with virtual address
space, and the latter is the subject referred to in this topic. Extending
the 11's initially-limited *physical* address space was hardly the 'hardest
mistake in comp arch to fix'.
> You could put up to 32K 12-bit words on a PDP-8, using a
> bank switching scheme that sort of foreshadowed x86 segments. The
> original 16-bit PDP-11 only offered half a bit more than that: 64K
> 8-bit bytes vs. 32K 12-bit words.
...
> > Early 11 OSs ran fine with 8 KB of physical memory.
>
> Well, yeah, so did CP/M which was similarly capable.
From 30,000 feet, perhaps. But at any finer level of detail I think you'd
find significant differences between RSX-11M and CP/M.
>
> > if you think *DEC* was short-sighted back then, what was DG thinking
> > when it designed the Nova?).
>
> Ed DeCastro designed the PDP-8, and designed a 16-bit follow-on that
> DEC declined to build, so he quit and started DG to build it. At the
> time, the competition between the Nova and the PDP-11 was based more
> on price and OEM agreements than on architecture.
My comment referred to starting a company whose *only* offering was limited
to a 16-bit address space (as contrasted with DEC's ability to offer a 10 to
people who needed something more).
- bill
Joe Pfeiffer wrote:
I seem to recall having a conversation with one of the VMS implementors at a
DECUS meeting. He said that there had been an engineer who had argued that
extending the (virtual) address space from 16 bits to 24 bits, instead of the
proposed 32 bits, would be more than adequate. He said that he doesn't remember
the engineer's name, as he only attended a few meetings, then quietly
disappeared. Wonder why?
Also, I remember the original VAX-11/780s originally had 1MB of memory, with the
option of expanding to a whopping 2MB! It sounds tiny now, but we did support
over 200 concurrent interactive logins running a fairly large application. We
now have 448MB memory supporting about 400 concurrent users, but a whole bunch
of ancillary software as well.
I hope that the mystery (possibly apocryphal) engineer has learned that you can
never have too many address bits. Or at least that the marginal cost of adding a
few bits is nothing compared to the marginal cost of redesigning from the ground
up.
--Carl
It was a pretty hot machine at the time. The main problem was that it was
attached to an ND-100. The ND-100 did all the I/O and quickly became a
bottleneck. Big mistake.
By the time the 500 came out, all the competent people at Norsk Data had
been promoted to management, so the people who did this design were rather
inexperienced. Rumour has it that a company that got one of the 500 PCB cards
for testing wanted to retain one and frame it as a glorious example of how
not to do it.
The ND-100 was the Norwegian equivalent of the PDP-11, but faster. That is
why Norwegians have little experience with PDPs and early VAXen.
I think Norsk Data sold both the ND-100 and the ND-500 series as number
crunchers to CERN.
greetings,
...
> Also, I remember the original VAX-11/780s originally had 1MB of memory,
> with the option of expanding to a whopping 2MB!
My recollection is that the minimum supported VMS V1 configuration was 256
KB of memory and dual RK06 disks. Not that this gave you a system you could
run more than one application at a time on...
- bill
>I hope that the mystery (possibly apocryphal) engineer
>has learned that you can never have too many address bits.
That was the problem...the engineers suffering from small
computer thinking never did learn.
<snip>
/BAH
Subtract a hundred and four for e-mail.
>Gordon Bell has said that the hardest mistake
>to deal with in a computer architecture is too few
>memory address bits.
<snip>
I'd like to know when he finally realized this.
The problem with that is that universal allowance for wide
addressing bloats every instruction. At least in local frame
oriented code, most addresses (90% or more) can be expressed in 7
bits + sign (where sign allows for parameter/local addressing).
So it is advisable to have multiple addressing modes to cater for
the relatively rarely used long addresses. Which in turn means
you have to design the instruction set with this in mind from the
beginning. Then shorter instructions mean better caching, smaller
executables, higher speed, etc.
Obviously this doesn't work too well when backward compatibility
is the prime goal.
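The trade-off above can be sketched numerically: give operands a short form of 7 bits + sign (one byte) with a wide fallback, and if roughly 90% of offsets fit the short form, the average operand stays close to one byte. The encoding layout here is invented purely for illustration.

```python
def encode_operand(off: int) -> tuple[int, bytes]:
    """Return (mode_bit_for_opcode, operand_bytes). Short form is one
    signed byte (7 bits + sign); long form is four bytes. Layout invented."""
    if -128 <= off <= 127:
        return 0, off.to_bytes(1, "little", signed=True)
    return 1, off.to_bytes(4, "little", signed=True)

def decode_operand(mode: int, data: bytes) -> int:
    return int.from_bytes(data, "little", signed=True)

# Nine short offsets and one long one -- the post's 90% estimate:
sample = [7, -3, 100, -120, 42, 5, -1, 64, 19, 100_000]
sizes = [len(encode_operand(o)[1]) for o in sample]
print(sum(sizes) / len(sizes))   # 1.3 bytes average, vs. 4 if always wide
```

Hence the post's point: the instruction set must reserve that mode bit from day one, which is exactly what backward compatibility forbids retrofitting.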
--
Chuck F (cbfal...@yahoo.com) (cbfal...@XXXXworldnet.att.net)
Available for consulting/temporary embedded and systems.
(Remove "XXXX" from reply address. yahoo works unmodified)
mailto:u...@ftc.gov (for spambots to harvest)
If you have an ACM web account, it is available in the ACM Digital Library.
The link to the article "The Evolution of the Sperry Univac 1100 Series: A
History, Analysis, and Projection" by B.R. Borgerson, M.L. Hanson, and P.A.
Hartley is:
http://doi.acm.org/10.1145/359327.359334
Without an ACM web account you can still access the abstract.
The special issue also has articles on the DECsystem 10, IBM 370, Cray, and
others. It is one issue that I have held on to over the years.
Regards,
Mike
<email address mangled to prevent spam>
Which doesn't of course apply at all if the address is not part of
the instruction...
--
Sander
+++ Out of cheese error +++
Sooner or later it is in some form. Which might be load constant
to register, or to top-of-stack, or index to descriptor table, or
whatever.
But this is easily taken care of. You have multiple instructions, one that
loads the low order X bits from the instruction into a register, another
that loads the next X bits of the register from the instruction, etc. This
even elegantly takes care of the case that smaller programs run faster (no
need for the instructions for the high order bits), but you don't need to
expand the instruction size to hold a large constant. Note: I am not
claiming this is a new technique. It has been in use for decades.
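A minimal sketch of that load-in-slices technique, with X = 16 bits per slice (the slice order and helper names are mine; several real ISAs build wide constants with an analogous instruction pair):

```python
XBITS = 16
MASK = (1 << XBITS) - 1

def load_low(imm: int) -> int:
    """First instruction: load the low X bits, clearing the rest."""
    return imm & MASK

def load_next(reg: int, imm: int, slice_no: int = 1) -> int:
    """Subsequent instruction: OR X more bits in at the next position."""
    return reg | ((imm & MASK) << (XBITS * slice_no))

small = load_low(0x1234)                     # one instruction suffices
large = load_next(load_low(0xBEEF), 0xDEAD)  # two instructions for 32 bits
print(hex(large))                            # 0xdeadbeef
```

A program that never needs the high bits simply omits the second instruction, which is the "smaller programs run faster" property the post describes.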
Tarjei T. Jensen wrote:
> It was a pretty hot machine at the time. The main problem being that it was
> attached to a ND-100. The ND-100 did all the I/O and quickly became a
> bottleneck. Big mistake.
<snip>
> The ND-100 was the norwegian equivalent of the PDP-11, but faster. That is
> why Norwegians have little experience with PDPs and early VAXen.
>
> I think Norsk Data sold both the ND-100 and the ND-500 series as number
> crunchers to CERN.
Actually, I think CERN relied on the CDC-6600 and CDC-7600 for number
crunching. The ND-100 machines would be much better used
1) As real-time data collection controllers (driving CAMAC instrument
crates) and
2) As interactive terminal clusters for editing jobs to be submitted to
the CDCs and for playing around with data analysis.
In my second computer job, I joined the company just as we were
selling an instrumentation package (some fairly simple A/D conversion
peripheral) for an ND-10 which was going into the Danish Institute for
Shipbuilding Research. My job was to write and integrate a device
driver. So I was shipped to Oslo, with instructions not to come back
until I knew enough about the SINTRAN operating system to do the rest
on my own.
The operating system was written in a language similar to PL/360 or
BCPL. It was a demand paged system with a flexible, extensible command
language, and designed for a mixed interactive and real-time workload.
The file system had protection mechanisms of essentially the same
power as Unix, but the group concept was replaced with the concept
of "friends", which made it much more flexible. The file name
completion feature was also more flexible and elegant. Hyphens in
filenames were special: If the file (which lived in a directory
of executable programs) was named "list-files-alphabetically", then
"l-fi", "list-f" and "lfa" might all be valid abbreviations.
As it happened, the system had lots of stability problems.
This was in 1975, and CMOS memory was quite new. If I remember
correctly, Norsk Data had chosen to use SRAM because they
believed it would be more stable than DRAM. The system failed in
interesting ways, but failed much less if they were running
memory diagnostics in the background. In the end, it was determined
that the memory chips worked quite well, UNLESS a memory word sat
with an unchanged value for a very long time (such as several days),
in which case it would develop an affinity for that value, and when a
different value was written into it, it might after several hours revert
to the prior value. The problem was that this pattern of access
applied to some very critical operating system tables ... such
as the disk bitmaps. Once the problem was understood, the fix was
to replace the memory with DRAM, which by then had become the
industry standard.
I liked the Nord-10, and its successor, the ND-100. They were PDP-11
class machines, but the operating system was much more elegant than
anything I ever saw on the PDP-11. I am sure this was a direct
consequence of working with a small team, and not having resources to do
more than one system which had to serve for all applications.
--
/ Lars Poulsen +1-805-569-5277 http://www.beagle-ears.com/lars/
125 South Ontare Rd, Santa Barbara, CA 93105 USA la...@beagle-ears.com
> In article <3C88F729...@cisco.com>,
> J Ahlstrom <jahl...@cisco.com> wrote:
>
> >Gordon Bell has said that the hardest mistake
> >to deal with in a computer architecture is too few
> >memory address bits.
> <snip>
>
> I'd like to know when he finally realized this.
I'll guess somewhere around the VAX days.
>I hope that the mystery (possibly apocryphal) engineer has learned that you can
>never have too many address bits. Or at least that the marginal cost of adding a
>few bits is nothing compared to the marginal cost of redesigning from the ground
>up.
While I don't know of a computer built with too many address bits, it would
certainly be possible to build one with 1000 bits, which would be too many.
I think 64 bits is going to be adequate for quite a while.
I find it amusing that PC disk drives have run into addressing limits at
multiples of 4. The first hard drive limit was 8 MB, then 32 MB, then 128
MB, then 512 MB, then 2 GB, then 8 GB. I haven't heard of a 32 GB limit, it
seems the next limit is 128 GB. The limits were in different places: The
disk format, the BIOS, the hardware interface, the software interface. It
seemed as soon as we had a solution to one limit we were bumping into the
next one.
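Each limit in that list is four times the previous one, i.e. two more address bits per step; and the familiar "8 GB" ceiling corresponds to the classic BIOS CHS geometry of 1024 cylinders x 255 heads x 63 sectors. A quick check (the listed limits came from several different layers, so this geometry explains only the last one):

```python
limits_mb = [8, 32, 128, 512, 2048, 8192]
ratios = [b // a for a, b in zip(limits_mb, limits_mb[1:])]
print(ratios)                      # [4, 4, 4, 4, 4] -- two bits per step

# The classic INT 13h CHS ceiling: 1024 x 255 x 63 sectors of 512 bytes
chs_bytes = 1024 * 255 * 63 * 512
print(round(chs_bytes / 2**30, 2)) # about 7.84 GiB, marketed as "8 GB"
```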
I bought my 486 with a 120 MB disk drive. I eventually upgraded to a 512 MB
drive, the biggest the BIOS supported. By then I was on the Internet, and
chewing up a megabyte a day (and I don't download much). When I filled up
that drive, I wanted another 512 MB drive. After buying four (or was it
five) different used drives that didn't work, I went down to the store and
bought a brand new 2 GB drive, which I formatted as just 512 MB. I didn't
want to bother with a driver for breaking the 512 MB limit, because I was
going to replace the whole computer soon (which I did).
Which reminds me, where do I get a driver to break the 8 GB limit on my
Toshiba laptop running Win95?
--
RB |\ © Randall Bart
aa |/ ad...@RandallBart.spam.com Bart...@att.spam.net
nr |\ Please reply without spam 1-917-715-0831
dt ||\ Here I am: http://RandallBart.com/ I LOVE YOU
a |/ He Won't Get Far: http://www.callahanonline.com/calhat9.htm
l |\ DOT-HS-808-065 MSMSMSMSMSMSMS=6/28/107 Joel 3:9-10
l |/ Terrorism: http://www.markfiore.com/animation/adterror.html
>The A9 was next, this machine was the first A Series and the first
>hardware implementation to support emode Beta, the instruction and
>stackformat set required for ASD memory. When released the ASD MCP
>was not yet ready, so, it went out in ASN mode where the A9 CPU/IO
>could float from address space to address space. The ASD MCP would
>come later.
Some A9s were not ASD capable. Later A9s could run either alpha (ASN) or
beta (ASD) microcode, but not the ones shipped in the first several months.
The A9 did not support pseudo links. FIBs (and some other structures?) are
set up as pseudo activation records, so they can be addressed via IRWs. On
ASN machines, this was done by creating pseudo stacks to address all of RAM.
It took 16 pseudo stacks to make 1 Megaword addressable. The Tredyfferin
people realized that this would consume hundreds of stack numbers as RAM
sizes increased. They invented pseudo links to replace the pseudo stacks on
the A15 in e-mode beta. At the time, normal links had a 12-bit stack number
and a 16-bit offset. Pseudo links had a 20-bit ASD number and a 4-bit
(later 5-bit) offset. But even in e-mode beta, the A9 continued to use
pseudo stacks. I don't know whether the A10 used pseudo links or pseudo
stacks; it inherited a lot from the A9.
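The field widths quoted above can be made concrete with a pair of pack/unpack helpers. Only the widths come from the post; the packing order is my own invention.

```python
def normal_link(stack: int, offset: int) -> int:
    """Old link format: 12-bit stack number + 16-bit offset."""
    assert 0 <= stack < (1 << 12) and 0 <= offset < (1 << 16)
    return (stack << 16) | offset

def pseudo_link(asd: int, offset: int) -> int:
    """A15 e-mode beta pseudo link: 20-bit ASD number + 4-bit offset."""
    assert 0 <= asd < (1 << 20) and 0 <= offset < (1 << 4)
    return (asd << 4) | offset

# A 20-bit ASD number can name a million descriptors, while a 12-bit
# stack number caps out at 4096 -- the scaling problem described above.
print(1 << 20, 1 << 12)   # 1048576 4096
```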
Repeated from my original post, below:
> > > > the relatively rarely used long addresses. Which in turn
> > > > means you have to design the instruction set with this in
> > > > mind from the beginning. Then shorter instructions mean
> > > > better caching, smaller executables, higher speed, etc.
--
snip
> The BBN C-30 IMP had perhaps the most god-awful solution.
>
> They started with a mid-60s Honeywell 16-bit machine (I think) as
> the Arpanet IMP, then moved to a microcoded TTL replacement when it
> got EOL'ed, as it was cheaper than re-writing code. Eventually they
> bolted another 4 bits onto (almost) everything, giving them a 20-bit
> machine which could now address a million words of memory.
A minor point: the first IMPs were DDP-516s from Computer Control
Company and predated the Honeywell takeover of 3C.
snip
Anyway it's merely a sound bite, and I am sure that someone of that
competence never believed it.
It may be the hardest common problem for some meaning of "common",
but I simply don't believe that it even competes for the absolutely
hardest problem. For example:
A lot of designs have made implicit timing assumptions, which seem
quite safe at the time. As the systems are shrunk, with subsequent
increase in speeds and size of system, the timing assumptions start
to come up against the speed of light. Now, THAT is a hard one to
get round :-)
To see its relevance to current computers, consider the problem of
combining the need for 1,000 CPUs, GHz clock rates, 10 cycle access
to cache and fast (near-uniform) cache coherence. SGI's CTO said
publicly that they are already up against this - which you can
check easily enough using elementary physics!
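The elementary-physics check is indeed short (round numbers, and assuming signals travel at the speed of light, which real wires don't reach):

```python
C = 3.0e8          # speed of light, m/s
CLOCK_HZ = 1.0e9   # 1 GHz clock
CYCLES = 10        # the cache-access budget from the post

budget_s = CYCLES / CLOCK_HZ     # 10 ns
max_one_way_m = C * budget_s / 2 # signal must get there AND back
print(max_one_way_m)             # 1.5 m
```

So every CPU/cache pair must sit within roughly a 1.5 m radius, before counting gate and wire delays; packing 1,000 CPUs with near-uniform coherence into that volume is the hard part.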
Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679