
Hardest Mistake in Comp Arch to Fix


J Ahlstrom

Mar 8, 2002, 12:38:49 PM
Gordon Bell has said that the hardest mistake
to deal with in a computer architecture is too few
memory address bits.

There certainly have been lots of attempts to try to fix
and work around this problem.

Does anyone have documentation on the various
features and mechanisms used by
Univac for 1100 - 2200
Burroughs for A Series
IBM for 370 et seq
any other architectures/series that lasted long
enough to face the problem
to overcome their limitations?

For an article for Annals of Computer History
on this topic, I would like documentation
and engineering notes, names of people involved, ...
impact on operating systems, ...

Thank you

John K Ahlstrom

--
"C++ is more of a rube-goldberg type thing full of high-voltages,
large chain-driven gears, sharp edges, exploding widgets, and spots to
get your fingers crushed. And because of its complexity many (if not
most) of its users don't know how it works, and can't tell ahead of
time what's going to cause them to lose an arm." -- Grant Edwards


David W. Schroth

Mar 8, 2002, 1:18:31 PM
J Ahlstrom wrote:
>
> Gordon Bell has said that the hardest mistake
> to deal with in a computer architecture is too few
> memory address bits.
>
> There certainly have been lots of attempts to try to fix
> and work around this problem.
>
> Does anyone have documentation on the various
> features and mechanisms used by
> Univac for 1100 - 2200
> Burroughs for A Series
> IBM for 370 et seq
> any other architectures/series that lasted long
> enough to face the problem
> to overcome their limitations?
>

I suspect George Gray might have some of this information - my
recollection of addressing in the 1100/2200s that preceded me is a
little shaky. I'm not sure I can come up with documentation and
engineering notes, but I was (and am) involved in OS adapts for new
addressing enhancements from the 2200/900 onwards ...



> For an article for Annals of Computer History
> on this topic, I would like documentation
> and engineering notes, names of people involved, ...
> impact on operating systems, ...
>

Sounds like an interesting article. I guess I better keep my
subscription current until it appears.

Regards,

David W. Schroth

Stephen Fuld

Mar 8, 2002, 1:25:25 PM

"J Ahlstrom" <jahl...@cisco.com> wrote in message
news:3C88F729...@cisco.com...

> Gordon Bell has said that the hardest mistake
> to deal with in a computer architecture is too few
> memory address bits.
>
> There certainly have been lots of attempts to try to fix
> and work around this problem.
>
> Does anyone have documentation on the various
> features and mechanisms used by
> Univac for 1100 - 2200

There have been two mechanisms used for this: multi-banking in the 1970s,
and more recently, new addressing modes with additional base registers. I
can explain the first well (I went through it), though others would be
better on the second. You need some background on how addressing was done
on these machines to understand the issues, though.

> Burroughs for A Series

There has been a recent thread (in comp.sys.unisys) on just this topic.

> IBM for 370 et seq

Others are much better at this than I am, but see the terms XA and ESA.


> any other architectures/series that lasted long
> enough to face the problem
> to overcome their limitations?
>
> For an article for Annals of Computer History
> on this topic, I would like documentation
> and engineering notes, names of people involved, ...
> impact on operating systems, ...

I have some old manuals, but they don't address the topic as a "cure" for
running out of bits. They just explain how the "new" thing works.

As this is a long discussion, why don't you e-mail me privately.

--
- Stephen Fuld
e-mail address disguised to prevent spam


Pete Zaitcev

Mar 8, 2002, 2:27:19 PM
On Fri, 08 Mar 2002 09:38:49 -0800, J Ahlstrom <jahl...@cisco.com> wrote:
> Does anyone have documentation on the various
> features and mechanisms used by
> Univac for 1100 - 2200
> Burroughs for A Series
> IBM for 370 et seq
> any other architectures/series that lasted long
> enough to face the problem
> to overcome their limitations?

PDP-11 hit it pretty early in its life cycle. One is left
to wonder what DEC was thinking. You will find gobs of materials
about it.

BESM-6 used fixed-size 24-bit instructions. Something like 13 or 14
bits were available for addressing; fortunately, only 48-bit words
were addressed, so decent-sized data sets could still be accessed
into the 1970s. Eventually, a fixed-size segmentation similar to the
one used in the PDP-11 was introduced. The route of Intelish segment
registers was not taken. In the late 80s, a fully 64-bit successor
to BESM-6 was introduced under the name of Elbrus-1KB, which had
a compatibility mode for old 24/48-bit binaries. BESM-6 was
introduced in 1966 and retired in 1992 (or so). Novosibirsk
people ported UNIX v6 to it right before it was retired, using
Johnson's pcc and simulated byte addressing with 6-byte words.
EMT overlays could be used too :)

The French Mitra line was a 16-bit machine with an accumulator
architecture. The Mitra-15 (1975?) had 64KB of core memory. The next
(popular) model was the Mitra-225 (1981?) with segment registers,
offset by 4 bits, so 1MB of physical memory was addressable.
Unfortunately, only two usable bases existed, and code and data
had to match to get a manageable execution model, which produced
an equivalent of a PDP-11 without split I+D. However, multiuser
performance was greatly improved. A gentleman by the name of
Mark Venguerov rewrote K&R C in Mitra assembly and added a code
generator, and so we were able to port UNIX utilities (I tried to
port the whole of UNIX too, but graduated before I was able to
finish it :). We had a passable Usenet server on it, but could only
accept 12-bit compressed batches. Decompression of 16-bit compressed
batches required 402K of application data space, and I found
no good way to do it. Later, the French came up with the Mitra-725,
which had an actual MMU underlying the whole addressing scheme of
the 225. (Doesn't that feel similar to the 8086, 286, and 386?)
That MMU addressed up to 4MB of RAM, was paged, etc.
The page size was a characteristic 256 bytes. The first thing
I did for my stillborn UNIX port was to define a larger page of 1K
in software, including on the 225 variant. There really was no
good reason to maintain such a small page size, except compatibility
with the legacy French OS MTM-2. I/O went through the MMU too, which
was an annoying restriction. The 725 did not have time to take
over from the 225 before the whole line was wiped out by PCs.

The HP 3000 was extended way past its initial design, but I did
not work with it. I seem to recall that there was a two-stage
extension, the first of which was very much in the PDP-11 style. A
group of Russian maniacs ported UNIX v7 to it using Johnson's pcc
again; they also had trouble with Usenet feeds. This is all
I know about it :) The next extension supported paged memory,
but I do not remember its address size. Ask HP people.

The Norsk Data Nord 500 was a weird mini with something like 19
address bits, and segments. It was extended to support a power
of two pointers and paged memory, or something like that.
It was reasonably popular in Europe and the USSR, but fell completely
off the Internet horizon, and you'll never find any decent
materials about it. The whole thing, hardware and software,
was made in Norway! An accomplishment of similar magnitude
today would be if, say, the Swiss built a commercially successful
line of Linux workstations based on Clipper or Axil Antrax-100,
with their own Fortran compiler.

One box which was NOT extended, while I would have expected it to be,
was the DG Eclipse (wasn't it also called MV?). Probably it had
enough address bits when it was designed (it was a VAX
contemporary after all).

-- Pete

Bill Todd

Mar 8, 2002, 3:02:23 PM

"Pete Zaitcev" <zai...@yahoo.com> wrote in message
news:slrna8i44n....@devserv.devel.redhat.com...

> On Fri, 08 Mar 2002 09:38:49 -0800, J Ahlstrom <jahl...@cisco.com> wrote:
> > Does anyone have documentation on the various
> > features and mechanisms used by
> > Univac for 1100 - 2200
> > Burroughs for A Series
> > IBM for 370 et seq
> > any other architectures/series that lasted long
> > enough to face the problem
> > to overcome their limitations?
>
> PDP-11 hit it pretty early into its life cycle. One is left
> to wonder what DEC was thinking. You will find gobs of materials
> about it.

When the 11 was designed (design completed in 1969 IIRC), a 16-bit address
space *was* a major extension over the 12-bit address space that was selling
like hot-cakes on the PDP-8. Early 11 OSs ran fine with 8 KB of physical
memory. And of course if you needed *serious* computing rather than the
kind of point-of-use computing the 8 and 11 were designed for, they'd
happily sell you a 10 (which DG of course couldn't - so if you think *DEC*
was short-sighted back then, what was DG thinking when it designed the
Nova?).

It's true that the *physical* address space got quickly extended to 18 bits
and then to 22 bits as the 11's wild popularity moved it into far more
environments than DEC may have envisioned. But the memory management
hardware had no trouble handling this extension - and I believe the 'hardest
mistake' refers to virtual address space, not physical address space.

The 11 grew to support 4 MB of physical memory, and this was plenty until
long after the VAX existed to satisfy larger needs. But, even though a
single application could use most of this 4 MB, the need to 'overlay' it to
do so was a real pain - though overlaying was a significantly more
powerful/flexible mechanism than the segmentation that the 16-bit PC used a
dozen years after the 11 appeared, and as long as application developers
were able to put in the effort to use it, 11s competed very successfully with
VAXes in areas where more than 4 MB of physical memory wasn't useful (many
of which areas persisted well into the '90s).

- bill

Del Cecchi

Mar 8, 2002, 2:48:41 PM
In article <pm7i8.22245$106.1...@bgtnsc05-news.ops.worldnet.att.net>,

If it were me doing the research, I would order the "principles of arch." manuals
for 360, 370, XA, and whatever they call the 64-bit version.

I would then find some back issues of the IBM J. R&D at the library. And lastly
I would take a look at Lynn Wheeler's web page (he posts here regularly). I have
noticed a few other old beemers posting occasionally, like Julian Thomas JTdidit.
--

Del Cecchi
cec...@us.ibm.com
Personal Opinions Only

Joe Pfeiffer

Mar 8, 2002, 3:51:34 PM
zai...@yahoo.com (Pete Zaitcev) writes:

> On Fri, 08 Mar 2002 09:38:49 -0800, J Ahlstrom <jahl...@cisco.com> wrote:
> > Does anyone have documentation on the various
> > features and mechanisms used by
> > Univac for 1100 - 2200
> > Burroughs for A Series
> > IBM for 370 et seq
> > any other architectures/series that lasted long
> > enough to face the problem
> > to overcome their limitations?
>
> PDP-11 hit it pretty early into its life cycle. One is left
> to wonder what DEC was thinking. You will find gobs of materials
> about it.

I seem to remember reading something from when they were working on
the VAX that implied DEC was wondering what they'd been thinking,
too...
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
Southwestern NM Regional Science and Engr Fair: http://www.nmsu.edu/~scifair

Mirian Crzig Lennox

Mar 8, 2002, 8:17:03 PM
On 8 Mar 2002 19:27:19 GMT, Pete Zaitcev <zai...@yahoo.com> wrote:
>
>One box which was NOT extended, while I would have expected it to be,
>was the DG Eclipse (wasn't it also called MV?). Probably it had
>enough address bits when it was designed (it was a VAX
>contemporary after all).

My memory may be failing here, but I thought that the Eagle was
essentially the Eclipse extended. Wasn't that what Tracy Kidder's "Soul
of a New Machine" was about?

--Mirian

Peter Desnoyers

Mar 8, 2002, 10:28:20 PM
On Fri, 08 Mar 2002 09:38:49 -0800, J Ahlstrom <jahl...@cisco.com> wrote:
>Gordon Bell has said that the hardest mistake
>to deal with in a computer architecture is too few
>memory address bits.
> [...]

> any other architectures/series that lasted long
> enough to face the problem
>to overcome their limitations?

The BBN C-30 IMP had perhaps the most god-awful solution.

They started with a mid-60s Honeywell 16-bit machine (I think) as
the Arpanet IMP, then moved to a microcoded TTL replacement when it
got EOL'ed, as it was cheaper than re-writing code. Eventually they
bolted another 4 bits onto (almost) everything, giving them a 20-bit
machine which could now address a million words of memory.

Unfortunately they had an instruction encoding which only allowed
expression of 9 bits of immediate address or data, so you had to
reserve the bottom of every 512-word page for address and data
constants.

They also had a machine called the C-70, which was the same hardware
with microcode that made it look sort of like a 20-bit PDP-11, with
10-bit bytes. It ran some ancient variety of Unix, and was a bitch to
port code to.

--
.....................................................................
Peter Desnoyers (781) 457-1165 pdesn...@chinook.com
Chinook Communications (617) 661-1979 p...@fred.cambridge.ma.us
100 Hayden Ave, Lexington MA 02421

Charles Richmond

Mar 8, 2002, 10:43:15 PM
Pete Zaitcev wrote:
>
> [snip...] [snip...] [snip...]

>
> One box which was NOT extended, while I would have expected it to be,
> was the DG Eclipse (wasn't it also called MV?). Probably it had
> enough address bits when it was designed (it was a VAX
> contemporary after all).
>
Did you ever read _The Soul of a New Machine_??? The MV-8000
was designed as basically a 32-bit Eclipse. It had a high degree
of Eclipse binary compatibility. I was acquainted with a company
that only disposed of their *two* MV-20000s five or so years
ago. (The MV-20000 is an MV-8000 grown up...) All of these
machines were from DG (RIP).

--
+-------------------------------------------------------------+
| Charles and Francis Richmond <rich...@plano.net> |
+-------------------------------------------------------------+

John R Levine

Mar 9, 2002, 2:22:56 AM
>> PDP-11 hit it pretty early into its life cycle. One is left
>> to wonder what DEC was thinking. You will find gobs of materials
>> about it.
>
>When the 11 was designed (design completed in 1969 IIRC), a 16-bit address
>space *was* a major extension over the 12-bit address space that was selling
>like hot-cakes on the PDP-8.

Not really. You could put up to 32K 12-bit words on a PDP-8, using a
bank switching scheme that sort of foreshadowed x86 segments. The
original 16-bit PDP-11 only offered half a bit more than that: 64K
8-bit bytes vs. 32K 12-bit words.

Gordon Bell was a strong advocate of lashing together lots of
computers to do large jobs, so I imagine that 64K bytes seemed plenty
for what would be one of many computers in a larger system. Since
then we've learned more about how hard it is to partition programs to
use lots of little computers, and how hard it is to program anything
that doesn't look like one CPU executing a single stream of
instructions.

> Early 11 OSs ran fine with 8 KB of physical memory.

Well, yeah, so did CP/M which was similarly capable.

> if you think *DEC* was short-sighted back then, what was DG thinking
> when it designed the Nova?).

Ed DeCastro designed the PDP-8, and designed a 16-bit follow-on that
DEC declined to build, so he quit and started DG to build it. At the
time, the competition between the Nova and the PDP-11 was based more
on price and OEM agreements than on architecture.


--
John R. Levine, IECC, POB 727, Trumansburg NY 14886 +1 607 387 6869
jo...@iecc.com, Village Trustee and Sewer Commissioner, http://iecc.com/johnl,
Member, Provisional board, Coalition Against Unsolicited Commercial E-mail

Bill Todd

Mar 9, 2002, 2:44:27 AM

"John R Levine" <jo...@iecc.com> wrote in message
news:a6cd8g$12e$1...@xuxa.iecc.com...

> >> PDP-11 hit it pretty early into its life cycle. One is left
> >> to wonder what DEC was thinking. You will find gobs of materials
> >> about it.
> >
> >When the 11 was designed (design completed in 1969 IIRC), a 16-bit address
> >space *was* a major extension over the 12-bit address space that was selling
> >like hot-cakes on the PDP-8.
>
> Not really.

Yes, really: you're confusing physical address space with virtual address
space, and the latter is the subject referred to in this topic. Extending
the 11's initially-limited *physical* address space was hardly the 'hardest
mistake in comp arch to fix'.

> You could put up to 32K 12-bit words on a PDP-8, using a
> bank switching scheme that sort of foreshadowed x86 segments. The
> original 16-bit PDP-11 only offered half a bit more than that: 64K
> 8-bit bytes vs. 32K 12-bit words.

...

> > Early 11 OSs ran fine with 8 KB of physical memory.
>
> Well, yeah, so did CP/M which was similarly capable.

From 30,000 feet, perhaps. But at any finer level of detail I think you'd
find significant differences between RSX-11M and CP/M.

>
> > if you think *DEC* was short-sighted back then, what was DG thinking
> > when it designed the Nova?).
>
> Ed DeCastro designed the PDP-8, and designed a 16-bit follow-on that
> DEC declined to build, so he quit and started DG to build it. At the
> time, the competition between the Nova and the PDP-11 was based more
> on price and OEM agreements than on architecture.

My comment referred to starting a company whose *only* offering was limited
to a 16-bit address space (as contrasted with DEC's ability to offer a 10 to
people who needed something more).

- bill

Carl Nelson

Mar 9, 2002, 4:26:32 AM

Joe Pfeiffer wrote:

I seem to recall having a conversation with one of the VMS implementors at a
DECUS meeting. He said that there had been an engineer who had argued that
extending the (virtual) address space from 16 bits to 24 bits, instead of the
proposed 32 bits, would be more than adequate. He said he didn't remember
the engineer's name, as the fellow only attended a few meetings, then quietly
disappeared. Wonder why?

Also, I remember the original VAX-11/780s originally had 1MB of memory, with the
option of expanding to a whopping 2MB! It sounds tiny now, but we did support
over 200 concurrent interactive logins running a fairly large application. We
now have 448MB of memory supporting about 400 concurrent users, but a whole bunch
of ancillary software as well.

I hope that the mystery (possibly apocryphal) engineer has learned that you can
never have too many address bits. Or at least that the marginal cost of adding a
few bits is nothing compared to the marginal cost of redesigning from the ground
up.

--Carl

Tarjei T. Jensen

Mar 9, 2002, 4:20:39 AM

Pete Zaitcev wrote:
> The Norsk Data Nord 500 was a weird mini with something like 19
> address bits, and segments. It was extended to support a power
> of two pointers and paged memory, or something like that.
> It was reasonably popular in Europe and the USSR, but fell completely
> off the Internet horizon, and you'll never find any decent
> materials about it. The whole thing, hardware and software,
> was made in Norway! An accomplishment of similar magnitude
> today would be if, say, the Swiss built a commercially successful
> line of Linux workstations based on Clipper or Axil Antrax-100,
> with their own Fortran compiler.

It was a pretty hot machine at the time. The main problem was that it was
attached to an ND-100. The ND-100 did all the I/O and quickly became a
bottleneck. Big mistake.

By the time the 500 came out, all the competent people at Norsk Data had
been promoted to management, so the people who did this design were rather
inexperienced. Rumour has it that a company that got one of the 500 PCB cards
for testing wanted to retain one and frame it as a glorious example of how
not to do it.

The ND-100 was the Norwegian equivalent of the PDP-11, but faster. That is
why Norwegians have little experience with PDPs and early VAXen.

I think Norsk Data sold both the ND-100 and the ND-500 series as number
crunchers to CERN.

greetings,

Bill Todd

Mar 9, 2002, 4:35:04 AM

"Carl Nelson" <carl....@mcmail.maricopa.edu> wrote in message
news:3C89D559...@mcmail.maricopa.edu...

...

> Also, I remember the original VAX-11/780s originally had 1MB of memory, with the
> option of expanding to a whopping 2MB!

My recollection is that the minimum supported VMS V1 configuration was 256
KB of memory and dual RK06 disks. Not that this gave you a system you could
run more than one application at a time on...

- bill

jmfb...@aol.com

Mar 9, 2002, 5:22:08 AM
In article <3C89D559...@mcmail.maricopa.edu>,
Carl Nelson <carl....@mcmail.maricopa.edu> wrote:
<snip>

>I hope that the mystery (possibly apocryphal) engineer
>has learned that you can never have too many address bits.

That was the problem...the engineers suffering from small
computer thinking never did learn.

<snip>

/BAH

Subtract a hundred and four for e-mail.

jmfb...@aol.com

Mar 9, 2002, 5:23:02 AM
In article <3C88F729...@cisco.com>,
J Ahlstrom <jahl...@cisco.com> wrote:

>Gordon Bell has said that the hardest mistake
>to deal with in a computer architecture is too few
>memory address bits.

<snip>

I'd like to know when he finally realized this.

CBFalconer

Mar 9, 2002, 10:54:23 AM
Carl Nelson wrote:
>
... snip ...

>
> I hope that the mystery (possibly apocryphal) engineer has learned
> that you can never have too many address bits. Or at least that
> the marginal cost of adding a few bits is nothing compared to the
> marginal cost of redesigning from the ground up.

The problem with that is that universal allowance for wide
addressing bloats every instruction. At least in local frame
oriented code, most addresses (90% or more) can be expressed in 7
bits + sign (where sign allows for parameter/local addressing).
So it is advisable to have multiple addressing modes to cater for
the relatively rarely used long addresses. Which in turn means
you have to design the instruction set with this in mind from the
beginning. Then shorter instructions mean better caching, smaller
executables, higher speed, etc.

Obviously this doesn't work too well when backward compatibility
is the prime goal.

--
Chuck F (cbfal...@yahoo.com) (cbfal...@XXXXworldnet.att.net)
Available for consulting/temporary embedded and systems.
(Remove "XXXX" from reply address. yahoo works unmodified)
mailto:u...@ftc.gov (for spambots to harvest)

Mike

Mar 9, 2002, 11:37:09 AM
I suggest looking at the January 1978 Communications of the ACM (Volume 21,
Number 1). It was a "Special Issue on Computer Architecture". It provides
a very good description of the early 1100 Series addressing architecture
(1107 through 1100/80).

If you have an ACM web account, it is available in the ACM Digital Library.
The link to the article "The Evolution of the Sperry Univac 1100 Series: A
History, Analysis, and Projection" by B.R. Borgerson, M.L. Hanson, and P.A.
Hartley is:
http://doi.acm.org/10.1145/359327.359334

Without an ACM web account you can still access the abstract.

The special issue also has articles on the DECsystem 10, IBM 370, Cray, and
others. It is one issue that I have held on to over the years.

Regards,
Mike

<email address mangled to prevent spam>


Sander Vesik

Mar 9, 2002, 11:45:16 AM
In comp.arch CBFalconer <cbfal...@yahoo.com> wrote:
> Carl Nelson wrote:
>>
> ... snip ...
>>
>> I hope that the mystery (possibly apocryphal) engineer has learned
>> that you can never have too many address bits. Or at least that
>> the marginal cost of adding a few bits is nothing compared to the
>> marginal cost of redesigning from the ground up.
>
> The problem with that is that universal allowance for wide
> addressing bloats every instruction. At least in local frame
> oriented code, most addresses (90% or more) can be expressed in 7
> bits + sign (where sign allows for parameter/local addressing).
> So it is advisable to have multiple addressing modes to cater for
> the relatively rarely used long addresses. Which in turn means
> you have to design the instruction set with this in mind from the
> beginning. Then shorter instructions mean better caching, smaller
> executables, higher speed, etc.

Which doesn't of course apply at all if the address is not part of
the instruction...

--
Sander

+++ Out of cheese error +++

CBFalconer

Mar 9, 2002, 12:29:10 PM

Sooner or later it is in some form. Which might be load constant
to register, or to top-of-stack, or index to descriptor table, or
whatever.

Stephen Fuld

Mar 9, 2002, 12:40:39 PM

"CBFalconer" <cbfal...@yahoo.com> wrote in message
news:3C8A4118...@yahoo.com...

> Sander Vesik wrote:
> >
> > In comp.arch CBFalconer <cbfal...@yahoo.com> wrote:
> > > Carl Nelson wrote:
> > >>
> > > ... snip ...
> > >>
> > >> I hope that the mystery (possibly apocryphal) engineer has learned
> > >> that you can never have too many address bits. Or at least that
> > >> the marginal cost of adding a few bits is nothing compared to the
> > >> marginal cost of redesigning from the ground up.
> > >
> > > The problem with that is that universal allowance for wide
> > > addressing bloats every instruction. At least in local frame
> > > oriented code, most addresses (90% or more) can be expressed in 7
> > > bits + sign (where sign allows for parameter/local addressing).
> > > So it is advisable to have multiple addressing modes to cater for
> > > the relatively rarely used long addresses. Which in turn means
> > > you have to design the instruction set with this in mind from the
> > > beginning. Then shorter instructions mean better caching, smaller
> > > executables, higher speed, etc.
> >
> > Which doesn't of course apply at all if the address is not part of
> > the instruction...
>
> Sooner or later it is in some form. Which might be load constant
> to register, or to top-of-stack, or index to descriptor table, or
> whatever.

But this is easily taken care of. You have multiple instructions: one that
loads the low-order X bits from the instruction into a register, another
that loads the next X bits of the register from the instruction, etc. This
even elegantly gives the property that smaller programs run faster (no
need for the instructions for the high-order bits), and you don't need to
expand the instruction size to hold a large constant. Note: I am not
claiming this is a new technique. It has been in use for decades.
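
A minimal sketch of the idea in C, modeled on a MIPS-style lui/ori
pair (the helper names are made up, not any particular machine's):

    #include <stdint.h>

    /* "Load upper immediate": put 16 bits into the high half. */
    static uint32_t lui(uint16_t imm) { return (uint32_t)imm << 16; }

    /* "OR immediate": merge 16 bits into the low half. */
    static uint32_t ori(uint32_t reg, uint16_t imm) { return reg | imm; }

    /* A full 32-bit constant costs two instructions; a constant that
       fits in 16 bits needs only the second, so small programs stay
       small without widening the instruction format. */
    static uint32_t build_const(uint32_t c) {
        return ori(lui((uint16_t)(c >> 16)), (uint16_t)(c & 0xFFFF));
    }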

Lars Poulsen

Mar 9, 2002, 4:38:08 PM
Pete Zaitcev wrote:
>>The Norsk Data Nord 500 was a weird mini with something like 19
>>address bits, and segments. It was extended to support a power
>>of two pointers and paged memory, or something like that.


Tarjei T. Jensen wrote:
> It was a pretty hot machine at the time. The main problem was that it was
> attached to an ND-100. The ND-100 did all the I/O and quickly became a
> bottleneck. Big mistake.

<snip>


> The ND-100 was the Norwegian equivalent of the PDP-11, but faster. That is
> why Norwegians have little experience with PDPs and early VAXen.
>
> I think Norsk Data sold both the ND-100 and the ND-500 series as number
> crunchers to CERN.

Actually, I think CERN relied on the CDC-6600 and CDC-7600 for number
crunching. The ND-100 machines would be much better used
1) As real-time data collection controllers (driving CAMAC instrument
crates) and
2) As interactive terminal clusters for editing jobs to be submitted to
the CDC's and for palying around with data analysis.

In my second computer job, I joined the company just as we were
selling an instrumentation package (some fairly simple A/D conversion
peripheral) for an ND-10 which was going into the Danish Institute for
Shipbuilding Research. My job was to write and integrate a device
driver. So I was shipped to Oslo, with instructions not to come back
until I knew enough about the SINTRAN operating system to do the rest
on my own.

The operating system was written in a language similar to PL/360 or
BCPL. It was a demand paged system with a flexible, extensible command
language, and designed for a mixed interactive and real-time workload.
The file system had protection mechanisms of essentially the same
power as Unix, but the group concept was replaced with the concept
of "friends", which made it much more flexible. The file name
completion feature was also more flexible and elegant. Hyphens in
filenames were special: if the file (which lived in a directory
of executable programs) was named "list-files-alphabetically", then
"l-fi", "list-f" and "lfa" might all be valid abbreviations.

As it happened, the system had lots of stability problems.
This was in 1975, and CMOS memory was quite new. If I remember
correctly, Norsk Data had chosen to use SRAM because they
believed it would be more stable than DRAM. The system failed in
interesting ways, but failed much less if it was running
memory diagnostics in the background. In the end, it was determined
that the memory chips worked quite well, UNLESS a memory word sat
with an unchanged value for a very long time (such as several days),
in which case it would develop an affinity for that value, and when a
different value was written into it, it might after several hours revert
to the prior value. The problem was that this pattern of access
applied to some very critical operating system tables ... such
as the disk bitmaps. Once the problem was understood, the fix was
to replace the memory with DRAM, which by then had become the
industry standard.

I liked the Nord-10, and its successor, the ND-100. They were PDP-11
class machines, but the operating system was much more elegant than
anything I ever saw on the PDP-11. I am sure this was a direct
consequence of working with a small team, and not having resources to do
more than one system, which had to serve for all applications.
--
/ Lars Poulsen +1-805-569-5277 http://www.beagle-ears.com/lars/
125 South Ontare Rd, Santa Barbara, CA 93105 USA la...@beagle-ears.com

Joe Pfeiffer

Mar 9, 2002, 5:09:52 PM
jmfb...@aol.com writes:

> In article <3C88F729...@cisco.com>,
> J Ahlstrom <jahl...@cisco.com> wrote:
>
> >Gordon Bell has said that the hardest mistake
> >to deal with in a computer architecture is too few
> >memory address bits.
> <snip>
>
> I'd like to know when he finally realized this.

I'll guess somewhere around the VAX days.

Randall Bart

Mar 9, 2002, 7:11:02 PM
'Twas Sat, 09 Mar 2002 09:26:32 GMT when all comp.sys.unisys stood in awe as
Carl Nelson <carl....@mcmail.maricopa.edu> uttered:

>I hope that the mystery (possibly apocryphal) engineer has learned that you can
>never have too many address bits. Or at least that the marginal cost of adding a
>few bits is nothing compared to the marginal cost of redesigning from the ground
>up.

While I don't know of a computer built with too many address bits, it would
certainly be possible to build one with 1000 bits, which would be too many.
I think 64 bits is going to be adequate for quite a while.

I find it amusing that PC disk drives have run into addressing limits at
multiples of 4. The first hard drive limit was 8 MB, then 32 MB, then 128
MB, then 512 MB, then 2 GB, then 8 GB. I haven't heard of a 32 GB limit, it
seems the next limit is 128 GB. The limits were in different places: The
disk format, the BIOS, the hardware interface, the software interface. It
seemed as soon as we had a solution to one limit we were bumping into the
next one.
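
(Two of those limits fall straight out of the addressing fields,
assuming the usual 512-byte sectors:

    BIOS INT 13h CHS:  1024 cyls x 256 heads x 63 sectors x 512 bytes
                       = 8,455,716,864 bytes  -- the ~8 GB limit
    28-bit ATA LBA:    2^28 sectors x 2^9 bytes/sector = 2^37 bytes
                       = 128 GiB  -- the coming 128 GB limit
)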

I bought my 486 with a 120 MB disk drive. I eventually upgraded to a 512 MB
drive, the biggest the BIOS supported. By then I was on the Internet, and
chewing up a megabyte a day (and I don't download much). When I filled up
that drive, I wanted another 512 MB drive. After buying four (or was it
five) different used drives that didn't work, I went down to the store and
bought a brand new 2 GB drive, which I formatted as just 512 MB. I didn't
want to bother with a driver for breaking the 512 MB limit, because I was
going to replace the whole computer soon (which I did).

Which reminds me, where do I get a driver to break the 8 GB limit on my
Toshiba laptop running Win95?
--
RB |\ © Randall Bart
aa |/ ad...@RandallBart.spam.com Bart...@att.spam.net
nr |\ Please reply without spam 1-917-715-0831
dt ||\ Here I am: http://RandallBart.com/ I LOVE YOU
a |/ He Won't Get Far: http://www.callahanonline.com/calhat9.htm
l |\ DOT-HS-808-065 MSMSMSMSMSMSMS=6/28/107 Joel 3:9-10
l |/ Terrorism: http://www.markfiore.com/animation/adterror.html

CBFalconer

Mar 9, 2002, 8:37:59 PM

Repeated from my original post, below:

> > > > the relatively rarely used long addresses. Which in turn
> > > > means you have to design the instruction set with this in
> > > > mind from the beginning. Then shorter instructions mean
> > > > better caching, smaller executables, higher speed, etc.

--

Cal Gardner

Mar 10, 2002, 12:09:18 AM
pdesn...@butternut.chinook.com (Peter Desnoyers) wrote in message news:<slrna8j0p5.1...@butternut.chinook.com>...

snip


> The BBN C-30 IMP had perhaps the most god-awful solution.
>
> They started with a mid-60s Honeywell 16-bit machine (I think) as
> the Arpanet IMP, then moved to a microcoded TTL replacement when it
> got EOL'ed, as it was cheaper than re-writing code. Eventually they
> bolted another 4 bits onto (almost) everything, giving them a 20-bit
> machine which could now address a million words of memory.

A minor point: the first IMPs were DDP-516s from Computer Control
Company and predated the Honeywell takeover of 3C.

snip

Charles Richmond

Mar 10, 2002, 12:40:12 AM
Carl Nelson wrote:
>
> [snip...] [snip...] [snip...]

>
> I hope that the mystery (possibly apocryphal) engineer has learned that you can
> never have too many address bits. Or at least that the marginal cost of adding a
> few bits is nothing compared to the marginal cost of redesigning from the ground
> up.
>
You can *never* have too many address bits, cpu registers,
disk space, etc. But there has to be a physical limit on
such things. Wherever you drive the stake into the ground,
eventually the progress of technology will render the
decision too conservative. And "eventually" seems to
be occurring sooner and sooner these days...

Nick Maclaren

Mar 10, 2002, 5:13:21 AM
In article <1b1yetx...@cs.nmsu.edu>,

Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:
>jmfb...@aol.com writes:
>
>> In article <3C88F729...@cisco.com>,
>> J Ahlstrom <jahl...@cisco.com> wrote:
>>
>> >Gordon Bell has said that the hardest mistake
>> >to deal with in a computer architecture is too few
>> >memory address bits.
>>
>> I'd like to know when he finally realized this.
>
>I'll guess somewhere around the VAX days.

Anyway it's merely a sound bite, and I am sure that someone of that
competence never believed it.

It may be the hardest common problem for some meaning of "common",
but I simply don't believe that it even competes for the absolutely
hardest problem. For example:

A lot of designs have made implicit timing assumptions, which seem
quite safe at the time. As the systems are shrunk, with subsequent
increase in speeds and size of system, the timing assumptions start
to come up against the speed of light. Now, THAT is a hard one to
get round :-)

To see its relevance to current computers, consider the problem of
combining the need for 1,000 CPUs, GHz clock rates, 10 cycle access
to cache and fast (near-uniform) cache coherence. SGI's CTO said
publicly that they are already up against this - which you can
check easily enough using elementary physics!


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679

jmfb...@aol.com

Mar 10, 2002, 5:00:12 AM
In article <1b1yetx...@cs.nmsu.edu>,
Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:
>jmfb...@aol.com writes:
>
>> In article <3C88F729...@cisco.com>,
>> J Ahlstrom <jahl...@cisco.com> wrote:
>>
>> >Gordon Bell has said that the hardest mistake
>> >to deal with in a computer architecture is too few
>> >memory address bits.
>> <snip>
>>
>> I'd like to know when he finally realized this.
>
>I'll guess somewhere around the VAX days.

After he worked on the PDP-6. Sounds like he revised his history.

jmfb...@aol.com

Mar 10, 2002, 5:06:27 AM
In article <3C8B0DD7...@ev1.net>,

Charles Richmond <rich...@ev1.net> wrote:
>Carl Nelson wrote:
>>
>> [snip...] [snip...] [snip...]
>>
>> I hope that the mystery (possibly apocryphal) engineer has learned that you can
>> never have too many address bits. Or at least that the marginal cost of adding a
>> few bits is nothing compared to the marginal cost of redesigning from the ground
>> up.
>>
>You can *never* have too many address bits, cpu registers,
>disk space, etc.

Yes, you can. Unlimited resources promote waste.

> .. But there has to be a physical limit on
>such things. Wherever you drive the stake into the ground,
>eventually the progress of technology will render the
>decision too conservative. And "eventually" seems to
>be occurring sooner and sooner these days...

The computing biz has been terribly one-sided for the last decade.
We're producing hardware to compensate for programming sloppiness.
In fact, the hardware types depend on sloppiness. I suppose it's
an aspect of the hardware biz getting divorced from the software
biz.

jmfb...@aol.com

Mar 10, 2002, 6:07:08 AM
In article <3C8A80C0...@beagle-ears.com>,

<gawd> That had to have taken some clever debugging, guessing,
and throwing sand in the wind.

>
>I liked the Nord-10, and its successor, the ND-100. They were PDP-11
>class machines, but the operating system was much more elegant than
>anything I ever saw on the PDP-11. I am sure this was a direct
>consequence of working with a small team, and not having resources to do
>more than one system, which had to serve for all applications.

The team also couldn't have had delusions of grandeur. They took
what was hanging on the system and used it. IOW, they managed
to keep NIH syndrome to a minimum.

Bill Marcum

Mar 10, 2002, 6:12:35 PM

jmfb...@aol.com wrote in message ...

>In article <3C88F729...@cisco.com>,
> J Ahlstrom <jahl...@cisco.com> wrote:
>
>>Gordon Bell has said that the hardest mistake
>>to deal with in a computer architecture is too few
>>memory address bits.
><snip>
>
>I'd like to know when he finally realized this.
>
Is he still alive? I imagine some guy on his deathbed saying "Address
bits!" instead of "Rosebud!" :)


Bill Marcum

Mar 10, 2002, 6:25:06 PM

Randall Bart wrote in message
<05vk8ushgm9mjo95o...@4ax.com>...

>Which reminds me, where do I get a driver to break the 8 GB limit on my
>Toshiba laptop running Win95?


I think what you need is the OSR2 version of Win95, or Win98, or Linux.

Martin Heller

Mar 10, 2002, 8:08:32 PM
Bill Marcum wrote:

He's still alive and working for Microsoft:

http://research.microsoft.com/users/GBell/

Cheers, M. H.

John R Levine

Mar 11, 2002, 12:59:26 AM
>> >When the 11 was designed (design completed in 1969 IIRC), a 16-bit address
>> >space *was* a major extension over the 12-bit address space that was selling
>> >like hot-cakes on the PDP-8.
>>
>> Not really.
>
>Yes, really: you're confusing physical address space with virtual address
>space

No, really, I'm not. A PDP-8 only had one address space. (Well,
unless you had the TSS-8 modifications which, while cool, weren't used
much.)

>> You could put up to 32K 12-bit words on a PDP-8, using a
>> bank switching scheme that sort of foreshadowed x86 segments. The
>> original 16-bit PDP-11 only offered half a bit more than that: 64K
>> 8-bit bytes vs. 32K 12-bit words.

User programs did their own bank switching on the -8, somewhat like
programs on the 8086 (still popular in embedded applications) today.
If you had 32K of expensive core, one program could and did use it
all.

>> > Early 11 OSs ran fine with 8 KB of physical memory.
>>
>> Well, yeah, so did CP/M which was similarly capable.
>
>From 30,000 feet, perhaps. But at any finer level of detail I think you'd
>find significant differences between RSX-11M and CP/M.

Well, OK, iRMX whatever. You want realtime on your 8085, the tools are
there for it.

Anyway, the PDP-11 was an important move for DEC in that it aligned
them with the 8-bit byte addressable world hitherto dominated by IBM,
but even at the time 16 bits wasn't going to give much headroom.

Tarjei T. Jensen

Mar 11, 2002, 3:54:13 AM

jmfb...@aol.com wrote :

> The team also couldn't have had delusions of grandeur. They took
> what was hanging on the system and used it. IOW, they managed
> to keep NIH syndrome to a minimum.
>

The delusions of grandeur came later, with the 500 series.

That is when they started to behave strangely.

BTW, the Nord 100 was supposed to be a cheaper and slower version of the Nord
10. Somehow it ended up being significantly faster.

The Nord 5 is probably where the future of Norsk Data lay, only they didn't
know it at the time. It was a number cruncher used to generate weather
forecasts. One wonders what could have been if they had had the sense to put
Unix on it and its successors. But, Unix was not invented here......

greetings,


Bernd Paysan

Mar 11, 2002, 5:19:15 AM
Randall Bart wrote:
> I haven't heard of a 32 GB limit

I have ;-).

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/

Jan C. Vorbrüggen

Mar 11, 2002, 6:05:38 AM
> The problem with that is that universal allowance for wide
> addressing bloats every instruction. At least in local frame
> oriented code, most addresses (90% or more) can be expressed in 7
> bits + sign (where sign allows for parameter/local addressing).
> So it is advisable to have multiple addressing modes to cater for
> the relatively rarely used long addresses. Which in turn means
> you have to design the instruction set with this in mind from the
> beginning. Then shorter instructions mean better caching, smaller
> executables, higher speed, etc.

That's how the transputer ISA does it...a nibble at a time. In principle, you
can run the same code on a 16-bit and a 32-bit machine.
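
A minimal sketch of that scheme in C -- I'm writing the prefix
opcodes from memory, so treat the exact values as illustrative:

    #include <stdint.h>

    enum { PFIX = 0x2, NFIX = 0x6 };      /* prefix opcodes */

    /* Feed one instruction byte; the operand register accumulates a
       nibble per byte. Returns nonzero when a non-prefix instruction
       completes, with its full operand in *oreg (two's complement for
       negative operands). The caller clears *oreg after executing. */
    static int step(uint8_t byte, uint32_t *oreg, int *opcode) {
        *oreg |= byte & 0x0F;             /* low nibble joins operand */
        switch (byte >> 4) {
        case PFIX: *oreg <<= 4;           return 0;  /* extend up    */
        case NFIX: *oreg = ~*oreg << 4;   return 0;  /* extend down  */
        default:   *opcode = byte >> 4;   return 1;  /* operand done */
        }
    }

A short (local-frame) operand costs one byte; a rare 32-bit operand
just takes more prefix bytes in front of the same instruction, which
is exactly the "design it in from the beginning" point above.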

Jan

Terje Mathisen

Mar 10, 2002, 10:23:45 PM
"Tarjei T. Jensen" wrote:
> The ND-100 was the Norwegian equivalent of the PDP-11, but faster. That is
> why Norwegians have little experience with PDPs and early VAXen.
>
> I think Norsk Data sold both the ND-100 and the ND-500 series as number
> crunchers to CERN.

I thought they got into CERN as process/lab control machines, using the
ND-10?

The main problem, as I've mentioned before here, was that the context
switch system was perfect for a fixed set of processes, maximum 16,
running at fixed priority levels.

This was because it had 16 sets of registers, so on taking an interrupt
it simply switched a register block pointer and ran with it.

Selling the same machines for general multi-user work was not nearly as
good, since that meant a fairly expensive full context switch between
many processes running at the same basic priority.

Terje

PS. It really didn't help that it had to take a full interrupt for each
character sent or received over the terminal ports. :-(
--
- <Terje.M...@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"


Terje Mathisen

Mar 10, 2002, 10:26:55 PM
Randall Bart wrote:
> Which reminds me, where do I get a driver to break the 8 GB limit on my
> Toshiba laptop running Win95?

I believe that driver/bugfix is called Win98 Second Edition.

Terje

Ron Smith

Mar 11, 2002, 9:46:08 AM
For real details, I suggest you check university or technical libraries that
may still have Programmer's Reference Manuals and Processor and Storage
manuals for the 1108, 1110, and 1100/90. As they are obsolete, they are no
longer orderable from the Unisys Bookstore.

Generally:

The 1108 with its operating System "Exec 8" was the first third-generation
computer of the 1100 Series line so let's start there. The current
Operating System "OS 2200" still contains some of the original code and will
run many of the original programs.

The 1108 had an 18-bit virtual address space that was handled in two parts.
One part was called the I-bank (Instruction bank) and could be at most 64K
36-bit words in length. The other part was called the D-bank (Data bank)
and could use all of the high-order addresses above whatever the length of
the I-bank was. There were two techniques provided to handle larger
programs. One was Overlay Segments which were common on most systems of the
time. A linker/loader actually loaded code and data over the top of part of
the segments at run time. The other was a mechanism called "Re-entrant
banks" and invoked by an ER LINK$ (ER being the OS call instruction). ER
LINK$ switched the Instruction Bank base register to point to another "Bank"
in memory. It was intended to permit the direct sharing of code segments of
compilers, database managers, and even large user programs.

With the advent of the 1110 computer in 1972, the OS name was changed to "OS
1100." The 1110 made the ER LINK$ approach architectural creating new
instructions and a Bank Descriptor table. The approach is remarkably
similar to the one later to be adopted by Intel on the 386. The virtual
address space was effectively increased to 29 bits. There were two
1024-entry bank descriptor tables. One was a shared table that contained
system-global shared banks (segments) like the compilers and database
managers. The other was unique to each user program and allowed much larger
programs to be constructed. Both the shared and user banks could be either
instructions or data. Up to four of the banks were visible to instructions
at any time.

The next stage really occurred with the 1100/90 computer which added
"Extended Mode." There was a partial implementation about a year earlier on
the System-11 which was a small scale 1100 system. This was the truly large
change that really justifies the quote. Many user applications and portions
of the operating system are still not in extended mode. Extended mode
revamped the instruction format to add an explicit base register (somewhat
reminiscent of the 360), increased the size of the bank descriptor tables to
32768 entries, and established 8 categories of bank descriptor tables. The
new 36-bit virtual address consisted of a 3-bit field that selected the bank
descriptor table, 15 bits that selected the bank, and 18 bits of offset.
Adjacent entries could be combined into larger banks of up to 24-bits of
address space (16 million words) or the whole table could be used as one
giant bank.
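
To make the field layout concrete, here is a small C sketch of how
such a 36-bit virtual address splits apart (the helper is
illustrative; only the field widths come from the description above):

    #include <stdint.h>

    /* 36-bit extended-mode virtual address:
         top 3 bits   -> which of the 8 bank descriptor tables
         next 15 bits -> which of the 32768 banks in that table
         low 18 bits  -> word offset within the bank (256K words) */
    typedef struct { unsigned table, bank; uint32_t offset; } VAddr;

    static VAddr split_va(uint64_t va) {
        VAddr v;
        v.offset = (uint32_t)(va & 0x3FFFF);         /* 18 bits */
        v.bank   = (unsigned)((va >> 18) & 0x7FFF);  /* 15 bits */
        v.table  = (unsigned)((va >> 33) & 0x7);     /*  3 bits */
        return v;
    }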

--
Ron Smith
Unisys
Roseville, MN

"J Ahlstrom" <jahl...@cisco.com> wrote in message
news:3C88F729...@cisco.com...


> Gordon Bell has said that the hardest mistake
> to deal with in a computer architecture is too few
> memory address bits.
>

> There certainly have been lots of attempts to try to fix
> and work around this problem.
>
> Does anyone have documentation on the various
> features and mechanisms used by
> Univac for 1100 - 2200
> Burroughs for A Series
> IBM for 370 et seq
> any other architectures/series that lasted long
> enough to face the problem
> to overcome their limitations?
>
> For an article for Annals of Computer History
> on this topic, I would like documentation
> and engineering notes, names of people involved, ...
> impact on operating systems, ...
>
> Thank you
>
> John K Ahlstrom
>
> --
> "C++ is more of a rube-goldberg type thing full of high-voltages,
> large chain-driven gears, sharp edges, exploding widgets, and spots to
> get your fingers crushed. And because of its complexity many (if not
> most) of its users don't know how it works, and can't tell ahead of
> time what's going to cause them to lose an arm." -- Grant Edwards
>
>


Bill Todd

Mar 11, 2002, 10:10:15 AM

"John R Levine" <jo...@iecc.com> wrote in message
news:a6hh3u$7on$1...@xuxa.iecc.com...

> >> >When the 11 was designed (design completed in 1969 IIRC), a 16-bit address
> >> >space *was* a major extension over the 12-bit address space that was selling
> >> >like hot-cakes on the PDP-8.
> >>
> >> Not really.
> >
> >Yes, really: you're confusing physical address space with virtual address
> >space
>
> No, really, I'm not. A PDP-8 only had one address space. (Well,
> unless you had the TSS-8 modifications which, while cool, weren't used
> much.)

Yes, really, you are. The 8 had a 12-bit virtual address. The fact that
you could move this address around to different portions of physical memory
doesn't change that, any more than the fact that you can move the 11's
16-bit virtual addressing around in far more than 64 KB of physical memory
changes its basic address width.

>
> >> You could put up to 32K 12-bit words on a PDP-8, using a
> >> bank switching scheme that sort of foreshadowed x86 segments. The
> >> original 16-bit PDP-11 only offered half a bit more than that: 64K
> >> 8-bit bytes vs. 32K 12-bit words.
>
> User programs did their own bank switching on the -8, somewhat like
> programs on the 8086 (still popular in embedded applications) today.
> If you had 32K of expensive core, one program could and did use it
> all.

Just as one program can use almost the entire 4 MB of physical memory that
the 11/70 (now J11) supports. The problem is that it can't do it
*transparently* to the program, any more than a program on the 8 could use
more than 4KW (or a program on a 16-bit PC could use more than 64 KB:
though the separation of code, data, stack, etc. segments allowed use of
more than that at one time, the program still had to be cognizant of what
was in which segment).

So I'll reiterate my original statement: the 11 with its 16-bit virtual
address space marked a significant step beyond the 8 with its 12-bit address
space. If you want to talk the maximum amount of memory a single program
could address non-transparently, the numbers are 32 KW for the 8 and, after
the 11/70 appeared, 4 MB for the 11, but IIRC Bell's comment - the basis for
this discussion - referred to virtual address space, not physical address
space.

- bill

Chris Jones

Mar 11, 2002, 10:21:16 AM
Terje Mathisen <terje.m...@hda.hydro.com> writes:

[...]

PS. It really didn't help that it had to take a full interrupt for each
character sent or received over the terminal ports. :-(

That's nothing: early DG Novas interrupted for every BIT (which may or
may not have been an improvement over having to poll)!

Michael G. Dobbins

Mar 11, 2002, 11:06:09 AM
"Chris Jones" <c...@theWorld.com> wrote in message
news:tdn3cz7...@shell01.TheWorld.com...

The DG Nova had a 16-bit register with each bit tied to a different terminal
line, so it could support 16 terminals. The processor was interrupted at 5
times the bit rate to be able to sample the state of all 16 lines in one
word. This was done to be able to detect level transitions on the lines and
then count to 3 to get to the center of the bit. Since there was always a
transition from the stop bit to the start bit, the processor was able to get
close enough to the center of the start bit and keep the timing drift close
enough to the center of the rest of the bits in that single byte. I
actually played with writing one of these.
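
Roughly, the sampling loop looked like this (a C sketch written from
the description above -- the 5x oversampling and count-to-3 are as
described, everything else is illustrative):

    #include <stdint.h>

    typedef struct { int active, phase; uint16_t shift; } Line;

    /* Called from the interrupt, 5 times per bit time; `levels` holds
       the current level of all 16 lines, one bit each. */
    static void sample(uint16_t levels, Line line[16]) {
        for (int i = 0; i < 16; i++) {
            int lvl = (levels >> i) & 1;
            Line *l = &line[i];
            if (!l->active) {                 /* idle: watch for the   */
                if (lvl == 0) {               /* start-bit transition  */
                    l->active = 1; l->phase = 0; l->shift = 0;
                }
                continue;
            }
            l->phase++;
            if (l->phase % 5 != 3) continue;  /* count to 3: mid-bit   */
            int bit = l->phase / 5;           /* 0=start, 1..8=data    */
            if (bit == 0) {
                if (lvl != 0) l->active = 0;  /* start bit was noise   */
            } else if (bit <= 8) {
                l->shift |= (uint16_t)lvl << (bit - 1);  /* LSB first  */
            } else {
                /* mid stop bit: the byte in l->shift is complete */
                l->active = 0;
            }
        }
    }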


George Gray

Mar 11, 2002, 11:17:23 AM
These problems go back to the UNIVAC I, II, and III. The UNIVAC I had
just 1,000 words (at twelve 6-bit characters per word) and this had
only expanded to 2,000 words in the II. These small amounts of memory
had been very confining, and it was clear that the III needed more.
The I and II had used only decimal arithmetic. Their instruction
format did not permit more than four decimal digits for the memory
address, so it would not be able to represent addresses greater than
9,999. The III was designed to have from 8,192 to 32,768 words of
memory, so the old instruction format was inadequate.

Since core memory was expensive, it was decided to use a small word
size: 25 data bits, plus two more parity bits. The addressing was
done in binary. Using the old decimal approach, it would have taken
30 bits (5 x 6 bits per digit) to represent an address of 32,768; in
binary, this could be done in fifteen bits. The III's word size of 25
bits may seem rather odd, but it was chosen to provide some degree of
data compatibility with the I and II, where a word held twelve 6-bit
digits. A triple-word (75 bits) on the III could accommodate data
from the I and II. The instructions fit in one word, and were laid out
as follows:

1 bit indirect addressing indicator
4 bits index (base) register number
6 bits operation code
4 bits arithmetic register number or register for operand indexing
10 bits address displacement

The small word size necessitated using a scheme of base register plus
displacement for memory references that had been suggested by Presper
Eckert. After the indirect addressing bit, the four bits for the
arithmetic register number (there were four arithmetic registers), and
six for the operation code (giving 64 possible operations--not a large
number), just fourteen were left for the memory address, but fifteen
would be needed to reach 32,768. Eckert's solution was to compute the
address as the sum of the 10-bit displacement field and the contents
of one of the fifteen index registers, which functioned as a base
register.
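
A minimal C sketch of that effective-address computation (the bit
positions are a plausible top-to-bottom packing; only the field
widths come from the layout above):

    #include <stdint.h>

    /* 25-bit instruction word, high to low:
         1 bit indirect | 4 bits index reg | 6 bits opcode |
         4 bits arithmetic reg | 10 bits displacement */
    static uint16_t effective_address(uint32_t insn,
                                      const uint16_t index_reg[16]) {
        unsigned x    = (insn >> 20) & 0xF;   /* index (base) register */
        unsigned disp = insn & 0x3FF;         /* 10-bit displacement   */
        /* 10-bit displacement + 15-bit base reaches all 32,768 words */
        return (uint16_t)((index_reg[x] + disp) & 0x7FFF);
    }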

Paul Winalski

Mar 11, 2002, 11:58:13 AM
On Sat, 09 Mar 2002 09:26:32 GMT, Carl Nelson
<carl....@mcmail.maricopa.edu> wrote:

>Also, I remember the original VAX-11/780s originally had 1MB of memory, with the
>option of expanding to a whopping 2MB! It sounds tiny now, but we did support
>over 200 concurrent interactive logins running a fairly large application. We
>now have 448MB memory supporting about 400 concurrent users, but a whole bunch
>of ancillary software as well.

The original plans for the VAX-11/780 called for a minimum physical
memory size of 256KB, with 512KB, 1MB, and 2MB configurations
also available. During the development of VAX/VMS it became clear
pretty quickly that 256KB was not viable; 512KB remained the minimum
supported configuration until, IIRC, the release of VAX/VMS version 4.
The few remaining customers with 512KB boxes (mainly 11/750s) were
given free upgrades to 1MB.

--PSW

----------
Remove 'Z' to reply by email.

Anne & Lynn Wheeler

Mar 11, 2002, 1:34:36 PM

pr...@ZAnkh-Morpork.mv.com (Paul Winalski) writes:
> The IBM S/360 instruction set provided for 24-bit (16MB) virtual
> addresses. The big stumbling block to extending this was the
> design of the subroutine call instructions Branch and Link (BAL)
> and Branch and Link Register (BALR). BAL stores two
> pieces of information: the return address for the subroutine call
> and some state bits (2 bits of condition code and a 4-bit "program
> mask" indicating which exceptions (arithmetic overflow/underflow,
> etc.) are enabled). BAL/BALR put the return address in the low
> 24 bits of one of the general-purpose registers and the state bits
> in the high 8 bits of the same register. Hence, you couldn't expand
> the virtual address range beyond 24 bits without killing any program
> that does subroutine calls. The 360/67, which supported 32-bit
> virtual addresses, added Branch and Store instructions (BAS/BASR)
> that put the state information elsewhere, thus allowing the full
> 32 bits of the return register to be used for the return address.
> S/370 left enough bits all over the privileged parts of the
> architecture to support 32-bit addressing, but it took IBM a long
> time to write all of the 24-bit dependencies out of their code, and
> it wasn't until years after S/370's first release that we saw XA
> support.

slight note: while the 360/67 supported 32bit addressing (virtual memory,
and hardware address relocation, etc) ... XA/811, introduced w/3081,
was 31bit addressing (not 32bit). If I remember correctly, one of the
(32/31) issues was the operation of the BXLE/BXH instructions ... which
did arithmetic operations ... but whose register contents were typically
used for addressing.
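
An illustration in C of the packing Paul describes, and why BAS/BASR freed the whole register (this shows the bit layout only, not IBM code; the high byte of the BAL link also carries the 2-bit instruction-length code):

#include <stdint.h>

#define ADDR_MASK_24 0x00FFFFFFu

/* BAL/BALR: return address confined to the low 24 bits, state packed
   into the high byte, so any address wider than 24 bits is lost. */
uint32_t bal_style_link(uint32_t return_addr, uint8_t ilc_cc_mask)
{
    return ((uint32_t)ilc_cc_mask << 24) | (return_addr & ADDR_MASK_24);
}

/* BAS/BASR: the whole register carries the return address, so the
   linkage convention survives 31/32-bit addressing. */
uint32_t bas_style_link(uint32_t return_addr)
{
    return return_addr;
}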

random refs:
http://www.garlic.com/~lynn/93.html#14 S/360 addressing
http://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
http://www.garlic.com/~lynn/98.html#36 What is MVS/ESA?
http://www.garlic.com/~lynn/99.html#190 Merced Processor Support at it again
http://www.garlic.com/~lynn/2000c.html#79 Unisys vs IBM mainframe comparisons
http://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit?
http://www.garlic.com/~lynn/2000g.html#28 Could CDR-coding be on the way back?
http://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
http://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
http://www.garlic.com/~lynn/2001k.html#16 Minimalist design (was Re: Parity - why even or odd)
http://www.garlic.com/~lynn/2001l.html#24 mainframe question
http://www.garlic.com/~lynn/2001l.html#36 History
http://www.garlic.com/~lynn/2001m.html#43 FA: Early IBM Software and Reference Manuals
http://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
http://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
http://www.garlic.com/~lynn/2002c.html#40 using >=4GB of memory on a 32-bit processor

--
Anne & Lynn Wheeler | ly...@garlic.com - http://www.garlic.com/~lynn/

glen herrmannsfeldt

unread,
Mar 11, 2002, 2:46:39 PM3/11/02
to
"Bill Todd" <bill...@metrocast.net> writes:

(snip discussion about PDP8 and PDP11 addressing)

>Yes, really, you are. The 8 had a 12-bit virtual address. The fact that
>you could move this address around to different portions of physical memory
>doesn't change that, any more than the fact that you can move the 11's
>16-bit virtual addressing around in far more than 64 KB of physical memory
>changes its basic address width.

Physical address size is the amount of real, physical, hold in your hand,
memory that a machine can address. (Even if you can't afford it, or
even imagine a way to attach it.)

Virtual is a little harder to define. Modern x86 can be said to
have a 45 bit virtual address space, with 13 bits for segment selectors,
and 32 within each segment. (The low three bits of segment selectors
are ring and local/global, so I don't count them.) It is fairly easy
to write compilers to load segment selectors for both code and data
references. Now, even though the Pentium II had a 45 bit virtual
and 36 bit physical address space, it only has a 32 bit MMU datapath.
This makes it difficult to use the 45 bit virtual address space,
but it is still there, anyway.
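
A sketch of the selector arithmetic behind that 45-bit figure, assuming the usual selector layout (13-bit index, table-indicator bit, 2-bit RPL); the struct and helper names are invented:

#include <stdint.h>

typedef struct {
    uint16_t selector;   /* 13-bit index | table indicator | 2-bit RPL */
    uint32_t offset;     /* 32-bit offset within the segment           */
} far_ptr;

static unsigned selector_index(uint16_t sel)
{
    return sel >> 3;     /* strip the RPL and TI bits */
}

/* the notional 45-bit "address": 13 bits of index over a 32-bit offset */
static uint64_t notional_45bit_addr(far_ptr p)
{
    return ((uint64_t)selector_index(p.selector) << 32) | p.offset;
}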

>> > You could put up to 32K 12-bit words on a PDP-8, using a
>> >> bank switching scheme that sort of foreshadowed x86 segments. The
>> >> original 16-bit PDP-11 only offered half a bit more than that: 64K
>> >> 8-bit bytes vs. 32K 12-bit words.
>>
>> User programs did their own bank switching on the -8, somewhat like
>> programs on the 8086 (still popular in embedded applications) today.
>> If you had 32K of expensive core, one program could and did use it
>> all.

>Just as one program can use almost the entire 4 MB of physical memory that
>the 11/70 (now J11) supports. The problem is that it can't do it
>*transparently* to the program, any more than a program on the 8 could use
>more than 4KW (or a program on a 16-bit PC could use more than 64 KB:
>though the separation of code, data, stack, etc. segments allowed use of
>more than that at one time, the program still had to be cognizant of what
>was in which segment).

Transparent is hard to define. If a compiler can generate code for it,
then it is probably transparent enough. If a single CSECT
(subroutine, function, etc.) is limited to 64K or so, but one can
transparently (to the high-level language programmer) call between
them that is probably good enough. For data access, you might
limit the range of each subscript of an array, but allow for large
multidimensional arrays. Again, how transparent it is depends on
how it is used.

>So I'll reiterate my original statement: the 11 with its 16-bit virtual
>address space marked a significant step beyond the 8 with its 12-bit address
>space. If you want to talk the maximum amount of memory a single program
>could address non-transparently, the numbers are 32 KW for the 8 and, after
>the 11/70 appeared, 4 MB for the 11, but IIRC Bell's comment - the basis for
>this discussion - referred to virtual address space, not physical address
>space.

I don't know how much of this compilers will do for you, so it
may not be transparent enough.

-- glen

Bill Todd

unread,
Mar 11, 2002, 4:51:53 PM3/11/02
to

"glen herrmannsfeldt" <g...@ugcs.caltech.edu> wrote in message
news:a6j1iv$a...@gap.cco.caltech.edu...

> "Bill Todd" <bill...@metrocast.net> writes:
>
> (snip discussion about PDP8 and PDP11 addressing)
>
> >Yes, really, you are. The 8 had a 12-bit virtual address. The fact that
> >you could move this address around to different portions of physical memory
> >doesn't change that, any more than the fact that you can move the 11's
> >16-bit virtual addressing around in far more than 64 KB of physical memory
> >changes its basic address width.
>
> Physical address size is the amount of real, physical, hold in your hand,
> memory that a machine can address. (Even if you can't afford it, or
> even imagine a way to attach it.)
>
> Virtual is a little harder to define.

I can agree with that, though it would not surprise me if a
generally-accepted definition does exist that's more in line with what I've
been saying than with what you say below.

> Modern x86 can be said to
> have a 45 bit virtual address space, with 13 bits for segment selectors,
> and 32 within each segment. (The low three bits of segment selectors
> are ring and local/global, so I don't count them.) It is fairly easy
> to write compilers to load segment selectors for both code and data
> references.

I'll at least start to entertain that suggestion when you show me some
commonly-used compilers that will happily handle a terabyte array on x86
(there could be some - I'm hardly a compiler junkie). Though (since I *am*
an old assembler junkie) I still would have major reservations about
defining something like hardware virtual address space based on software
trickery.

> Now, even though the Pentium II had a 45 bit virtual
> and 36 bit physical address space,

You just jumped from 'can be said to have' to 'had'. Subtle, but worth
noting.

> it only has a 32 bit MMU datapath.
> This makes it difficult to use the 45 bit virtual address space,
> but it is still there, anyway.
>
> >> > You could put up to 32K 12-bit words on a PDP-8, using a
> >> >> bank switching scheme that sort of foreshadowed x86 segments. The
> >> >> original 16-bit PDP-11 only offered half a bit more than that: 64K
> >> >> 8-bit bytes vs. 32K 12-bit words.
> >>
> >> User programs did their own bank switching on the -8, somewhat like
> >> programs on the 8086 (still popular in embedded applications) today.
> >> If you had 32K of expensive core, one program could and did use it
> >> all.
>
> >Just as one program can use almost the entire 4 MB of physical memory that
> >the 11/70 (now J11) supports. The problem is that it can't do it
> >*transparently* to the program, any more than a program on the 8 could use
> >more than 4KW (or a program on a 16-bit PC could use more than 64 KB:
> >though the separation of code, data, stack, etc. segments allowed use of
> >more than that at one time, the program still had to be cognizant of what
> >was in which segment).
>
> Transparent is hard to define. If a compiler can generate code for it,
> then it is probably transparent enough.

Interesting. I suspect that one could create a compiler for the PDP-11 that
could handle that terabyte array I mentioned above (though the paging
activity if you ever tried to process it all would be truly impressive) -
using only conventional overlay mechanisms. And if you restrict code and
data 'chunks' to small sizes (as you suggest below - 8 KB or 16 KB would
work well on the 11) it becomes almost easy. Does that mean that after all
these years the 11 actually had a 40-plus-bit virtual address space? Shock,
as the British industry tabloids would say.

> If a single CSECT
> (subroutine, function, etc.) is limited to 64K or so, but one can
> transparently (to the high-level language programmer) call between
> them that is probably good enough. For data access, you might
> limit the range of each subscript of an array, but allow for large
> multidimensional arrays. Again, how transparent it is depends on
> how it is used.

I have this feeling that a good definition of the virtual address width of a
processor really shouldn't depend on what you're doing with it, but that's
just a gut reaction.

- bill

Bruce Hoult

unread,
Mar 11, 2002, 5:24:45 PM3/11/02
to
In article <a6ikad$k00$1...@trsvr.tr.unisys.com>,
"Michael G. Dobbins"
<michael.dob...@No.Spam.unisys.No.Spam.com.No.Spam> wrote:

Was this happening all the time, or only when a start bit transition was
seen on one of the lines?

-- Bruce

Anne & Lynn Wheeler

unread,
Mar 11, 2002, 5:46:47 PM3/11/02
to
nm...@cus.cam.ac.uk (Nick Maclaren) writes:
>
> There was a 16-bit line still present in MVS/370, which was removed
> in MVS/XA. That was a relic of a MUCH older era and, by then,
> affected only the location of device entries and similar stuff.

... aka UCB ... unit control block ... basically device descriptors were
addressed with 16bit values ... so all UCBs (device definitions) had to
reside in the first 64k bytes of the address space. This also represented
a limitation on the total number of devices that could be defined. This
was an operating system convention ... not a hardware convention.

370 hardware I/O architecture introduced an addition to the 360 CCWs
(channel command word, aka methodology to define I/O operations)
called IDALs.

In standard 360 hardware I/O, CCWs were

* 8 bit operation indicator (the I/O "op-code")
* 24 bit address
* 8 bit misc. flags
* 8 bits reserved
* 16 bit count

The 8 bit operation indicator specified things like read, write,
control, etc. operations.

The 24 bit address specified the target for read/write operations,
control information, etc. The CCWs and the target of the CCWs had to
be within 16mbyte.

IDALs were one or more full-word (32bit) InDirect Address Lists. In
370, the address for read/write operations could be moved into IDALs,
a flag set in the "misc flags" indicating use of IDALs, and the CCW
would point to an IDAL (rather than directly to target address). The
CCWs and IDALs had to reside within the first 16mbytes of memory, but
the IDALs could be used for I/O transfers addressing 2gbytes (31bits).
IDALs were part of 370 w/o 31bit address, but used to improve
scatter/gather non-contiguous (real) memory I/O transfers.
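
Rendered as a C struct, the format-0 layout above looks roughly like this (field widths from the post; the 0x04 IDA flag value is my recollection of the 370 flag byte, so treat it as an assumption):

#include <stdint.h>

#define CCW_FLAG_IDA 0x04u   /* address points at an IDAL, not at data */

typedef struct {
    uint8_t  opcode;     /* the I/O "op-code": read, write, control... */
    uint8_t  addr[3];    /* 24-bit data address, or IDAL address       */
    uint8_t  flags;      /* chain-data, chain-command, ..., IDA        */
    uint8_t  reserved;
    uint16_t count;      /* byte count for the transfer                */
} ccw_format0;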

IDALs were used for 3033 32mbyte real memory option (late in the 370
life-cycle but pre XA/811 31-bit addressing). Instructions were still
24bit, but a unused bit in the page table entry was used to extend the
"real page number value" (instead of 12bit value for up to 4096 4kbyte
real pages, it could specify a 13bit real page number, for up to 8192
4kbyte real pages, TLB and cache also needed adjusting) and IDALs were
used to perform I/O operations involving addresses above the 16mbyte
real storage line. Also introduced in 3033 was something called dual
address space ... two separate 16mbyte virtual address spaces that
could be accessed sort-of simultaneously. The "problem" was that the
MVS kernel and much of the system services were co-located in the
application's (16mbyte) virtual address space; as kernel functions grew
& expanded, the portion of virtual address space available to the
application shrank. Dual address space was a gimmick for getting part of
the kernel out of the application address space ... while still allowing
kernel services that were implemented based on directly addressing
data in the application address space to work.

31bit addressing came along with the 3081 and XA/811 (aka 370-XA
... the internal code name was 811; 11/78). CCWs and IDALs were still
constrained to be in the first 16mbyte of real storage.

ccw format
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IESHBE01/2.17.2.3?SHELF=&DT=19970319164503

comparison of 370 & 370-xa
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/F.0?DT=19970613131822#HDRAF1H1

there is 390 IO overview in following document:
http://www.linuxhq.com/kernel/v2.2/patch/pre-patch-2.2.15-19/linux.15p19_Documentation_Debugging390.txt.html

discussion of 390 "addresses"
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.12.2

random ref:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CONTENTS?SHELF=
http://publibz.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/DZ9ZR000/CONTENTS?SHELF=&DT=20020212195453

Now to get really confusing, dual address space then sort-of evolved
into access registers and multiple address spaces (up to 16) where
program call kinds of operations (w/o going thru kernel call overhead)
can result in a controlled address space switch:

http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/3.8?DT=19970613131822

from above:

3.8.1 Changing to Different Address Spaces

A program can cause different address spaces to be addressable by
using the semiprivileged SET ADDRESS SPACE CONTROL instruction to
change the translation mode to the primary-space mode, secondary-space
mode, access-register mode, or home-space mode. However, SET ADDRESS
SPACE CONTROL can set the home-space mode only in the supervisor
state. The program can cause still other address spaces to be
addressable by using other semiprivileged instructions to change the
segment-table designations in control registers 1 and 7 and by using
unprivileged instructions to change the contents of the access
registers. Only the privileged LOAD CONTROL instruction is available
for changing the home segment-table designation in control register
13.

=====================================================

http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/2.3.5?DT=19970613131822

2.3.5 Access Registers


The CPU has 16 access registers numbered 0-15. An access register
consists of 32 bit positions containing an indirect specification (not
described here in detail) of a segment-table designation. A
segment-table designation is a parameter used by the
dynamic-address-translation (DAT) mechanism to translate references to
a corresponding address space. When the CPU is in a mode called the
access-register mode (controlled by bits in the PSW), an instruction B
field, used to specify a logical address for a storage-operand
reference, designates an access register, and the segment-table
designation specified by the access register is used by DAT for the
reference being made. For some instructions, an R field is used
instead of a B field. Instructions are provided for loading and
storing the contents of the access registers and for moving the
contents of one access register to another.

Each of access registers 1-15 can designate any address space,
including the current instruction space (the primary address
space). Access register 0 always designates the current instruction
space. When one of access registers 1-15 is used to designate an
address space, the CPU determines which address space is designated by
translating the contents of the access register. When access register
0 is used to designate an address space, the CPU treats the access
register as designating the current instruction space, and it does not
examine the actual contents of the access register. Therefore, the 16
access registers can designate, at any one time, the current
instruction space and a maximum of 15 other spaces.

Paul Winalski

unread,
Mar 11, 2002, 6:08:47 PM3/11/02
to
On Mon, 11 Mar 2002 22:46:47 GMT, Anne & Lynn Wheeler
<ly...@garlic.com> wrote:

>IDALs were one or more full-word (32bit) InDirect Address Lists.

I always thought it was Indirect Data Address List.

> In
>370, the address for read/write operations could be moved into IDALs,
>a flag set in the "misc flags" indicating use of IDALs, and the CCW
>would point to an IDAL (rather than directly to target address). The
>CCWs and IDALs had to reside within the first 16mbytes of memory, but
>the IDALs could be used for I/O transfers addressing 2gbytes (31bits).
>IDALs were part of 370 w/o 31bit address, but used to improve
>scatter/gather non-contiguous (real) memory I/O transfers.

Since the dynamic address translation hardware was part of the
CPU and not necessarily accessible by the I/O channels, the
mapping of virtual address to physical address had to be done
via some other means, and that is where the IDAL came in. The
IDAL told the I/O channel what physical memory address corresponded
to the virtual address in the CCW. It had to be a list to perform
page scatter/gather.

>31bit addressing came along with the 3081 and XA/811 (aka 370-XA
>... the internal code name was 811; 11/78). CCWs and IDALs were still
>constrained to be in the first 16mbyte of real storage.

I've always wondered--why was it only 31 bit addressing, not 32 bit?
What was the high bit used for?

Michael G. Dobbins

unread,
Mar 11, 2002, 6:23:54 PM3/11/02
to

"Bruce Hoult" <br...@hoult.org> wrote in message
news:bruce-82F1D8....@copper.ipg.tsnz.net...

> In article <a6ikad$k00$1...@trsvr.tr.unisys.com>,
> "Michael G. Dobbins"
> > The DG Nova had a 16 bit register with each bit tied to a different terminal
> > line so it could support 16 terminals. The processor was interrupted at 5
> > times the bit rate to be able to sample the state of all 16 lines in one
> > word. This was done to be able to detect level transitions on the lines and
> > then count to 3 to get to the center of the bit. Since there was always a
> > transition from the stop bit to the start bit, the processor was able to get
> > close enough to the center of the start bit and keep the timing drift close
> > enough to the center of the rest of the bits in that single byte. I
> > actually played with writing one of these.
>
> Was this happening all the time, or only when a start bit transition was
> seen on one of the lines?

IIRC, the register was just a reflection of the line state and there was no
hardware to interrupt, so if you wanted to know what was going on you had to
poll the raw line state. Ultimate in cheap interfaces :-)
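
For the curious, a hypothetical C sketch of such a polled receiver: sample the 16-bit line register at 5x the bit rate, arm on a start-bit edge, then count samples to land near each bit center (all names invented; stop-bit checking omitted):

#include <stdint.h>

extern uint16_t read_line_state(void);   /* the 16-bit line register */

struct rx_line {
    uint8_t active;      /* currently receiving a character?  */
    uint8_t subcount;    /* 5x samples until next bit center  */
    uint8_t bits_left;   /* data bits still to collect        */
    uint8_t shift;       /* assembled character, LSB first    */
};

void poll_tick(struct rx_line ln[16])
{
    uint16_t state = read_line_state();
    for (int i = 0; i < 16; i++) {
        int level = (state >> i) & 1;
        if (!ln[i].active) {
            if (level == 0) {            /* start-bit edge (idle is high) */
                ln[i].active = 1;
                ln[i].subcount = 7;      /* ~1.5 bit times to bit 0 center */
                ln[i].bits_left = 8;
                ln[i].shift = 0;
            }
        } else if (--ln[i].subcount == 0) {
            ln[i].shift = (uint8_t)((ln[i].shift >> 1) | (level << 7));
            ln[i].subcount = 5;          /* next center, one bit later */
            if (--ln[i].bits_left == 0)
                ln[i].active = 0;        /* done; a real one would also
                                            wait out the stop bit */
        }
    }
}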


Anne & Lynn Wheeler

unread,
Mar 11, 2002, 8:02:35 PM3/11/02
to
pr...@ZAnkh-Morpork.mv.com (Paul Winalski) writes:
> Since the dynamic address translation hardware was part of the
> CPU and not necessarily accessible by the I/O channels, the
> mapping of virtual address to physical address had to be done
> via some other means, and that is where the IDAL came in. The
> IDAL told the I/O channel what physical memory address corresponded
> to the virtual address in the CCW. It had to be a list to perform
> page scatter/gather.

both CCWs and IDALs were real addresses. The CCW pointed to the IDAL.

In pre-31bit 370, IDAL was used for scatter-gather I/O efficiency.

there were all these pre-virtual memory applications that generated
CCWs thinking they were running in real memory. The supervisor
received control and had to copy the CCWs to kernel memory and replace
all the virtual addresses with real addresses. The original (virtual)
CCW may have specified a contiguous transfer virtual address range
that was actually mapped to discontiguous real pages. The IDALs were
used to list the discontiguous real addresses (mostly when I/O
transfer specified a range of bytes that crossed one or more page
boundaries).

So for (application) CCWs that had specified a virtual address for an
I/O transfer that crossed one or more page boundaries ... instead of
mapping into a single real address ... they mapped into multiple
(typically) discontiguous real addresses (which were listed in an
IDAL), with the real, translated CCW pointing to the IDAL (instead of
directly to the real data address).
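
In C, that translation step might be sketched as follows, with a hypothetical virt_to_real() standing in for the page-table lookup: one IDAW per real page touched by the virtually contiguous transfer:

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096u

extern uint32_t virt_to_real(uint32_t vaddr);   /* page-table lookup */

/* fill idal[] with one real address per page touched; returns the
   number of indirect data address words written */
size_t build_idal(uint32_t vaddr, uint32_t count, uint32_t idal[])
{
    size_t n = 0;
    uint32_t end = vaddr + count;
    while (vaddr < end) {
        idal[n++] = virt_to_real(vaddr);
        /* the first IDAW may start mid-page; later ones are aligned */
        vaddr = (vaddr & ~(PAGE_SIZE - 1)) + PAGE_SIZE;
    }
    return n;
}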

There are provisions in standard 360 & 370 CCWs for scatter/gather
using a sequence of multiple data-chained CCWs (this is the method
that CP/67 used for translating virtual machine CCWs to real CCWs),
however there could be some timing sensitive situations involving
changing a single CCW into a sequence of two or more data-chained CCWs
(which was addressed by the 370 IDAL feature).

This goes into the deep dark reaches of I/O and CCW architecture. The
I/O architecture defines CCW operation as synchronous ... the current
CCW is completely executed before the next CCW is fetched (even if it
involves data-chaining CCWs). This synchronous CCW fetch & execution of
multiple CCWs in place of a single CCW would introduce latencies that
could result in anomalous results (possibly even data transfer
overruns) in various timing sensitive situations. (Pre)fetching
multiple IDAWs in an IDAL had no such limitation. The "synchronous"
specification, in part, allowed for any value in a subsequent CCW to be
modified ... even while the current CCW was in the process of
execution (the I/O programming equivalent of self-modifying code).

--
Anne & Lynn Wheeler | ly...@garlic.com - http://www.garlic.com/~lynn/

Heinz W. Wiggeshoff

unread,
Mar 11, 2002, 9:13:16 PM3/11/02
to
Paul Winalski (pr...@ZAnkh-Morpork.mv.com) writes:
>
...
> I've always wondered--why was it only 31 bit addressing, not 32 bit?
> What was the high bit used for?

Argument list delimiter, in assembler and IIRC PL/I.
The last address in the list has the high bit set to 1.

(One wouldn't want to rewrite BILLIONS of lines to kill that convention,
right? B-)

John Homes

unread,
Mar 11, 2002, 9:34:00 PM3/11/02
to

"Paul Winalski" <pr...@ZAnkh-Morpork.mv.com> wrote in message
news:3c8d384a....@proxy.news.easynews.com...

> On Mon, 11 Mar 2002 22:46:47 GMT, Anne & Lynn Wheeler
> <ly...@garlic.com> wrote:
>
(snip discussion of IBM XA architecture)

>
> I've always wondered--why was it only 31 bit addressing, not 32 bit?
> What was the high bit used for?
>

In the PC, it selected between 24 and 31 bit addressing.

In lists of addresses in store, it marked the last entry in the list. This
was a software convention that predated 31-bit.

John Homes

Stephen Fuld

unread,
Mar 12, 2002, 1:58:13 AM3/12/02
to

"Anne & Lynn Wheeler" <ly...@garlic.com> wrote in message
news:ulmcya...@earthlink.net...

> nm...@cus.cam.ac.uk (Nick Maclaren) writes:
> >
> > There was a 16-bit line still present in MVS/370, which was removed
> > in MVS/XA. That was a relic of a MUCH older era and, by then,
> > affected only the location of device entries and similar stuff.
>
> ... aka UCB ... unit control block ... basically device descriptor was
> addressed with 16bit values ... so all UCBs (device definitions) had to
> reside in the first 64k bytes of address space. This also represented
> a limitation on the total number of devices that could be defined. This
> was an operating system convention ... not a hardware convention.

You guys know far more about it than I do, but then why, in the late 1990s,
when I regularly attended Share meetings, were the MVS guys (IBMers) talking
about virtual storage constraint relief (moving things within MVS so that
they could live "above the line") and about running out of
"below the line" space?

--
- Stephen Fuld
e-mail address disguised to prevent spam


Nick Maclaren

unread,
Mar 12, 2002, 4:06:32 AM3/12/02
to
In article <9Ghj8.4025$tP2.4...@bgtnsc05-news.ops.worldnet.att.net>,

Because all I/O had to be done from "below the line" (24-bit),
and some compilers, libraries and load modules had rather deeply
embedded assumptions about the top byte in addresses. If you
could remove chunks of the system to above the line, it effectively
increased the space below the line.

The 16-bit line affected only a VERY few, very systemish things.
If I recall correctly, there were a couple of things other than
UCBs, and there was something that had some hardware implications,
but there was nothing that would impact an ordinary program to any
great extent. I mentioned it only because it was a relic of a much
earlier era, when the 16-bit line still had some meaning :-)


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679

Patrick Schaaf

unread,
Mar 12, 2002, 4:19:55 AM3/12/02
to
nm...@cus.cam.ac.uk (Nick Maclaren) writes:

>Because all I/O had to be done from "below the line" (24-bit),
>and some compilers, libraries and load modules had rather deeply
>embedded assumptions about the top byte in addresses. If you
>could remove chunks of the system to above the line, it effectively
>increased the space below the line.

Funny. This is isomorphic to the Linux kernel x86 HIGHMEM distinction,
which kicks in when you have 1GB RAM or more. With less than 1GB RAM,
the kernel keeps physical memory mapped 1:1. If you want to use more RAM,
you activate HIGHMEM, and all kinds of paths in the kernel start to check
whether they must map a page before access. Consequently, all kinds of
data structures in the kernel are converted to be able to run without
a direct virtual mapping - they can then live "above the line" to free
up space "below the line".

Note that this is a kernel compile time distinction. Without HIGHMEM
the overhead code goes away completely, as it does on 64bit archs.
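
For reference, the kernel-side pattern looks roughly like this, using the kmap()/kunmap() interface of the 2.4-era kernels (a simplified sketch, not kernel-ready code):

#include <linux/highmem.h>
#include <linux/string.h>

/* A page "above the line" has no permanent kernel mapping, so the
   kernel must map it before touching its contents. */
static void zero_highmem_page(struct page *page)
{
    void *va = kmap(page);     /* create a temporary kernel mapping */
    memset(va, 0, PAGE_SIZE);  /* now the page is addressable       */
    kunmap(page);              /* release the mapping slot          */
}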

best regards
Patrick

--
Le plus ca change, le plus c'est la meme chose.
(as the french say; sprinkle accents where you see fit)

CBFalconer

unread,
Mar 12, 2002, 7:06:59 AM3/12/02
to
Patrick Schaaf wrote:
>
... snip ...

>
> --
> Le plus ca change, le plus c'est la meme chose.
> (as the french say; sprinkle accents where you see fit)

I only patronize Chinese restaurants which have a 'No monosodium
glutamate used' sign posted. C'est dommage.

--
Chuck F (cbfal...@yahoo.com) (cbfal...@XXXXworldnet.att.net)
Available for consulting/temporary embedded and systems.
(Remove "XXXX" from reply address. yahoo works unmodified)
mailto:u...@ftc.gov (for spambots to harvest)


jmfb...@aol.com

unread,
Mar 12, 2002, 6:35:41 AM3/12/02
to
In article <ZF9j8.36899$uv5.3...@bin6.nnrp.aus1.giganews.com>,
"Bill Todd" <bill...@metrocast.net> wrote:

[pare that newsgroup before I attract that guy's flame again]
<snip>

>> Transparent is hard to define. If a compiler can generate code for it,
>> then it is probably transparent enough.
>
>Interesting. I suspect that one could create a compiler for the PDP-11 that
>could handle that terabyte array I mentioned above (though the paging
>activity if you ever tried to process it all would be truly impressive) -
>using only conventional overlay mechanisms. And if you restrict code and
>data 'chunks' to small sizes (as you suggest below - 8 KB or 16 KB would
>work well on the 11) it becomes almost easy.

There are many ways to cope with limited memory address space.
Hopefully, one does not do it at the compiler level (where
compiler level implies an app user compiling his app).


> .. Does that mean that after all
>these years the 11 actually had a 40-plus-bit virtual
>address space? Shock, as the British industry tabloids would say.

Just because it is possible to implement with software does not
mean that it happened. ;-)

>
> If a single CSECT
>> (subroutine, function, etc.) is limited to 64K or so, but one can
>> transparently (to the high-level language programmer) call between
>> them that is probably good enough. For data access, you might
>> limit the range of each subscript of an array, but allow for large
>> multidimensional arrays. Again, how transparent it is depends on
>> how it is used.
>
>I have this feeling that a good definition of the
>virtual address width of a
>processor really shouldn't depend on what you're doing
>with it, but that's just a gut reaction.

You've got a good gut. :-) Proof: this thread's contents.

/BAH

Subtract a hundred and four for e-mail.

Inge Birkeli

unread,
Mar 12, 2002, 10:41:57 AM3/12/02
to
Lars Poulsen <la...@beagle-ears.com> wrote in message news:<3C8A80C0...@beagle-ears.com>...
.....
> As it happened, the system had losts of stability problems.
> This was in 1975, and CMOS memory was quite new. If I remember
> correctly, Norsk Data had chosen to use SRAM because they
> believed it would be more stable than DRAM. The system failed in
......
> to replace the memory with DRAM, which by then had become the
> industry standard.

As the HW designer of the memory you describe, Lars, I'd like to comment
on some of your statements/guesses. But first I must say I'm thrilled
to read of the interest in my old babies, like the first silicon
memory systems, but also the Nord 100 as my dearest one.

There never was any SRAM used for main memory, except for caches, look
up tables and control stores (later). In fact the problems you refer
to were with DRAMs as well.

At Norsk Data we decided to replace core memory with DRAM in a bold
plan: we stopped ordering core memory before we knew the DRAMs
would work, and naturally they didn't, as old Murphy would have known,
something that very nearly killed ND.

These very first memory systems used one of the early NMOS DRAM
devices from Intel and TI. However, only Intel was able to deliver
in quantity, and unfortunately there was a meta-state bug in the chip.

The symptoms you describe were typical for the first systems with small
memories. I can remember visiting the Danish Institute for Shipbuilding
Research in order to help a Field Service Engineer with the problem
(maybe we met there?). It was only when we started building really
big memory systems that we were able to get an error frequency high
enough to trace the problem. Half a Mbyte, e.g., was really big in '75.
When we finally spotted the problem, as always, the fix was very simple.

Thanks for the interest,

Inge

(Mr. in Norway)

Stephen Fuld

unread,
Mar 12, 2002, 12:41:48 PM3/12/02
to

"Nick Maclaren" <nm...@cus.cam.ac.uk> wrote in message
news:a6kgeo$56a$1...@pegasus.csx.cam.ac.uk...

So my original point was correct. That is, there are still remnants of the
original 24 bit addressing that haven't been gotten rid of yet and are still
affecting things today.

Nick Maclaren

unread,
Mar 12, 2002, 1:14:03 PM3/12/02
to
In article <w5rj8.4644$tP2.4...@bgtnsc05-news.ops.worldnet.att.net>,

Stephen Fuld <s.f...@worldnet.att.net> wrote:
>
>
>So my original point was correct. That is, there are still remnants of the
>original 24 bit addressing that haven't been gotten rid of yet and are still
>affecting things today.

That seems very likely, but it is probably now more legacy applications
code, rather than system services. I can't tell you for sure, as it is
a long time since I was involved with MVS.

Anne & Lynn Wheeler

unread,
Mar 12, 2002, 2:42:35 PM3/12/02
to
nm...@cus.cam.ac.uk (Nick Maclaren) writes:
> That seems very likely, but it is probably now more legacy applications
> code, rather than system services. I can't tell you for sure, as it is
> a long time since I was involved with MVS.

I/O in 31bit ... CCW pointers to IDALs are 24bit pointers (which
means the IDALs have to be within the first 16mbytes). This is a hardware
issue. There are all sorts of random software issues that may still
represent residual 24bit code.

.. i don't know what the 64bit effects have on all this

--
Anne & Lynn Wheeler | ly...@garlic.com - http://www.garlic.com/~lynn/

Anne & Lynn Wheeler

unread,
Mar 12, 2002, 3:18:14 PM3/12/02
to

Anne & Lynn Wheeler <ly...@garlic.com> writes:
> .. i don't know what the 64bit effects have on all this

all from z/64bit POP
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/DZ9ZR000/CONTENTS?SHELF=&DT=20020212195453

The ORB
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/DZ9ZR000/15.6.2?DT=20020212195453

specifies the (start of) channel program as a 31bit address (CCWs can
be anywhere in first 2gbyte).

it specifies whether a channel program is (consistently) format-0 or
format-1 CCWs and (consistently) format-1 IDALs (consistently) or
format-2 IDALs

presumably format-0 CCWs could reside above the 16mbyte line (aka 31bit), but
could only specify transfers below the 16mbyte line or point to IDALs within
the first 16mbytes.

CCWs:
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/DZ9ZR000/15.6.3?DT=20020212195453

have format-0 CCW (old 24-bit address pointers ... either directly to
data or to IDAL)

and format-1 CCW (31-bit address pointers ... either directly to data
or to IDAL).

IDAW:
http://publibz.boulder.ibm.com:80/cgi-bin/bookmgr/BOOKS/DZ9ZR000/1.1.7?DT=20020212195453

they now have doubleword "format-2" (64bit) IDAWs (as well as
"format-1" (31bit) IDAWs).

Sander Vesik

unread,
Mar 12, 2002, 5:01:17 PM3/12/02
to
In comp.arch Bill Todd <bill...@metrocast.net> wrote:
>
>>
>> Physical address size is the amount of real, physical, hold in your hand,
>> memory that a machine can address. (Even if you can't afford it, or
>> even imagine a way to attach it.)
>>
>> Virtual is a little harder to define.
>
> I can agree with that, though it would not surprise me if a
> generally-accepted definition does exist that's more in line with what I've
> been saying than with what you say below.

But the "generally accepted" definition these days tends to say "flat", and
that need not be the case.

>
> Modern x86 can be said to
>> have a 45 bit virtual address space, with 13 bits for segment selectors,
>> and 32 within each segment. (The low three bits of segment selectors
>> are ring and local/global, so I don't count them.) It is fairly easy
>> to write compilers to load segment selectors for both code and data
>> references.
>
> I'll at least start to entertain that suggestion when you show me some
> commonly-used compilers that will happily handle a terabyte array on x86
> (there could be some - I'm hardly a compiler junkie). Though (since I *am*
> an old assembler junkie) I still would have major reservations about
> defining something like hardware virtual address space based on software
> trickery .
>

It depends on how you define it - if it is "the maximal size of one
contiguous object" then yes, it's always the same as the maximal size
of a segment. OTOH, if it is "the maximum amount of memory directly
accessible using pointers" then it becomes num_segments x segment_size.

This doesn't apply to the x86 directly though unless you are in
286 mode, as the "virtual" addresses get translated to a linear
32bit address first and then page tables are applied (as opposed to
having page tables attached to segments).

>
> - bill
>

--
Sander

+++ Out of cheese error +++

Gordon DeGrandis

unread,
Mar 12, 2002, 5:09:40 PM3/12/02
to
What I would like to know is why the limitation of 2MB could not be
extended for the B1900 machines? There was a limitation. Even though the
machine used interpreters and the memory was mapped for each programming
language, the modern overuse of memory caught up with the B1000 series.
The last one of the series, the B1900, was a dual processor machine but it
was still limited to 2MB of memory.

Does anyone have some insights to this limitation?

Gordon

jmfb...@aol.com wrote:

> In article <3C88F729...@cisco.com>,
> J Ahlstrom <jahl...@cisco.com> wrote:
>
>
>>Gordon Bell has said that the hardest mistake
>>to deal with in a computer architecture is too few
>>memory address bits.
>>
> <snip>
>
> I'd like to know when he finally realized this.


>
> /BAH
>
> Subtract a hundred and four for e-mail.
>


--
---------------------------------
Gordon DeGrandis - Brussels Belgium
Email address is SPAM protected please modify before sending

Andrew McWhirter

unread,
Mar 12, 2002, 5:23:45 PM3/12/02
to
Gordon DeGrandis wrote:
>
> What I would like to know is why the limitation of 2MB could not be
> extended for the B1900 machines? There was a limitation.

I'm by no means an expert but I seem to recall that the problem was
rooted in the fact that the B1000 series was *bit* addressable. 2MB =
16Mb, requiring 24 bits to address, and that's all they had.

> The last one of the series, B1900 was a dual processor machine but it
> was still limited to 2MB of memory.

The last ones were the little console machines, which we called "GEM"s
(Goleta Entry-level Machine). I think the models were B1965 for the
single cabinet and B1995 for the dual processor. The second processor
wasn't all that useful as I recall, since it was configured as a slave to
the main processor and used for I/O handling only (but I may well be
wrong on that...)

Cheers - Andrew

Bill Todd

unread,
Mar 12, 2002, 6:30:56 PM3/12/02
to

"Sander Vesik" <san...@haldjas.folklore.ee> wrote in message
news:10159704...@haldjas.folklore.ee...

> In comp.arch Bill Todd <bill...@metrocast.net> wrote:

...

> > I'll at least start to entertain that suggestion when you show me some
> > commonly-used compilers that will happily handle a terabyte array on x86
> > (there could be some - I'm hardly a compiler junkie). Though (since I *am*
> > an old assembler junkie) I still would have major reservations about
> > defining something like hardware virtual address space based on software
> > trickery .
> >
>
> It depends on how you define it - if it is "the maximal size of one
> contiguous object" then yes, its always the same as the maximal size
> of a segment. OTOH, if it "the maximum amount of memory directly
> acessible using pointers" then it becomes num_segments x segment_size.

Which only changes the question to, what's a pointer, and what's memory, and
what's 'directly accessible'? If you define a pointer as the content of a
general register (holding an address), then you're back to one segment max.
If you define a pointer as something else manipulated by software (whether
compiler-generated or otherwise) to address a range greater than that in a
single segment, then you've included everything up to traditional
disk-overlaying mechanisms and have started to get silly.

I'll admit that when you have *hardware* support for segment/address-style
pointers, as IIRC the 16-bit x86s did, things may get more blurred - though
such 'pointers' aren't otherwise directly manipulable by code the same way
'natural' pointers are and hence may or may not legitimately extend what
constitutes the nominal virtual address space. I would say they didn't, and
that up through the 286 the architecture had a 16-bit virtual address space
(because anything beyond that wasn't homogeneous/transparent at the
application code level), but will at least concede that other reasonable
people could suggest otherwise.

- bill

glen herrmannsfeldt

unread,
Mar 12, 2002, 7:02:37 PM3/12/02
to

The Watcom compilers will generate large model 32 bit code, with
48 bit pointer variables containing segment selectors and a 32 bit
offset. It might be that OS/2 supports this. I don't know of any
other OS that does.
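
Something like the following, in Watcom's dialect (the __far spelling is from memory and should be treated as an assumption):

/* 48-bit pointer: 16-bit selector + 32-bit offset; the compiler
   emits a segment-register load on each dereference. */
extern char __far *buffer;

char first_byte(void)
{
    return buffer[0];
}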

> Now, even though the Pentium II had a 45 bit virtual
>> and 36 bit physical address space,

>You just jumped from 'can be said to have' to 'had'. Subtle, but
>worth noting.

Well, I changed from the architecture (x86) to an implementation (Pentium II).

If the bank select mechanism was a defined part of the architecture
I would agree. If not, I might disagree. Does it work for code,
or just data?

> If a single CSECT
>> (subroutine, function, etc.) is limited to 64K or so, but one can
>> transparently (to the high-level language programmer) call between
>> them that is probably good enough. For data access, you might
>> limit the range of each subscript of an array, but allow for large
>> multidimensional arrays. Again, how transparent it is depends on
>> how it is used.

>I have this feeling that a good definition of the virtual address width of a
>processor really shouldn't depend on what you're doing with it, but that's
>just a gut reaction.

I would say it should be part of the hardware architecture. Though
virtual storage tends to require software support, it is usually
limited by hardware.

-- glen

Chuck Stevens

unread,
Mar 12, 2002, 7:44:14 PM3/12/02
to
All of the following is my personal opinion. Starting at the end and
working backward:

Though the B1995 was indeed master-slave, the slave was not limited to I/O
handling; in fact, IIRC it was the master that had that duty (via GISMO),
along with processor scheduling (also in GISMO). Other interpreters could
run on either processor (including the SDL interpreter for the oeperating
system). The slave was fully available for everything else.

I recall several well-optimized dual-processor DMSII shops that ran
somewhere around 180% user processor utilization or even more, and yes,
disabling one processor on such a machine indeed did reduce throughput by
almost exactly half. Such a machine was indeed powerful; the large
B1000 systems weren't just nipping at the heels of the lower end of the
Large Systems line, they very often thoroughly waxed the B5900 (and
occasionally even the B68/6900) on batch program benchmarks.

As I remember the fables, there was a follow-on machine to the GEM being
designed at Santa Barbara, though I don't think a prototype was ever built.
An "indirect" memory scheme was rejected, because memory access at the
hardware/interpreter level was already handled more like I/O on more
conventional machines, and adding indirection to that scheme would have
slowed memory access down (or increased the cost of hardware) unacceptably.
Accordingly, this design would address the 16-megabit memory addressing
limitation by increasing the registers from 24 to 32 bits (thereby
increasing directly-addressable memory to 4 gigabits, or 512 MB).

The hardware would have been rather more expensive to build than the 24-bit
processor, and while a 512MB architectural limit might have seemed
astronomical at the time, it was clear that at some point that wouldn't be
enough, and that point was likely to come rather sooner than later.

Even going from 24-bit to 32-bit registers would have taken the B1000
irrevocably and permanently into direct competition with the Large System
both in terms of market position and to some degree in terms of construction
costs. (Heck, as stated before, it already was). I believe Burroughs
management felt that massively increasing the performance of the Small
System by increasing the registers to 32 bits would not have been a wise
marketing direction, and that it would have been even less wise to take the
Small System even further into Large System territory by going beyond 32
bits. The fables have it that the follow-on project was killed sometime
before the spring of 1984.

Efforts then began to forward-fit those Keen Features that the B1000
software provided but the Large System lacked into Large System software to
make migration from B1000 to Large System attractive. Some such features
made it (page mode in CANDE), some didn't (page lit in CANDE, full support
in the existing MCS's, including GEMCOS, for ANSI-74 COBOL communications
facility), and some made it part-way (PASS RD). Many rued the demise of
the B1000 line; it was a much-loved machine.

I don't remember any difference in performance during the evolution of the
top-of-the-line B1900; the B1995 was less costly to run than the dual-proc
models that preceded it, and had a much smaller footprint. ISTR that the
dual-processor machines were introduced with the B1900 line, but I don't
think the single-proc B1955 was any more powerful than the first B1860,
given equivalent memory. Both were somewhat better than the B1720,
however, as I recall.

In any case, in an environment in which multiple process-bound user programs
were competing for the processor, going to a two-processor machine came
very, very close indeed to doubling throughput.

-Chuck Stevens

"Andrew McWhirter" <SPAM....@bigfoot.com> wrote in message
news:3C8E7FF1...@bigfoot.com...

Terje Mathisen

unread,
Mar 12, 2002, 11:47:37 PM3/12/02
to
Bill Todd wrote:
>
> "glen herrmannsfeldt" <g...@ugcs.caltech.edu> wrote in message
> Modern x86 can be said to
> > have a 45 bit virtual address space, with 13 bits for segment selectors,
> > and 32 within each segment. (The low three bits of segment selectors
> > are ring and local/global, so I don't count them.) It is fairly easy
> > to write compilers to load segment selectors for both code and data
> > references.
>
> I'll at least start to entertain that suggestion when you show me some
> commonly-used compilers that will happily handle a terabyte array on x86
> (there could be some - I'm hardly a compiler junkie). Though (since I *am*

Watcom, which was one of the very first 386 compilers, did support
'large/far' addresses, i.e. segment + 32-bit offset, and some OS code
might even have used it somewhere.

(I.e. NetWare used Watcom for the C parts for many years.)

Terje

--
- <Terje.M...@hda.hydro.com>
"almost all programming can be viewed as an exercise in caching"

Bill Todd

unread,
Mar 13, 2002, 12:56:11 AM3/13/02
to

"Terje Mathisen" <terje.m...@hda.hydro.com> wrote in message
news:3C8ED9E9...@hda.hydro.com...

> Bill Todd wrote:
> >
> > "glen herrmannsfeldt" <g...@ugcs.caltech.edu> wrote in message
> > > Modern x86 can be said to
> > > have a 45 bit virtual address space, with 13 bits for segment selectors,
> > > and 32 within each segment. (The low three bits of segment selectors
> > > are ring and local/global, so I don't count them.) It is fairly easy
> > > to write compilers to load segment selectors for both code and data
> > > references.
> >
> > I'll at least start to entertain that suggestion when you show me some
> > commonly-used compilers that will happily handle a terabyte array on x86
> > (there could be some - I'm hardly a compiler junkie). Though (since I *am*
>
> Watcom, which was one of the very first 386 compilers, did support
> 'large/far' addresses, i.e. segment + 32-bit offset, and some OS code
> might even have used it somewhere.

That's why I was careful to specify a terabyte array as the test. If the
compiler was smart enough to handle such (via segmentation) *and* mung all
pointer arithmetic accordingly and transparently (with aliasing, that would
be challenging, especially between independently-compiled modules), then my
hat's off to it. Otherwise, it doesn't satisfy my test criteria (not that
they enjoy any special significance, but they are the subject under
discussion at the moment).

- bill

Patrick Schaaf

unread,
Mar 13, 2002, 1:42:56 AM3/13/02
to
g...@ugcs.caltech.edu (glen herrmannsfeldt) writes:

>Virtual is a little harder to define. Modern x86 can be said to
>have a 45 bit virtual address space, with 13 bits for segment selectors,
>and 32 within each segment. (The low three bits of segment selectors
>are ring and local/global, so I don't count them.) It is fairly easy
>to write compilers to load segment selectors for both code and data
>references. Now, even though the Pentium II had a 45 bit virtual
>and 36 bit physical address space, it only has a 32 bit MMU datapath.
>This makes it difficult to use the 45 bit virtual address space,
>but it is still there, anyway.

No, it is not. The segments all provide windows (base,limit) into a 32 bit
logical address space (called linear address space by Intel). This linear
address is then mapped through the 32bit virtual -> 36 bit physical
TLB/page table setup. At no point in time (without reloading the page
table base register) can you access more than the 32 bit virtual range.

Of course, reloading the page table base register gives you more than
32 bit "virtual" with or without use of segment registers.

Thus, in practise _and_ theory, the x86 has a 32 bit virtual address space,
regardless of segmentation.
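
In rough C, the two-stage translation Patrick describes (descriptor lookup and page walk stubbed out, limit checks omitted):

#include <stdint.h>

typedef struct { uint32_t base, limit; } segment_desc;

extern segment_desc *descriptor(uint16_t selector);  /* GDT/LDT lookup    */
extern uint64_t page_translate(uint32_t linear);     /* 32 -> 36-bit walk */

uint64_t logical_to_physical(uint16_t selector, uint32_t offset)
{
    segment_desc *d = descriptor(selector);
    /* segmentation folds selector:offset into a 32-bit linear
       address first; the sum wraps within 32 bits, which is what
       caps the reachable virtual range */
    uint32_t linear = d->base + offset;
    return page_translate(linear);
}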

best regards
Patrick

Sander Vesik

unread,
Mar 13, 2002, 8:57:19 AM3/13/02
to
In comp.arch Bill Todd <bill...@metrocast.net> wrote:
>
> "Sander Vesik" <san...@haldjas.folklore.ee> wrote in message
> news:10159704...@haldjas.folklore.ee...
>> In comp.arch Bill Todd <bill...@metrocast.net> wrote:
>
> ...
>
>> > I'll at least start to entertain that suggestion when you show me some
>> > commonly-used compilers that will happily handle a terabyte array on x86
>> > (there could be some - I'm hardly a compiler junkie). Though (since I *am*
>> > an old assembler junkie) I still would have major reservations about
>> > defining something like hardware virtual address space based on software
>> > trickery .
>> >
>>
>> It depends on how you define it - if it is "the maximal size of one
>> contiguous object" then yes, its always the same as the maximal size
>> of a segment. OTOH, if it "the maximum amount of memory directly
>> acessible using pointers" then it becomes num_segments x segment_size.
>
> Which only changes the question to, what's a pointer, and what's memory, and
> what's 'directly accessible'? If you define a pointer as the content of a
> general register (holding an address), then you're back to one segment max.
> If you define a pointer as something else manipulated by software (whether
> compiler-generated or otherwise) to address a range greater than that in a
> single segment, then you've included everything up to traditional
> disk-overlaying mechanisms and have started to get silly.

Or directly addressable could be 'is usable with a pointer deref / in a
jmp/call/return instruction' - x86 definitely had "far" jumps and returns
and I think direct "far" loads/stores where a pointer was not a 32 bit
quantity but a 16+32 bit quantity (I could be mis-remembering). At most
you had to load a segment register into one of the extra registers.

If it cannot be directly done at ISA level, then it's definitely not
directly addressable 8-)

>
> I'll admit that when you have *hardware* support for segment/address-style
> pointers as IIRC the 16-bit x86s did things may get more blurred - though
> such 'pointers' aren't otherwise directly manipulable by code the same way
> 'natural' pointers are and hence may or may not legitimately extend what
> constitutes the nominal virtual address space. I would say they didn't, and
> that up through the 286 the architecture had a 16-bit virtual address space
> (because anything beyond that wasn't homogeneous/transparent at the
> application code level), but will at least concede that other reasonable
> people could suggest otherwise.
>
> - bill
>
>
>

--

Joe Pfeiffer

unread,
Mar 13, 2002, 10:20:26 AM3/13/02
to
mailer...@bof.de (Patrick Schaaf) writes:

> g...@ugcs.caltech.edu (glen herrmannsfeldt) writes:
>
> >Virtual is a little harder to define. Modern x86 can be said to
> >have a 45 bit virtual address space, with 13 bits for segment selectors,
> >and 32 within each segment. (The low three bits of segment selectors
> >are ring and local/global, so I don't count them.) It is fairly easy
> >to write compilers to load segment selectors for both code and data
> >references. Now, even though the Pentium II had a 45 bit virtual
> >and 36 bit physical address space, it only has a 32 bit MMU datapath.
> >This makes it difficult to use the 45 bit virtual address space,
> >but it is still there, anyway.
>
> No, it is not. The segments all provide windows (base,limit) into a 32 bit
> logical address space (called linear address space by Intel). This linear
> address is then mapped through the 32bit virtual -> 36 bit physical
> TLB/page table setup. At no point in time (without reloading the page
> table base register) can you access more than the 32 bit virtual range.

No, he's right: a 45 bit, 2 dimensional, virtual space is available
to the programmer. That's the most reasonable way to describe the
size of the virtual address space.

It is true that only 4 GB of this can be mapped into the linear
address space at a time, but that can be managed through making
segments valid and invalid without reloading the PTBR.
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
Southwestern NM Regional Science and Engr Fair: http://www.nmsu.edu/~scifair

Stephen Fuld

unread,
Mar 13, 2002, 12:43:56 PM3/13/02
to

"Chuck Stevens" <charles...@unisys.com> wrote in message
news:a6m7cv$dis$1...@si05.rsvl.unisys.com...

snip

>Many rued the demise of
> the B1000 line; it was a much-loved machine.

Since this was one of the very few (perhaps only?) general purpose and
widely available bit addressed machines, it probably had an interesting
architecture. Is there anything available to read that explains the
architecture, particularly how the addressing worked? I would like to know
things like:

How were characters specified?
Could you define variables of say 29 bits long?
Were there alignment considerations, especially for performance?
Were there any other unique features besides addressing to the bit?
Performance comparison versus other similar machines - that is, did the bit
addressability cost much performance?

Etc.

It would be a shame if something as unique as this got lost in the mists of
history. I suspect there are lessons to learn that are still applicable
today.

J Ahlstrom

unread,
Mar 13, 2002, 1:19:50 PM3/13/02
to

Stephen Fuld wrote:

>
>
> snip


>
>
>
> Since this was one of the very few (perhaps only?) general purpose and
> widely available bit addressed machine, it probably had an interesting
> architecture. Is there anything available to read that explains the
> architecture, particularly how the addressing worked? I would like to know
> things like:
>
> How were characters specified?

Could depend on the S-Language - the language-dependent instruction set
interpreted by an S-Language-dependent microcoded interpreter - or on the
machine emulator. There were S-Languages for Fortran, Cobol, RPG,
SDL (a systems implementation language) and emulators for 1401, 1130, ...
Most (all?) S-languages used 8-bit chars (EBCDIC? ASCII-8? I don't remember).
The machine emulators used 6-bit chars, 'cause that's what the
machines being emulated used.

>
> Could you define variables of say 29 bits long?

You could define an S-language with say 29 bit integers
or write a microprogram that operated on 29 bit integers.
There was substantial hardware support for bit-variable-length
variables on bit-variable boundaries.

>
> Were there allignment considerations, especially for performance?

Not alignment considerations, because there was a LOT of hardware
to remove such considerations. The actual memory was 4 columns of 8 bits
each, or 32 bits wide; 1 byte was read from each column into a barrel
shifter and masker, with 0 to 24 bits masked and returned on a read or
written on a write.
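
Expressed in C, the shift-and-mask access might look like this; fetch32() is a hypothetical stand-in for the four-column memory, and big-endian bit numbering within the word is assumed:

#include <stdint.h>

extern uint32_t fetch32(uint32_t word_addr);   /* 32-bit-wide memory */

/* read a field of 1..24 bits at an arbitrary bit address */
uint32_t read_bit_field(uint32_t bit_addr, unsigned width)
{
    /* 64-bit window covering the two words the field can span */
    uint64_t window = ((uint64_t)fetch32(bit_addr / 32) << 32)
                    |  (uint64_t)fetch32(bit_addr / 32 + 1);
    unsigned shift = 64 - (bit_addr % 32) - width;   /* right-align it */
    return (uint32_t)((window >> shift) & ((1u << width) - 1));
}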

>
> Were there any other unique fetures besides addresing to the bit?

One S-Language (and hence microcoded interpreter)
per HLL (originally RPG was the same as COBOL,
later it was given its own S-Language and interpreter).
Supposedly different HLLs could greatly benefit from
specifically tailored S-languages. I was not convinced then
and am not convinced now. Except in the case of absolutely
minimizing code size and data size, which may have been
the whole point.

Variable length operands
Writeable control store
Execution of microcode completely from WCS, completely from RAM
or from a mixture. WCS was replaced by microcode cache in 1800 et seq.

>
> Performance comparison versus other similar machines - that is, did the bit
> addressability cost much performance?

It actually provided better performance when memory was small and expensive.
When memory became cheap and large, there was little value in making
fields the tiniest possible and compressing code as much as possible.
See
Wilner, Wayne in one of the Fall Joint Computer Conferences
in 1972 or so.
See also:
Organick, Elliot I.; Hinds, J. A. Interpreting Machines: Architecture
and Programming of the B1700-1800 Series (Operating and Programming
Systems Ser., Vol. 5), frequently available at abebooks.com

>
>
> Etc.
>
> It would be a shame if something as unique as this got lost in the mists of
> history. I suspect there are lessons to learn that are still applicable
> today.

Major lesson is that technology advances and any stake you put in the
ground becomes obsolete.

Bit Addressability had a place in a world of expensive RAM.
The 1700 et seq were designed in and for and
cost-effective in an age when RAM was expensive. They allowed small-medium
scale/performance machines to multiprogram several processes written in
different languages in several 10s to 100s of K bytes. When I went to
work for Burroughs at the 1700 plant in June 1974, Burroughs had just
lowered the price of 1700 RAM from $16,000 for 16K to $11,000 for
16K.

Different S-languages may have had a place in a world of
expensive RAM.

Neither of these is relevant today IMHO.

>
>
> --
> - Stephen Fuld
> e-mail address disguised to prevent spam

--
"Government is not reason. Government is not eloquence. It is force.
And, like fire, it is a dangerous servant and a fearful master."
G Washington


Lee Bertagnolli

unread,
Mar 13, 2002, 1:22:06 PM3/13/02
to
Stephen Fuld <s.f...@worldnet.att.net> wrote:

> snip

> Etc.

There are probably more than a few folks here who worked on B1000 systems;
I did for a time. Nifty little machine, with an abbreviated instruction
set (15-20 instructions?). Architecture-targeted interpreters were
written in this instruction set, to support higher-level languages.
The OS and system utilities were written in SDL, an Algol-like language
(surprise!) with BIT-type variables. The machine also supported a digit
(4-bit) addressable COBOL/RPG architecture, and a 36-bit FORTRAN
architecture. The instruction sets for each of these architectures were
optimized for frequency-based encoding, such that more frequently used
instructions required fewer bits to represent. This is the only system
I ever worked on where the compilers routinely generated executables
smaller in size than the corresponding source code. There was a hell of
a lot of good computer science in this machine, and it is a sorry state
of affairs that most of it is not to be found in contemporary architectures.
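
To make "frequency-based encoding" concrete, a decoder for such a
scheme looks roughly like the toy C below. The codes here are
invented for illustration; the actual S-language encodings differed:

    /* Toy prefix-code opcode decoder: frequent ops get short codes,
       rare ops get long ones.  fetch_bits(n) is assumed to return
       the next n bits of the instruction stream - cheap on a
       bit-addressed machine. */
    extern unsigned fetch_bits(unsigned n);

    enum op { OP_MOVE, OP_ADD, OP_BRANCH, OP_CALL, OP_RARE };

    enum op decode(void)
    {
        if (fetch_bits(1) == 0) return OP_MOVE;    /* code '0'    */
        if (fetch_bits(1) == 0) return OP_ADD;     /* code '10'   */
        if (fetch_bits(1) == 0) return OP_BRANCH;  /* code '110'  */
        if (fetch_bits(1) == 0) return OP_CALL;    /* code '1110' */
        return OP_RARE;                            /* code '1111' */
    }

A one-bit opcode for the most frequent instruction goes a long way
toward executables smaller than their source.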

--Lee

J Ahlstrom

unread,
Mar 13, 2002, 1:27:57 PM3/13/02
to
Sorry, my last long post snipped
the part about it being about
the B1700 and its follow-ons.

I promise to be more careful with the scissors.

JKA

J Ahlstrom

unread,
Mar 13, 2002, 1:32:46 PM3/13/02
to
About the B1700 and its follow-ons

Lee Bertagnolli wrote:

>
>
> There are probably more than a few folks here who worked on B1000 systems,
> I did for a time. Nifty little machine, with an abbreviated instruction
> set (15-20 instructions?). Architecture targetted interpreters were
> written in this instruction set, to support higher-level languages.
> OS and system utilities were written in SDL, an algol-similar language
> (surprise!) with BIT-type variables. The machine also supported a digit
> (4-bit) addressable COBOL/RPG architecture, and a 36-bit FORTRAN
> architecture. The instruction sets for each of these architectures were
> optimized for frequency-based encoding, such that more frequently used
> instructions required fewer bits to represent. This is the only system
> I ever worked on where the compilers routinely generated executables
> smaller in size that the corresponding source code. There was a hell of
> a lot of good computer science in this machine, and it is a sorry state
> of affairs that most of it is not to be found in comtemporary architectures.
>
> --Lee

In my earlier response on this topic I forgot that SDL allowed
you to define bit-varying variables down to 1 bit long (and up to ???).
One of the interesting things was that it took 32 bits to represent
that 1 bit. Each variable was stored in a "descriptor", a 32-bit
thing that described and contained the value for variables
of 1 to 24 bits, or described and contained a pointer to
arrays, structures, or values > 24 bits. IIRC.
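
In C terms the descriptor was something like the sketch below. The
field layout is my guess (and C bit-field layout is compiler-defined
anyway), so take it as illustration only:

    #include <stdint.h>

    /* Hypothetical rendering of an SDL descriptor: one 32-bit word
       that either holds a short (<= 24-bit) value in place, or holds
       a pointer to anything bigger.  Field widths guessed. */
    struct descriptor {
        uint32_t is_pointer : 1;   /* 0: value stored in 'payload'  */
        uint32_t length     : 7;   /* field length in bits (1..24)  */
        uint32_t payload    : 24;  /* the value, or a bit address   */
    };

Either way, that 1-bit variable cost a full 32-bit descriptor.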

Help, you 1700 jocks out there.

David Gay

unread,
Mar 11, 2002, 10:29:35 PM3/11/02
to

J Ahlstrom <jahl...@cisco.com> writes:
> > It would be a shame if something as unique as this got lost in the mists of
> > history. I suspect there are lessons to learn that are still applicable
> > today.
>
> Major lesson is that technology advances and any stake you put in the
> ground becomes obsolete.
> Bit addressability had a place in a world of expensive RAM.
> The 1700 et seq. were designed for, and were
> cost-effective in, an age when RAM was expensive. They allowed small-medium
> scale/performance machines to multiprogram several processes written in
> different languages in several 10s to 100s of K bytes. When I went to
> work for Burroughs at the 1700 plant in June 1974, Burroughs had just
> lowered the price of 1700 RAM from $16,000 for 16K to $11,000 for
> 16K.
>
> Different S-Languages may have had a place in a world of
> expensive RAM
>
> Neither of these is relevant today IMHO.

I think some of these space considerations apply to some kinds of embedded
computing (*) - I'm working w/ some hardware which has 16k of ROM and 1k of
RAM (OK, the latest incarnation has 128k of ROM and 4k of RAM). Some of
those old techniques are undoubtedly relevant...

See http://webs.cs.berkeley.edu/tos/

*: I'd hazard "lowest possible cost" and "lowest possible power" as the two
most relevant domains.

--
David Gay
dg...@acm.org

Simon Slavin

unread,
Mar 13, 2002, 7:09:37 PM3/13/02
to
In article <tdn3cz7...@shell01.TheWorld.com>,
Chris Jones <c...@theWorld.com> wrote:

> Terje Mathisen <terje.m...@hda.hydro.com> writes:
>
> [...]
>
> PS. It really didn't help that it had to take a full interrupt for each
> character sent or received over the terminal ports. :-(
>
> That's nothing: early DG Novas interrupted for every BIT (which may or
> may not have been an improvement over having to poll)!

Early Apple Macintoshes interrupted every tick (50th or 60th
of a second). At each tick they did the appropriate context
switching, and that handled so much data that it tended to
wipe the entire on-chip Level 2 memory cache, making it
useless for anything which lasted longer than a tick.

Eventually Apple wised up and rewrote their OS to handle
scheduling more efficiently. Upgrading to that version of
the OS magically made your Mac speed up by about 50% with no
extra hardware.

Simon.
--
http://www.hearsay.demon.co.uk | [One] thing that worries me about Bush and
No junk email please. | Blair's "war on terrorism" is: how will they
| know when they've won it ? -- Terry Jones
THE FRENCH WAS THERE

Gary Renaud

unread,
Mar 13, 2002, 9:30:36 PM3/13/02
to
"Since [the Burroughs 17xx/18xx/19xx] was one of the very few (perhaps only?) general purpose and
widely available bit addressed machine, it probably had an interesting architecture. Is there anything
available to read that explains the architecture, particularly how the addressing worked? "

Well, for the micro-architecture, you can go to: (Warning: this file is EIGHT MEG+.)

http://www.spies.com/~aek/pdf/burroughs/B1700_SysRefMan.pdf

S-languages (the architectures the various compilers compiled to) are tougher to find. The only
thing I have is 20 pages in the book:

Advances in Computer Architecture
Glenford J. Myers
New York, 1982
John Wiley & Sons, Inc.
ISBN 0-471-07878-6


--

"C- The analysis was competent, but to receive a higher grade, the business
plan must be feasible, and overnight package delivery is not."
-- Grade received by Fred Smith, who later founded FedEx

Gary Renaud (Coco too! Cami three!) gre...@acm.org <--- Please use this
For contact info, see: http://home.earthlink.net/~sleepyjackal/contact.htm

Keith Landry

unread,
Mar 14, 2002, 12:18:32 PM3/14/02
to
The B1000 was also made more Large Systems compatible with the addition
of WFL and the COBOL74B compiler.

Terje Mathisen

unread,
Mar 13, 2002, 4:02:53 PM3/13/02
to
Bill Todd wrote:
>
> "Terje Mathisen" <terje.m...@hda.hydro.com> wrote in message
> > Watcom, which was one of the very first 386 compilers, did support
> > 'large/far' addresses, i.e. segment + 32-bit offset, and some OS code
> > might even have used it somewhere.
>
> That's why I was careful to specify a terabyte array as the test. If the
> compiler was smart enough to handle such (via segmentation) *and* mung all
> pointer arithmetic accordingly and transparently (with aliasing, that would
> be challenging, especially between independently-compiled modules), then my
> hat's off to it. Otherwise, it doesn't satisfy my test criteria (not that
> they enjoy any special significance, but they are the subject under
> discussion at the moment).

It is quite possible that they did support that though, with some OS
help to set it up:

On 16-bit compiles you could allocate a 'huge' memory block, which
basically consisted of an array of segment descriptors, each pointing to
successive 64 kB blocks.

All array/pointer access to such a structure was transparently handled
by the compiler, i.e. it would increment the offset part of the pointer,
check for a carry, and if so increment the segment part.

Due to Intel having reserved the three least significant bits of segment
selectors (RPL and the table indicator), instead of the top bits, you had
to increment the segment field by some multiple of 8 to get to the next
descriptor, which is why you couldn't make it totally transparent even to
ugly code that would cast to unsigned long, do updates, and then cast back.

Anyway, since the x86 only handles up to 4 GB in the page tables, you
would need OS help to mark currently unmapped segments as invalid,
thereby causing a trap when using them.

I.e. AFAIK nobody has ever implemented this, but it is definitely
_possible_. :-)
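
For concreteness, the pointer arithmetic described above comes out
roughly like this in C (a sketch of the idea, not Watcom's actual
code - the struct and huge_add are made up for illustration):

    #include <stdint.h>

    /* 16-bit "huge" pointer: selector and offset kept separately.
       On offset overflow, step to the descriptor for the next 64 kB
       block.  The low 3 bits of an x86 selector are RPL/TI, so
       consecutive descriptors differ by 8, not by 1. */
    struct huge_ptr {
        uint16_t selector;    /* protected-mode segment selector */
        uint16_t offset;      /* offset within the 64 kB segment */
    };

    void huge_add(struct huge_ptr *p, uint16_t delta)
    {
        uint16_t old = p->offset;
        p->offset += delta;           /* 16-bit wraparound add     */
        if (p->offset < old)          /* carry out of the offset?  */
            p->selector += 8;         /* next descriptor in table  */
    }

That increment-by-8 is exactly why code that round-trips the pointer
through unsigned long arithmetic gets it wrong.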

Jeff Teunissen

unread,
Mar 18, 2002, 7:00:08 PM3/18/02
to
jmfb...@aol.com wrote:

> In article <3C8B0DD7...@ev1.net>,
> Charles Richmond <rich...@ev1.net> wrote:

[snip]

> >You can *never* have too many address bits, cpu registers,
> >disk space, etc.
>
> Yes, you can. Unlimited resources promote waste.

Having vast resources promotes the _use_ of those resources. Sometimes, this
is wasteful. Sometimes it's not.

Working in games (the market segment that _really_ drives hardware advancement
these days), it's not waste to use an extra 2MB of memory for the latest in
realism. Because the customers always want better, more immersive, games,
they're willing to get bigger machines that can give them that realism...and
you're not going to be playing "Return to Castle Wolfenstein" or "WarCraft
III" on a VAX.

Windows doesn't drive hardware to be bigger, faster, better -- they follow
along behind the gaming industry. Since people who play games keep getting
bigger, faster machines, Microsoft keeps building bigger, more
resource-intensive operating systems to use those capabilities.

Microsoft's use of the vast computing resources available on these machines is
mostly pointless, but not entirely wasteful.

> > .. But there has to be a physical limit on
> >such things. Wherever you drive the stake into the ground,
> >eventually the progress of technology will render the
> >decision to be too conservative. And "eventually" seems to
> >be occurring sooner and sooner these days...
>
> The computing biz has been terribly one-sided for the last decade.
> We're producing hardware to compensate for programming sloppiness.
> In fact, the hardware types depend on sloppiness. I suppose it's
> an aspect of the hardware biz getting divorced from the software
> biz.

The computing business was always one-sided. The difference is that now, it's
largely the home customers that decide where the market's going. The practices
of the Great Monopolist notwithstanding.

Perhaps one day the Ents* will stand up and find out that they are strong.

* J.R.R. Tolkien reference, for those who've been living under a rock

--
| Jeff Teunissen -=- Pres., Dusk To Dawn Computing -=- deek @ d2dc.net
| GPG: 1024D/9840105A 7102 808A 7733 C2F3 097B 161B 9222 DAB8 9840 105A
| Core developer, The QuakeForge Project http://www.quakeforge.net/
| Specializing in Debian GNU/Linux http://www.d2dc.net/~deek/

jmfb...@aol.com

unread,
Mar 19, 2002, 4:26:57 AM3/19/02
to
In article <3C967EA3...@d2dc.net>,

Jeff Teunissen <de...@d2dc.net> wrote:
>jmfb...@aol.com wrote:
>
>> In article <3C8B0DD7...@ev1.net>,
>> Charles Richmond <rich...@ev1.net> wrote:
>
>[snip]
>
>> >You can *never* have too many address bits, cpu registers,
>> >disk space, etc.
>>
>> Yes, you can. Unlimited resources promote waste.
>
>Having vast resources promotes the _use_ of those resources. Sometimes, this
>is wasteful. Sometimes it's not.
>
>Working in games (the market segment that _really_ drives hardware advancement
>these days),

Only in the retail stores. Hopefully, that may begin to change.

> ..it's not waste to use an extra 2MB of memory for the latest in
>realism. Because the customers always want better, more immersive, games,
>they're willing to get bigger machines that can give them that realism...and
>you're not going to be playing "Return to Castle Wolfenstein" or "WarCraft
>III" on a VAX.

Why not? We shipped games to get the kiddies interested in using
computers. Playing games forced them to learn about interfacing
with all kinds of computer gear; how to type; how to spell; how
to analyze and solve problems using their brains. Shipping
a HLL with the system software (something I never did manage
to convince the suits to do) would have hooked even more kids
into writing their own games. That teaches them to teach themselves
how to debug, learn a new language in a day, write code, how not
to write code, etc.


>
>Windows doesn't drive hardware to be bigger, faster, better -- they follow
>along behind the gaming industry. Since people who play games keep getting
>bigger, faster machines, Microsoft keeps building bigger, more
>resource-intensive operating systems to use those capabilities.
>
>Microsoft's use of the vast computing resources available on these machines is
>mostly pointless, but not entirely wasteful.

Huh. It would be a better approach if they didn't use all of
those resources and left some for the user address space.

>
>> > .. But there has to be a physical limit on
>> >such things. Wherever you drive the stake into the ground,
>> >eventually the progress of technology will render the
>> >decision to be too conservative. And "eventually" seems to
>> >be occurring sooner and sooner these days...
>>
>> The computing biz has been terribly one-sided for the last decade.
>> We're producing hardware to compensate for programming sloppiness.
>> In fact, the hardware types depend on sloppiness. I suppose it's
>> an aspect of the hardware biz getting divorced from the software
>> biz.
>
>The computing business was always one-sided.

Sigh! You don't know what the hell you're talking about...
or who you're talking to.

> The difference is that now, it's
>largely the home customers that decide where the market's going. The practices
>of the Great Monopolist notwithstanding.

Gawd..you do have PC-itis.

>
>Perhaps one day the Ents* will stand up and find out that they are strong.
>
>* J.R.R. Tolkien reference, for those who've been living under a rock
>

/BAH

Steve O'Hara-Smith

unread,
Mar 19, 2002, 5:06:22 PM3/19/02
to
On Tue, 19 Mar 02 09:26:57 GMT
jmfb...@aol.com wrote:

JC> In article <3C967EA3...@d2dc.net>,
JC> Jeff Teunissen <de...@d2dc.net> wrote:

JC> >you're not going to be playing "Return to Castle Wolfenstein" or "WarCraft
JC> >III" on a VAX.
JC>
JC> Why not? We shipped games to get the kiddies interested in using

	Mainly because vaxen don't usually come with hardware-assisted 3D
graphics subsystems and five-channel surround sound. Modern games need these
to obfuscate the way they resemble a cross between a bad Infocom clone and
Space Invaders in sophistication.

JC> computers. Playing games forced them to learn about interfacing
JC> with all kinds of computer gear; how to type; how to spell; how

Not these games.

JC> >Microsoft's use of the vast computing resources available on these machines is
JC> >mostly pointless, but not entirely wasteful.
JC>
JC> Huh. It would be a better approach if they didn't use all of
JC> those resources and left some for the user address space.

Very definitely yes.

JC> Gawd..you do have PC-itis.

Common disease these days that one.

--
C:>WIN | Directable Mirrors
The computer obeys and wins. |A Better Way To Focus The Sun
You lose and Bill collects. | licenses available - see:
| http://www.sohara.org/

Jeff Teunissen

unread,
Mar 19, 2002, 9:30:08 PM3/19/02
to
jmfb...@aol.com wrote:

[snip]

> >Working in games (the market segment that _really_ drives hardware
> >advancement these days),
>
> Only in the retail stores. Hopefully, that may begin to change.

Nope. It won't change, not any time soon anyway. And it's not only in the
retail stores.

There isn't a single business market segment that needs beefier hardware and
can afford to pay for it. Physicists and biologists get hardware because the
government pays for it, so they don't count. The enterprise will make do with
whatever's available, as long as they can get enough of it.

The only market segment that continually needs faster hardware is games. No
matter how good your machine is today, you will need an upgrade to play a game
that will be released a year from now at its highest quality settings.

This isn't planned obsolescence -- it's real obsolescence.

> > ..it's not waste to use an extra 2MB of memory for the latest in
> >realism. Because the customers always want better, more immersive,
> >games, they're willing to get bigger machines that can give them that
> >realism...and you're not going to be playing "Return to Castle
> >Wolfenstein" or "WarCraft III" on a VAX.
>
> Why not?

Because a VAX can't handle the demands of a modern game.

[snip]

> >Microsoft's use of the vast computing resources available on these
> >machines is mostly pointless, but not entirely wasteful.
>
> Huh. It would be a better approach if they didn't use all of
> those resources and left some for the user address space.

You're the one using Windows, not me.

[snip]

> >> The computing biz has been terribly one-sided for the last decade.
> >> We're producing hardware to compensate for programming sloppiness.
> >> In fact, the hardware types depend on sloppiness. I suppose it's
> >> an aspect of the hardware biz getting divorced from the software
> >> biz.
> >
> >The computing business was always one-sided.
>
> Sigh! You don't know what the hell you're talking about...
> or who you're talking to.

I know precisely who I'm talking to, and what I'm talking about, and I stand
by what I said.

The computing business has always been one-sided. Customers never drove the
industry more than they do now. DEC was an aberration, one of the few
companies that recognized, temporarily, that their customers were where their
money came from. They stopped listening. IBM never really listened much,
though they did turn stuff created by customers into IBM products.

The small computer companies actually listened to their customers, until they
were no longer small computer companies...and then most of them became failed
computer companies.

And Microsoft no longer has to listen to anybody, so they don't.

> >The difference is that now, it's largely the home customers that
> >decide where the market's going. The practices of the Great Monopolist
> >notwithstanding.
>
> Gawd..you do have PC-itis.

No, not really. I have eyes.

Randall Bart

unread,
Mar 19, 2002, 10:09:21 PM3/19/02
to
'Twas Tue, 19 Mar 2002 00:00:08 GMT when all comp.sys.unisys stood in awe as
Jeff Teunissen <de...@d2dc.net> uttered:

>Because the customers always want better, more immersive, games,
>they're willing to get bigger machines that can give them that realism...and
>you're not going to be playing "Return to Castle Wolfenstein" or "WarCraft
>III" on a VAX.

Rogue was on VAX. What more game do you need?
--
RB |\ © Randall Bart
aa |/ ad...@RandallBart.spam.com Bart...@att.spam.net
nr |\ Please reply without spam 1-917-715-0831
dt ||\ Here I am: http://RandallBart.com/ I LOVE YOU
a |/ He Won't Get Far: http://www.callahanonline.com/calhat9.htm
l |\ DOT-HS-808-065 MSMSMSMSMSMSMS=6/28/107 Joel 3:9-10
l |/ Terrorism: http://www.markfiore.com/animation/adterror.html

jmfb...@aol.com

unread,
Mar 20, 2002, 4:08:41 AM3/20/02
to
In article <20020319230622....@eircom.net>,

Steve O'Hara-Smith <ste...@eircom.net> wrote:
>On Tue, 19 Mar 02 09:26:57 GMT
>jmfb...@aol.com wrote:
>
>JC> In article <3C967EA3...@d2dc.net>,
>JC> Jeff Teunissen <de...@d2dc.net> wrote:
>
>JC> >you're not going to be playing "Return to Castle Wolfenstein" or "WarCraft
>JC> >III" on a VAX.
>JC>
>JC> Why not? We shipped games to get the kiddies interested in using
>
> Mainly because vaxen don't usually come with hardware-assisted 3D
>graphics subsystems and five-channel surround sound. Modern games need these
>to obfuscate the way they resemble a cross between a bad Infocom clone and
>Space Invaders in sophistication.

Heh. There are a number of "kids" who have installed TOPS-10 so
they can play Adventure. Who needs all of that sex appeal when
your mind's eye can provide better "graphics"?


>
>JC> computers. Playing games forced them to learn about interfacing
>JC> with all kinds of computer gear; how to type; how to spell; how
>
> Not these games.

yea. Well....


>
>JC> >Microsoft's use of the vast computing resources available on these machines is
>JC> >mostly pointless, but not entirely wasteful.
>JC>
>JC> Huh. It would be a better approach if they didn't use all of
>JC> those resources and left some for the user address space.
>
> Very definitely yes.
>
>JC> Gawd..you do have PC-itis.
>
> Common disease these days that one.
>

Well, I keep trying to point out (as others do to me) that there
is more to computing than killing ogres. ;-)

Nick Maclaren

unread,
Mar 20, 2002, 6:38:21 AM3/20/02
to

In article <a79ro9$72s$4...@bob.news.rcn.net>, jmfb...@aol.com writes:
|> >
|> Well, I keep trying to point out (as others do to me) that there
|> is more to computing than killing ogres. ;-)

Unfortunately, administering, tuning and even using a modern system
is becoming more and more like fighting ogres :-)

As I said a long time back, the difference between that and playing
Adventure (which dates the statement) is that Adventure is both
more fun and more rational. You are lost in a twisty little maze
of documentation pages, all unhelpful ....


Regards,
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QH, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679

jmfb...@aol.com

unread,
Mar 20, 2002, 4:49:00 AM3/20/02
to
In article <a79sbd$6k2$1...@pegasus.csx.cam.ac.uk>,

nm...@cus.cam.ac.uk (Nick Maclaren) wrote:
>
>In article <a79ro9$72s$4...@bob.news.rcn.net>, jmfb...@aol.com writes:
>|> >
>|> Well, I keep trying to point out (as others do to me) that there
>|> is more to computing than killing ogres. ;-)
>
>Unfortunately, administering, tuning and even using a modern system
>is becoming more and more like fighting ogres :-)

ROTFL. Uh, well, yea. I was thinking about the one-eyed kind :-))).

>
>As I said a long time back, the difference between that and playing
>Adventure (which dates the statement) is that Adventure is both
>more fun and more rational. You are lost in a twisty little maze
>of documentation pages, all unhelpful ....

<GRIN> I had to find another phone number to call AOL when they
disconnected the two I'd been using for five years. I kept
getting twisty little messages, all alike. Never did find the
magic combination of tones that led me to a working phone number.
I ended up going to the library and using their phone, their
number, and their access to find a phone # on AOL's web page.

jmfb...@aol.com

unread,
Mar 20, 2002, 5:08:02 AM3/20/02
to
In article <3C97F228...@d2dc.net>,
Jeff Teunissen <de...@d2dc.net> wrote:

HEY!!! Right margin 70.

>jmfb...@aol.com wrote:
>
>[snip]
>
>> >Working in games (the market segment that _really_ drives hardware
>> >advancement these days),
>>
>> Only in the retail stores. Hopefully, that may begin to change.
>
>Nope. It won't change, not any time soon anyway. And it's not only in the
>retail stores.
>
>There isn't a single business market segment
>that needs beefier hardware and
>can afford to pay for it. Physicists and biologists
>get hardware because the
>government pays for it, so they don't count.

Do you really, really not understand how the computing
biz evolves?

>The enterprise will make do with
>whatever's available, as long as they can get enough of it.
>
>The only market segment that continually needs
>faster hardware is games.

If you believe this, then you have no idea what is going on in
other parts of the computing world.

> No
>matter how good your machine is today, you will
>need an upgrade to play a game
>that will be released a year from now at its
>highest quality settings.
>
>This isn't planned obsolescence -- it's real obsolescence.

And I would call it getting too big for its britches.
Don't misunderstand me. The kiddies (even the old ones)
doing and playing those games are very sophisticated.
I lurk in a games newsgroup because their posts remind
me of our customers: smart, know how to learn and loud
when things don't work. The latter seems to be a missing
ingredient in the business world.

What's worrying me is that their growth appears to be getting stunted.

>> > ..it's not waste to use an extra 2MB of memory for the latest in
>> >realism. Because the customers always want better, more immersive,
>> >games, they're willing to get bigger machines that can give them that
>> >realism...and you're not going to be playing "Return to Castle
>> >Wolfenstein" or "WarCraft III" on a VAX.
>>
>> Why not?
>
>Because a VAX can't handle the demands of a modern game.

This doesn't make any sense. Are you confusing OS capability
with architectural possibility?

>
>[snip]
>
>> >Microsoft's use of the vast computing resources available on these
>> >machines is mostly pointless, but not entirely wasteful.
>>
>> Huh. It would be a better approach if they didn't use all of
>> those resources and left some for the user address space.
>
>You're the one using Windows, not me.

[puzzled emoticon] This proves something? Do you think this
is my choice? You might consider that it was my only choice.


>
>[snip]
>
>> >> The computing biz has been terribly one-sided for the last decade.
>> >> We're producing hardware to compensate for programming sloppiness.
>> >> In fact, the hardware types depend on sloppiness. I suppose it's
>> >> an aspect of the hardware biz getting divorced from the software
>> >> biz.
>> >
>> >The computing business was always one-sided.
>>
>> Sigh! You don't know what the hell you're talking about...
>> or who you're talking to.
>
>I know precisely who I'm talking to, and what I'm
>talking about, and I stand
>by what I said.

That's too bad. You have so much more to learn.

>
>The computing business has always been one-sided.
>Customers never drove the
>industry more than they do now. DEC was an
>aberration, one of the few
>companies that recognized, temporarily,

20 years is "temporarily"?

> .. that their customers were where their


>money came from. They stopped listening.

No. "They" did not stop listening; you are talking to one
of Them.

> ... IBM never really listened much,


>though they did turn stuff created by customers into IBM products.
>
>The small computer companies actually listened
>to their customers, until they
>were no longer small computer companies...and
>then most of them became failed
>computer companies.
>
>And Microsoft no longer has to listen to anybody, so they don't.

It didn't listen to anybody but itself, even in the beginning.

>
>> >The difference is that now, it's largely the home customers that
>> >decide where the market's going. The practices of the Great Monopolist
>> >notwithstanding.
>>
>> Gawd..you do have PC-itis.
>
>No, not really. I have eyes.
>

You're not looking in all the nooks and crannies of the computing
biz. It would help if you realized how hard/software products
evolve from a one-time implementation into a general distribution.

S Campion

unread,
Mar 20, 2002, 9:28:34 AM3/20/02
to
Randall Bart <Bart...@att.spam.net> wrote:

>'Twas Tue, 19 Mar 2002 00:00:08 GMT when all comp.sys.unisys stood in awe as
>Jeff Teunissen <de...@d2dc.net> uttered:
>
>>Because the customers always want better, more immersive, games,
>>they're willing to get bigger machines that can give them that realism...and
>>you're not going to be playing "Return to Castle Wolfenstein" or "WarCraft
>>III" on a VAX.
>
>Rogue was on VAX. What more game do you need?

How about a decent chess game?
Is Crafty available on VMS?
Fritz sure isn't.

S Campion

unread,
Mar 20, 2002, 9:31:15 AM3/20/02
to
jmfb...@aol.com wrote:

>> .. that their customers were where their
>>money came from. They stopped listening.
>
>No. "They" did not stop listening; you are talking to one
>of Them.

"Them " was senior management and the system architects.
IRC you were just the den mother /glorified clerk and tea lady.

You dont count.

Pete Fenelon

unread,
Mar 20, 2002, 9:39:28 AM3/20/02
to
In alt.folklore.computers Steve O'Hara-Smith <ste...@eircom.net> wrote:
> JC>
> JC> Why not? We shipped games to get the kiddies interested in using
>
> Mainly because vaxen don't usually come with hardware assisted 3D
> graphics subsystems and five channel surround sound. Modern games need these
> to obfuscate the way they resemble a cross between a bad infocom clone and
> space invaders in sophistication.
>

A PPOE of mine was called (at the time; it's since changed) Infocom
(many companies all over the world are...). It wasn't "that" Infocom, it
was an intranet/E-commerce/etc. sort of place.

Legend has it that they once had a chap there manning the support
phoneline. Chap takes a phone call, and it's obvious that he's answering
quite a lot of questions. Most of the answers are of the form "North,
east, east, get rock, kill snake" - in fact, a complete solution to an
Infocom (*that* Infocom) adventure...

pete
--
pe...@fenelon.com "Irk the purists, irk the purists, it's a right good laugh."
