Anton Ertl wrote:
> Some (many?) architectures of the 1960s (earlier? later?) have the
> feature that, when loading from an address, if a certain bit in the
> word is set, the CPU uses that word as the address to access, and this
> repeats until a word without this bit is found. At least that's how I
> understand the descriptions of this feature.
> The major question I have is why these architectures have this
> feature.
It solves the memory-access problem {arrays, nested arrays, linked lists, ...}.
The early machines had "insufficient" address-generation means, and used
indirection as a trick to get around their weak memory addressing modes.
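Roughly, the load behaves like this (a toy C sketch of the general idea,
not any particular machine's word format; the IND_BIT layout is made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical word format: top bit = "indirect", the rest is
       either data or an address into memory[].                     */
    #define IND_BIT   0x80000000u
    #define ADDR_MASK 0x7FFFFFFFu

    static uint32_t memory[16];

    /* Keep using the fetched word as the next address until a word
       without the indirect bit is found.                           */
    uint32_t load_indirect(uint32_t addr)
    {
        uint32_t w = memory[addr & ADDR_MASK];
        while (w & IND_BIT)
            w = memory[w & ADDR_MASK];
        return w;
    }

    int main(void)
    {
        memory[0] = IND_BIT | 1;          /* word 0 -> word 1 */
        memory[1] = IND_BIT | 2;          /* word 1 -> word 2 */
        memory[2] = 42;                   /* final data word  */
        printf("%u\n", (unsigned)load_indirect(0));  /* prints 42 */
        return 0;
    }

One memory operand in the instruction buys an arbitrary number of
dereferences, which is exactly what you want when the addressing modes
themselves are weak.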
> The only use I can come up with for the arbitrarily repeated
> indirection is the implementation of logic variables in Prolog.
> However, Prolog was first implemented in 1970, and it did not become a
> big thing until the 1980s (if then), so I doubt that this feature was
> implemented for Prolog.
Some of the indirection machines had the indirection bit located in the
word at the address generated, others had the indirection in the address
calculation itself. In the case of the PDP-10 there was a time-out counter,
and there were applications that worked fine up to a particular size and
then simply failed when the indirection watchdog counter kept "going off".
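In toy C terms, a bounded chase looks like this (the limit and word format
here are made up, not the PDP-10's actual mechanism):

    #include <stdint.h>

    #define IND_BIT   0x80000000u
    #define ADDR_MASK 0x7FFFFFFFu
    #define MAX_HOPS  64              /* arbitrary toy limit */

    static uint32_t memory[1u << 16];

    /* Returns 0 and the value on success, -1 if the chain is too
       deep (the "watchdog going off").                            */
    int load_bounded(uint32_t addr, uint32_t *out)
    {
        uint32_t w = memory[addr & ADDR_MASK];
        for (int hops = 0; w & IND_BIT; hops++) {
            if (hops >= MAX_HOPS)
                return -1;
            w = memory[w & ADDR_MASK];
        }
        *out = w;
        return 0;
    }

A program whose legitimate chains grow with its data eventually crosses
whatever limit the hardware picked, which is the failure mode above.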
> A use for a single indirection is the implementation of the memory
> management in the original MacOS: Each dynamically allocated memory
> block was referenced only from a single place (its handle), so that
> the block could be easily relocated. Only the address of the handle
> was freely passed around, and accessing the block then always required
> double indirection. MacOS was implemented on the 68000, which did not
> have the indirect bit; this demonstrates that the indirect bit is not
> necessary for that. Nevertheless, such a usage pattern might be seen
> as a reason to add the indirect bit. But is it enough?
Two things: 1) the indirect bit is insufficient, 2) optimizing compilers
got to the point where they were better at dereferencing linked lists than
the indirection machines were. {Reuse and all that rot.}
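For reference, the handle trick Anton mentions is plain double indirection
in software; a toy C version (not the actual Mac Memory Manager types or
calls):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* The handle is the one and only pointer to a movable block,
       so relocation only has to patch this single word.           */
    typedef struct { void *block; } Handle;

    /* "Compact the heap": move the block and fix up the handle.   */
    void relocate(Handle *h, size_t size)
    {
        void *nb = malloc(size);
        memcpy(nb, h->block, size);
        free(h->block);
        h->block = nb;               /* the only pointer that changes */
    }

    int main(void)
    {
        Handle h = { malloc(sizeof(int)) };
        *(int *)h.block = 7;

        relocate(&h, sizeof(int));        /* block moves; &h does not   */
        printf("%d\n", *(int *)h.block);  /* load &h, load block, load 7 */
        return 0;
    }

Every client access is two loads; a hardware indirect bit would fold that
into one memory operand, but as the 68000 showed, it is hardly required.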
> Were there any other usage patterns? What happened to them when the
> indirect bit went out of fashion?
Arrays, matrices, scatter, gather, lists, queues, stacks, arguments, ....
We did all sorts of arbitrarily-deep indirection in asm on the PDP-10 {KI}
when programming in college.
They went out of fashion when compilers got to the point where they could
hold the intermediate addresses in registers and short-circuit the amount
of indirection needed--improving performance by accessing fewer memory
locations.
The large register files of RISC spelled their doom.
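In compiler-eye terms, roughly (just a C illustration of the register-reuse
point, nothing more):

    struct node { struct node *next; int val; };

    /* Instead of re-chasing p->next->next on every reference, the
       compiler walks the chain once and keeps the intermediate
       address (and even the loaded value) in a register.          */
    int sum3(struct node *p)
    {
        struct node *q = p->next->next;   /* two loads, result held in a register */
        return q->val + q->val + q->val;  /* one more load, then reused           */
    }

A chase-the-chain-per-access indirect bit has to touch memory for every
link every time.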
> One other question is how the indirect bit works with stores. How do
> you change the first word in the chain, the last one, or any word in
> between?
In the machines where the indirection was at the instruction level, this
was simple; in the machines where the indirection was at the target (in the
data word itself), it was more difficult.
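One plausible reading of the target-style case, in the same made-up word
format as the earlier sketch: the store itself walks the chain to the final
word, while changing any link is an ordinary store of a new indirect word
into that slot, done without letting the hardware chase it.

    #include <stdint.h>

    #define IND_BIT   0x80000000u
    #define ADDR_MASK 0x7FFFFFFFu

    static uint32_t memory[16];

    /* Store through the chain: only the final, non-indirect word
       gets the new data.                                           */
    void store_indirect(uint32_t addr, uint32_t value)
    {
        uint32_t a = addr & ADDR_MASK;
        while (memory[a] & IND_BIT)
            a = memory[a] & ADDR_MASK;
        memory[a] = value & ~IND_BIT;
    }

    /* Repointing a word in the middle of the chain is a plain
       store of a new indirect word into that slot.                 */
    void set_link(uint32_t slot, uint32_t target)
    {
        memory[slot & ADDR_MASK] = IND_BIT | (target & ADDR_MASK);
    }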
> - anton
Summary::
First, the architects thought registers were expensive.
{Many doubled down with OP-Mem ISAs.}
The architects endowed memory addressing with insufficient capabilities.
{Many did so to satisfy the OP-Mem and Mem-OP ISAs they had imposed upon themselves.}
Then they added indirection to make up for the insufficient addressing.
And then everyone waited until RISC showed up (1980) before realizing their
error in register counts.
{Along about this time, compilers started getting good.}