On 16-07-11 18:51 , JF Mezei wrote:
> On 2016-07-11 06:44, Jeff Findley wrote:
>> much on a C-64 using a "higher level language" due to the overhead
>> incurred. Even in the 1980's, compilers, optimizers, and linkers
>> delivered code which was very "bloated". The 1960's would have been
>> far, far, worse.
How might a 1980's or 1960's linker deliver "bloated code"? In those
days, linkers merely arranged the given modules of code in memory and
adjusted the cross-references from one module to another accordingly;
linkers did not generate or modify code in other ways.
> Not quite. Older languages such as COBOL and Fortran provided reasonably
> efficient assembly.
>
> When IBM designed the 360 architecture, it had COBOL and FORTRAN in mind
> and provided instructions to make COBOL more efficient (fixed decimal
> math, as well as moving many characters in one operation). The simpler
> MOVE command in COBOL could result in a single MVC assembler instruction
> if it was a fixed length character field.
>
> However, the Apollo computer instruction set is very basic, so compilers
> would not have done a good job of making most efficient code generation.
> (not "bloated" but not "most efficient" either)
Hmm. On the other hand, the compiler's search for the most efficient
translation can be easier for a target machine with a very small
instruction set, because the search-space is smaller. Today, using
search-based optimization, one could probably make a compiler that
generates very efficient code for the AGC, given also that an AGC
program cannot be very large.
But in the 1960s, the choice of assembly language (and the somewhat
enhanced virtual-machine interpreted language) was no doubt a
conservative and risk-reducing decision.
> Also, because those computers had direct interfaces to devices/sensors,
> those interfaces become easier to manage in assembler since you have
> direct access to memory, registers and interrupts.
I don't think that is a major issue. All intermediate-level and
high-level languages used for embedded systems have standard or
implementation-specific means for easy, direct access to memory
locations and memory-mapped registers. In C, one just converts (casts)
the integer address of a memory location into a pointer, through which
the memory location can be read and written. In Ada one can do the same,
but one can also declare the memory location as a variable which has a
given address, and access the memory location through this variable,
without using pointer syntax.
> Note: today, things are different because compilers are smart enough to
> organise code to make use of various capabilities for the architecture
> (pipelining, out of order execution, multiple execution units,
> pre-fetching, branch prediction etc).
But the AGC had none of those architectural features, so this ability of
modern compilers cannot have played a role in the decision, then, to use
assembly language for the Apollo missions.
> If I read correctly, the read-only "rope" memory that contained the
> programs had to manually be connected to represent the right bits. This
> makes it much easier to work with assembly language since the "compiled"
> code with the opcodes/operands in bits can more easily be matched to the
> original source code.
I don't understand your reasoning here. Yes, the relationship between
assembly language source-code and bits in memory is more direct than for
higher-level languages, but for the AGC the rope-weaving was done by
dedicated staff, not by the programmers, so from the programmers' point
of view the only difference wrt the present-day method of automatically
transferring the assembler's output into a FLASH memory is that the
manual programming took longer and cost more.
Using assembly language may have made it easier to make local changes to
the program in such a way that only some of the "ropes" had to be
rewoven. Even today the maintenance of on-board spacecraft SW may
require that changes are made by local binary patches and not by
uploading a whole new software binary. If the SW is compiled into binary
from a high-level language, it may be difficult to control the compiler
so that a local change in the source language produces only a local
change in the binary, and not, say, a shifting of a large number of
binary instructions forwards or backwards in memory, which would require
a large patch.