
An Argument for Big-Endian: Packed Decimal


Quadibloc

Aug 5, 2019, 5:35:28 PM
Generally speaking, the big-endian versus little-endian argument has been
interminable, and has generated more heat than light.

Historically, since computers were developed in the Western world, storage of
characters in computer words was from "left" to "right", that is, from the most
significant part to the least significant part, basically without thought - it
was simply assumed to be the natural way to do it, as it corresponded to how
numbers and words were written by us on paper.

When advances in technology made it useful to build small-sized computers, with
a word length less than that of numbers it might be desired to calculate with on
those computers, some of those computers (one example is the Honeywell 316 and
related machines) would put the least significant part of a 32-bit number in the
first 16-bit word making it up.

This was done for the practical reason that adding two numbers together could
then proceed simply: go to the addresses where they're stored, get the least
significant parts first, add them, then use the carry from that for adding the
two most significant parts together.
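
The trick can be sketched in a few lines (a Python illustration of the idea, not any real machine's code): walking the words in storage order is exactly walking them in order of significance, so the carry propagates naturally.

```python
def add_multiword(a_words, b_words, word_bits=16):
    """Add two multi-word numbers stored least significant word first.

    Because the low word sits at the low address, the adder can start
    at the operands' addresses and ripple the carry forward without
    knowing the operand length in advance.
    """
    mask = (1 << word_bits) - 1
    result, carry = [], 0
    for a, b in zip(a_words, b_words):
        s = a + b + carry
        result.append(s & mask)
        carry = s >> word_bits
    return result, carry

# 0x0001FFFF + 0x00000001 = 0x00020000, stored low word first
words, carry = add_multiword([0xFFFF, 0x0001], [0x0001, 0x0000])
assert words == [0x0000, 0x0002] and carry == 0
```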

The PDP-11 computer was a 16-bit computer made by Digital Equipment Corporation.
This company made aggressively priced minicomputers, but the PDP-11 was intended
to be more than just a run-of-the-mill minicomputer; instead of using an old-
fashioned ISA like that of DEC's own PDP-8 and PDP-4/7/9/15, or 16-bit machines
like the Honeywell 316 or the Hewlett-Packard 2115A, it was to be modern and
advanced, and in some ways similar to mainframes like the IBM System/360.

So, because it was to have a low price, it handled 32-bit numbers with their
least significant 16 bits first.

To make it less like an ordinary minicomputer, and more like a mainframe, the
byte ordering within a 32-bit integer was going to be consistent - like on the
System/360. It couldn't be ABCD, but instead of being CDAB, it could be DCBA.
Just put the first character in a 16-bit word in the least significant part, and
give that part the lower byte address!
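
The three candidate layouts can be checked in a few lines of Python (an illustrative sketch using `int.to_bytes`; the letters follow the post's convention, with A the most significant byte):

```python
value = 0xAABBCCDD          # bytes A B C D, most significant first
lo, hi = value & 0xFFFF, value >> 16

# least significant 16-bit word first, big-endian bytes inside each word:
cdab = lo.to_bytes(2, 'big') + hi.to_bytes(2, 'big')
# least significant word first, least significant byte first inside it:
dcba = lo.to_bytes(2, 'little') + hi.to_bytes(2, 'little')

assert cdab == bytes([0xCC, 0xDD, 0xAA, 0xBB])   # inconsistent hybrid
assert dcba == bytes([0xDD, 0xCC, 0xBB, 0xAA])   # pure little-endian
assert dcba == value.to_bytes(4, 'little')        # same as byte-addressed LE
```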

This is how the idea of little-endian byte order was born.

The general belief today among computer scientists, it seems to me, is that the
industry should standardize on little-endian. Yes, big-endian seems more
familiar to people out of habit, but byte-order is just a convention.

Personally, I like big-endian myself. But I can see that it's an advantage that,
given that we address objects in memory by their lowest-addressed byte, the
place value for the first part is independent of length and thus always
constant.
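
That advantage is easy to demonstrate (a Python sketch, with `int.from_bytes` standing in for a memory read of a given width):

```python
# The same four bytes at one address, read at different widths:
mem = bytes([0x2A, 0x00, 0x00, 0x00])  # little-endian 42

# In little-endian, the byte at the object's address always has
# place value 1, no matter how wide the integer is read.
assert int.from_bytes(mem[:1], 'little') == 42
assert int.from_bytes(mem[:2], 'little') == 42
assert int.from_bytes(mem[:4], 'little') == 42

# Big-endian reads of the same prefix change meaning with the width:
assert int.from_bytes(mem[:1], 'big') == 42
assert int.from_bytes(mem[:2], 'big') == 0x2A00
```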

But having the world of binary numbers used for arithmetic follow one rule, and
numbers as strings of ASCII characters that are read in and printed follow the
opposite rule... only works if those two worlds really are separate.

When the IBM System/360 came out, of course the PDP-11, which it partly helped
to inspire, didn't exist yet. So little-endian didn't exist as an alternative to
consider yet.

But despite that, if there had been a choice, big-endian would have been the
right choice for System/360.

The IBM System/360 mainframe, just like the microcomputer on your desktop,
handled character strings in its memory, even if they were in EBCDIC instead of
ASCII, and it also handled binary two's complement integers. (That was a bit of
a departure for IBM, as the 7090 used sign-magnitude.)

But it also had something else that Intel and AMD x86-architecture processors
only give very limited support to. Packed decimal numbers.

A packed decimal number is a way to represent numbers inside a computer as a
sequence of decimal digits, each one represented by four binary bits with the
binary-coded-decimal representation of that number.

The System/360 had pack and unpack instructions, to convert between numbers in
character string form and numbers in packed decimal form. Obviously, having both
kinds of numbers in the same order, with the first digit in the lowest address,
made those instructions easier to implement.
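
PACK and UNPK behave roughly like this (a simplified Python sketch; the real S/360 instructions also carry a sign code in the low-order nibble of the last byte, which is omitted here):

```python
def pack(digits: str) -> bytes:
    """Pack a decimal digit string into BCD nibbles, first digit in the
    lowest-addressed (most significant) position -- the same big-endian
    order as the character string it came from."""
    if len(digits) % 2:
        digits = '0' + digits
    return bytes((int(digits[i]) << 4) | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

def unpack(bcd: bytes) -> str:
    """Expand each byte back into two decimal digit characters."""
    return ''.join(f'{b >> 4}{b & 0xF}' for b in bcd)

assert pack('1234') == bytes([0x12, 0x34])
assert unpack(pack('1234')) == '1234'
```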

But the System/360 also *did arithmetic* with packed decimal quantities. And
they often used the same ALU, with a decimal adjust feature using nibble carries
turned on, to perform that arithmetic. So having the most significant part on
the same side of a number for binary numbers and packed decimal numbers made
_that_ easier.
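
The decimal-adjust idea can be sketched for one byte (two digits) of a packed operand. This is a Python illustration, not S/360 microcode; the x86 DAA instruction embodies the same fix-up.

```python
def bcd_add_byte(a: int, b: int, carry_in: int = 0):
    """Add two packed-BCD bytes the way a binary ALU with a decimal
    adjust would: add the nibbles in binary, then add 6 to any nibble
    that exceeded 9, letting the nibble carry ripple upward."""
    lo = (a & 0xF) + (b & 0xF) + carry_in
    carry_lo = 1 if lo > 9 else 0
    lo = (lo + 6) & 0xF if lo > 9 else lo

    hi = (a >> 4) + (b >> 4) + carry_lo
    carry_out = 1 if hi > 9 else 0
    hi = (hi + 6) & 0xF if hi > 9 else hi

    return (hi << 4) | lo, carry_out

# 38 + 45 = 83 in packed BCD
assert bcd_add_byte(0x38, 0x45) == (0x83, 0)
```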

Packed decimal numbers as an arithmetic data type, therefore, form a bridge
between numbers in character representation and numbers in binary
representation. As there are benefits from their byte ordering matching the byte
ordering of _both_ of those forms of numbers, their presence in a computer
architecture makes big-endian ordering advantageous for that architecture.

John Savard

Ahem A Rivet's Shot

Aug 6, 2019, 3:00:16 AM
On Mon, 5 Aug 2019 14:35:27 -0700 (PDT)
Quadibloc <jsa...@ecn.ab.ca> wrote:

> Historically, since computers were developed in the Western world,
> storage of characters in computer words was from "left" to "right", that
> is, from the most significant part to the least significant part,
> basically without thought - it was simply assumed to be the natural way
> to do it, as it corresponded to how numbers and words were written by us
> on paper.

However a little extra thought will reveal that we generally
manipulate numbers starting at the least significant end (except for long
division).

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Scott Lurndal

Aug 6, 2019, 9:38:43 AM
Quadibloc <jsa...@ecn.ab.ca> writes:
>Generally speaking, the big-endian versus little-endian argument has been
>interminable, and has generated more heat than light.
>

>Packed decimal numbers as an arithmetic data type, therefore, form a bridge
>between numbers in character representation and numbers in binary
>representation. As there are benefits from their byte ordering matching the byte
>ordering of _both_ of those forms of numbers, their presence in a computer
>architecture makes big-endian ordering advantageous for that architecture.

Big-endian ordering also creates some difficulties when doing arithmetic,
particularly when the architecture supports variable length "packed decimal"
operands. Consider addition, for example; normally one starts with the least
significant digit and applies any carry to digits of more significance; but
that's not particularly efficient, especially if the result would overflow.

Burroughs medium systems would do the arithmetic from the most significant
digit (logically adding leading zeros to the shorter operand). This allowed
overflow to be detected quickly.

To do this, they add corresponding digits and if they carry, overflow is signaled.
If not and the result was '9', they increment a 9's counter and add the next pair
of digits; if a subsequent pair of digits produces a carry, and the nines counter
is nonzero, an overflow is signaled.
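
A sketch of that most-significant-first scheme, reconstructed from the description above (not from the actual B3500 flowchart, which the manual cited below shows in full):

```python
def add_msd_first(a_digits, b_digits):
    """Add two equal-length decimal numbers digit by digit, MOST
    significant digit first, signalling overflow early by tracking
    the run of 9s a later carry would have to ripple through."""
    result = [0] * len(a_digits)
    nines = 0  # length of the run of result 9s ending just before i
    for i, (a, b) in enumerate(zip(a_digits, b_digits)):
        carry, digit = divmod(a + b, 10)
        if carry:
            if nines == i:
                # the carry would ripple past the most significant
                # digit: overflow, detected without finishing the add
                raise OverflowError('sum does not fit')
            # ripple the carry back through the run of 9s
            for j in range(i - nines, i):
                result[j] = 0
            result[i - nines - 1] += 1
            nines = 0
        result[i] = digit
        nines = nines + 1 if digit == 9 else 0
    return result

# 199 + 001 = 200: the final carry ripples back through one 9
assert add_msd_first([1, 9, 9], [0, 0, 1]) == [2, 0, 0]
```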

One can find a flow-chart of the algorithm in the B3500 RefMan on Bitsavers.
1025475_B2500_B3500_RefMan_Oct69.pdf

Peter Flass

Aug 6, 2019, 4:51:00 PM
Big-endian has always made more sense to me, but this is a discussion that
will never be resolved. We’re long past the time where hardware
considerations of either are significant.

--
Pete

Scott Lurndal

Aug 6, 2019, 4:58:04 PM
That's not the case. The processor we just taped out can be configured
for big-endian or little-endian independently in all privilege levels of
the processor (ARM Aarch64). Big-endian is popular with networking appliances
(because multibyte data on IEEE 802.3 networks is interpreted as big-endian);
whereas little-endian is popular in most other contexts. Little-endian is
somewhat easier to work with once one becomes accustomed to it. In any case,
99% of all programmers don't know or care what the endianness is.

With Arm64 processors, linux looks at the codefile (ELF) header and tells the
kernel what endianness to set for the process based on header flags during
the exec(2) system call.
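
The flag in question is byte 5 of the ELF identification block (EI_DATA: 1 for little-endian, 2 for big-endian). A minimal check, sketched in Python:

```python
def elf_endianness(ident: bytes) -> str:
    """Given the first bytes of an ELF file, return its byte order.
    EI_DATA (byte 5 of e_ident) encodes 1 = little, 2 = big; this is
    the field the kernel consults when setting up the process."""
    if ident[:4] != b'\x7fELF':
        raise ValueError('not an ELF file')
    return {1: 'little', 2: 'big'}[ident[5]]

# a typical x86-64 binary starts 7f 45 4c 46 02 01 ...
assert elf_endianness(b'\x7f\x45\x4c\x46\x02\x01') == 'little'
```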

Dan Espen

Aug 6, 2019, 9:06:20 PM
sc...@slp53.sl.home (Scott Lurndal) writes:

> Quadibloc <jsa...@ecn.ab.ca> writes:
>>Generally speaking, the big-endian versus little-endian argument has been
>>interminable, and has generated more heat than light.
>>
>
>>Packed decimal numbers as an arithmetic data type, therefore, form a bridge
>>between numbers in character representation and numbers in binary
>>representation. As there are benefits from their byte ordering matching the byte
>>ordering of _both_ of those forms of numbers, their presence in a computer
>>architecture makes big-endian ordering advantageous for that architecture.
>
> Big-endian ordering also creates some difficulties when doing arithmetic,
> particularly when the architecture supports variable length "packed decimal"
> operands. Consider addition, for example; normally one starts with the least
> significant digit and applies any carry to digits of more significance; but
> that's not particularly efficient, especially if the result would overflow.

On S/360 the packed decimal instructions have the address of the first
byte and a length. Using that, the hardware should know where
ALL the digits are.

Typically, the packed number is unpacked and printed with
either UNPK, ED, or EDMK. I suppose ED and EDMK could be
engineered to reverse nibble order, but we'd need a new instruction
to UNPK in reverse order.

I can't see how little-endian is an optimization. How hard is it
for a machine to find all the bytes in a word or half word? Just because
the instruction references the first byte doesn't mean the machine
can't start working on the second or fourth.

--
Dan Espen

Quadibloc

Aug 6, 2019, 10:05:55 PM
On Tuesday, August 6, 2019 at 7:06:20 PM UTC-6, Dan Espen wrote:

> I can't see how little endian is an optimization, how hard is it
> for a machine to find all the bytes in a word, half word...just because
> the instruction references the first byte, doesn't mean the machine
> can't start working on the second or fourth.

Fundamentally, it isn't. But remember, this started back in the days of small-
scale integration, if not discrete transistors.

John Savard

Andrew Swallow

Aug 7, 2019, 4:24:06 AM
It is easy if you have index registers and instructions that can extract
the digit or byte. Where you have to shift and mask to extract and
insert it gets harder.

Dan Espen

Aug 7, 2019, 7:55:07 AM
Hardly an excuse. There were all kinds of machines built before then
that were big endian and worked fine, including IBM 14xx.

--
Dan Espen

Charlie Gibbs

Aug 7, 2019, 12:26:04 PM
On 2019-08-06, Peter Flass <peter...@yahoo.com> wrote:

> Big-endian has always made more sense to me, but this is a discussion that
> will never be resolved. We’re long past the time where hardware
> considerations of either are significant.

8080 One little,
8085 Two little,
8086 Three little-endians...
8088 Four little,
80186 Five little,
80286 Six little-endians...
80386 Seven little,
80386SX Eight little,
80486 Nine little-endians...
Pentium <segment fault>

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ "Alexa, define 'bugging'."

Gene Wirchenko

Aug 7, 2019, 4:21:51 PM
On 7 Aug 2019 16:25:51 GMT, Charlie Gibbs <cgi...@kltpzyxm.invalid>
wrote:

>On 2019-08-06, Peter Flass <peter...@yahoo.com> wrote:
>
>> Big-endian has always made more sense to me, but this is a discussion that
>> will never be resolved. We’re long past the time where hardware
>> considerations of either are significant.
>
>8080 One little,
>8085 Two little,

No Z-80?

>8086 Three little-endians...
>8088 Four little,
>80186 Five little,
>80286 Six little-endians...
>80386 Seven little,
>80386SX Eight little,
>80486 Nine little-endians...
>Pentium <segment fault>

Sincerely,

Gene Wirchenko

John Levine

Aug 7, 2019, 5:12:29 PM
In article <qiee6p$ev8$1...@dont-email.me>,
Dan Espen <dan1...@gmail.com> wrote:
>> Fundamentally, it isn't. But remember, this started back in the days of small-
>> scale integration, if not discrete transistors.
>
>Hardly an excuse. There were all kinds of machines built before then
>that were big endian and worked fine, including IBM 14xx.

As far as I can tell, until the DEC PDP-11 every machine that had a
byte order was big-endian. Even the -11 has some big-endianness in
some of its multi-word arithmetic hardware.

I have never been able to get a straight answer to the question of
where the PDP-11's byte order came from. The description in Computer
Engineering by Bell et al doesn't mention it. There's plenty of
speculation (please don't start) but no answer from anyone in a
position to know.



--
Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

Quadibloc

Aug 7, 2019, 5:43:46 PM
On Wednesday, August 7, 2019 at 3:12:29 PM UTC-6, John Levine wrote:

> I have never been able to get a straight answer to the question of
> where the PDP-11's byte order came from. The description in Computer
> Engineering by Bell et al doesn't mention it. There's plenty of
> speculation (please don't start) but no answer from anyone in a
> position to know.

That may be, but I would have thought the answer was obvious. The PDP-11 had
general registers. It was an attempt to get fancy, like a System/360. But DEC
was the low-price leader in the minicomputer business. So they wanted to keep
things simple, and have the least significant word of a 32-bit quantity first -
it saved a few transistors. And other machines like the Honeywell 316 did it
that way, it was a common practice.

Going little-endian for byte addressing and for packing characters in a word
gave you consistency "for free". That's an obvious enough fact that I don't
think it really is speculation to assume this is the cause.

Yes, it would be nice to have a primary source, but if, for whatever reason, the
specific engineer responsible wishes to remain anonymous, we may never know for
sure.

John Savard

Peter Flass

Aug 7, 2019, 6:06:57 PM
Quadibloc <jsa...@ecn.ab.ca> wrote:
>
> Yes, it would be nice to have a primary source, but if, for whatever reaon, the
> specific engineer responsible wishes to remain anonymous,

I can understand why!

--
Pete

John Levine

Aug 7, 2019, 7:13:40 PM
In article <bcb6a694-3379-4157...@googlegroups.com> you write:
>On Wednesday, August 7, 2019 at 3:12:29 PM UTC-6, John Levine wrote:
>
>> I have never been able to get a straight answer to the question of
>> where the PDP-11's byte order came from. ...

>That may be, but I would have thought the answer was obvious. The PDP-11 had
>general registers. It was an attempt to get fancy, like a System/360. But DEC
>was the low-price leader in the minicomputer business. So they wanted to keep
>things simple, and have the least significant word of a 32-bit quantity first -
>it saved a few transistors. And other machines like the Honeywell 316 did it
>that way, it was a common practice.

Actually, the early PDP-11's handled nothing bigger than a 16 bit
word. (I programmed a PDP-11/20 so I'm speaking from experience
here.) Any 32 bit values were done in software. As others have
noted, when they added some optional multiple precision hardware
later, they got the word order wrong, leading to "middle-endian".
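
The resulting layout, commonly called "PDP-endian", can be illustrated like this (a Python sketch of the order usually described for the PDP-11's later 32-bit formats: most significant word first, little-endian bytes within each word):

```python
def pdp11_bytes(value: int) -> bytes:
    """Lay out a 32-bit value middle-endian: the most significant
    16-bit word at the lower address, but each word's own bytes in
    little-endian order."""
    hi, lo = value >> 16, value & 0xFFFF
    return hi.to_bytes(2, 'little') + lo.to_bytes(2, 'little')

# 0x0A0B0C0D comes out as 0B 0A 0D 0C: neither big- nor little-endian
assert pdp11_bytes(0x0A0B0C0D) == bytes([0x0B, 0x0A, 0x0D, 0x0C])
```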

For character strings, byte order doesn't really matter, since you
address the bytes rather than the words they might be packed into.

The DDP 316 was word addressed. I never programmed one, but the
reference manual at bitsavers says that in double precision values,
the most significant 15 bits are in the first word and the less
significant 15 bits are in the second word. The sign is the high
bit of the first word. Floating point also put the high part first.

For that and other reasons I doubt it was an influence on DEC.

Charlie Gibbs

Aug 7, 2019, 11:11:38 PM
On 2019-08-07, Gene Wirchenko <ge...@shaw.ca> wrote:

> On 7 Aug 2019 16:25:51 GMT, Charlie Gibbs <cgi...@kltpzyxm.invalid>
> wrote:
>
>> 8080 One little,
>> 8085 Two little,
>
> No Z-80?

Maybe I could replace the 80186. But I can't have too many
or the rhyme wouldn't work.

There once was a fellow named Dan,
Whose limericks wouldn't quite scan.
When told this was so,
He replied, "Yes, I know,
But I always try to get as many syllables into the last line as I possibly can."

Bob Eager

Aug 8, 2019, 4:00:18 AM
On Wed, 07 Aug 2019 23:13:39 +0000, John Levine wrote:

> Actually, the early PDP-11's handled nothing bigger than a 16 bit word.
> (I programmed a PDP-11/20 so I'm speaking from experience here.) Any 32
> bit values were done in software.

Me too. It was luxury when we got bigger 11s that had hardware for it!

> The DDP 316 was word addressed. I never programmed one, but the
> reference manual at bitsavers says that in double precision values,
> the most significant 15 bits are in the first word and the less
> significant 15 bits are in the second word. The sign is the high bit of
> the first word. Floating point also put the high part first.

Yes, I was programming a 516 (pretty well the same) at around the same
time. Completely different. It was more fun for me though, because I
ended up modifying the CPU to change the instruction set a bit.



--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org

Gareth's was W7 now W10 Downstairs Computer

Aug 8, 2019, 4:58:21 AM
On 08/08/2019 04:10, Charlie Gibbs wrote:
> On 2019-08-07, Gene Wirchenko <ge...@shaw.ca> wrote:
>
>> On 7 Aug 2019 16:25:51 GMT, Charlie Gibbs <cgi...@kltpzyxm.invalid>
>> wrote:
>>
>>> 8080 One little,
>>> 8085 Two little,
>>
>> No Z-80?
>
> Maybe I could replace the 80186. But I can't have too many
> or the rhyme wouldn't work.
>
> There once was a fellow named Dan,
> Whose limericks wouldn't quite scan.
> When told this was so,
> He replied, "Yes, I know,
> But I always try to get as many syllables into the last line as I possibly can."
>

There was a young fellow from Trinity,
Who, though he could trill like a linnet, he
Could never complete
Any verses with feet,
For, as he said, "Look, you fools, what I am writing here is free verse."

Charlie Gibbs

Aug 8, 2019, 8:17:02 PM
On 2019-08-08, Gareth's was W7 now W10 Downstairs Computer
There once was a man from Purdue,
Whose limericks stopped at line two.

There once was a fellow named Dunn,

(There was something about someone named Nero,
but I can't find a reference...)

Peter Flass

Aug 8, 2019, 8:41:27 PM
Bob Eager <news...@eager.cx> wrote:
> On Wed, 07 Aug 2019 23:13:39 +0000, John Levine wrote:
>
>> Actually, the early PDP-11's handled nothing bigger than a 16 bit word.
>> (I programmed a PDP-11/20 so I'm speaking from experience here.) Any 32
>> bit values were done in software.
>
> Me too. It was luxury when we got bigger 11s that had hardware for it!

Didn’t the original -11s have to have the bootstrap toggled in from the
console? I seem to recall some machine where a boot prom was an option.

--
Pete

Gareth's was W7 now W10 Downstairs Computer

Aug 9, 2019, 4:30:17 AM
Altogether now ...

16701
26
12702
352
5211
105711 ...

Can't remember much more, but it got cemented in my
mind 48 years ago :-)


googlegroups jmfbahciv

Aug 10, 2019, 3:04:57 PM
Who did the design? If it was someone who left to start up Data General,
perhaps reading _The Soul of a Machine_ might give hints.

/BAH

googlegroups jmfbahciv

Aug 10, 2019, 3:08:15 PM
Toggling in a loader to read a paper tape was de rigueur in 1971.

/BAH

Bob Eager

Aug 10, 2019, 4:33:50 PM
On Sat, 10 Aug 2019 12:04:56 -0700, googlegroups jmfbahciv wrote:

> Who did the design? If it was someone who left to start up Data
> General,
> perhaps reading _The Soul of a Machine_ might give hints.

For the sake of those Googling:

"The Soul of a New Machine", by Tracy Kidder

John Levine

Aug 10, 2019, 4:46:39 PM
In article <7e3c27ab-a45f-4e9a...@googlegroups.com>,
googlegroups jmfbahciv <jmfbah...@gmail.com> wrote:
>> I have never been able to get a straight answer to the question of
>> where the PDP-11's byte order came from.

>Who did the design? If it was someone who left to start up Data General,
>perhaps reading _The Soul of a Machine_ might give hints.

No, Ed de Castro's 16-bit machine was apparently more like what ended
up as the Nova, which was word addressed. The PDP-11 was Gordon
Bell's project.

Bob Eager

Aug 10, 2019, 4:56:48 PM
On Sat, 10 Aug 2019 20:46:38 +0000, John Levine wrote:

> In article <7e3c27ab-a45f-4e9a...@googlegroups.com>,
> googlegroups jmfbahciv <jmfbah...@gmail.com> wrote:
>>> I have never been able to get a straight answer to the question of
>>> where the PDP-11's byte order came from.
>
>>Who did the design? If it was someone who left to start up Data
>>General,
>>perhaps reading _The Soul of a Machine_ might give hints.
>
> No, Ed de Castro's 16-bit machine was apparently more like what ended up
> as the Nova, which was word addressed. The PDP-11 was Gordon Bell's
> project.

The only DEC machine that de Castro was really involved in was the word
addressed PDP-8 (he was project manager).

Christian Brunschen

Aug 10, 2019, 5:08:25 PM
In article <qiferr$1inr$1...@gal.iecc.com>, John Levine <jo...@taugh.com> wrote:
>In article <qiee6p$ev8$1...@dont-email.me>,
>Dan Espen <dan1...@gmail.com> wrote:
>>> Fundamentally, it isn't. But remember, this started back in the days
>of small-
>>> scale integration, if not discrete transistors.
>>
>>Hardly an excuse. There were all kinds of machines built before then
>>that were big endian and worked fine, including IBM 14xx.
>
>As far as I can tell, until the DEC PDP-11 every machine that had a
>byte order was big-endian. Even the -11 has some big-endianness in
>some of its multi-word arithmetic hardware.
>
>I have never been able to get a straight answer to the question of
>where the PDP-11's byte order came from. The description in Computer
>Engineering by Bell et al doesn't mention it. There's plenty of
>speculation (please don't start) but no answer from anyone in a
>position to know.

An excellent discussion of endianness - both regarding bytes and bits -
is IEN137, https://www.ietf.org/rfc/ien/ien137.txt, published on 1st
April 1980, and this actually uses the PDP11 as one of the examples
(though no, it does not claim to describe its origin either).

// Christian


Quadibloc

Aug 10, 2019, 6:08:49 PM
On Saturday, August 10, 2019 at 1:04:57 PM UTC-6, googlegroups jmfbahciv wrote:
> On Wednesday, August 7, 2019 at 5:12:29 PM UTC-4, John Levine wrote:

> > I have never been able to get a straight answer to the question of
> > where the PDP-11's byte order came from. The description in Computer
> > Engineering by Bell et al doesn't mention it. There's plenty of
> > speculation (please don't start) but no answer from anyone in a
> > position to know.

> Who did the design? If it was someone who left to start up Data General,
> perhaps reading _The Soul of a Machine_ might give hints.

The Soul of a New Machine, by Tracy Kidder, was about a group of young designers
hired by Data General to design their 32-bit version of the Eclipse.

Edson de Castro, who designed the original Nova, and used to work at DEC... left
because DEC went with the PDP-11 instead of his design. So he wasn't working on
the PDP-11.

So I doubt it would have any information of use.

John Savard

Bob Eager

Aug 10, 2019, 7:17:00 PM
It doesn't. I re-read it recently.

Rich Alderson

unread,
Aug 13, 2019, 5:47:31 PM8/13/19
to
Quadibloc <jsa...@ecn.ab.ca> writes:

> Edson de Castro, who designed the original Nova, and used to work at
> DEC... left because DEC went with the PDP-11 instead of his design. So he
> wasn't working on the PDP-11.

This is the mythology. The real story is a little more complicated, as I
learned last October at the Nova@50 celebration hosted by Bruce Ray of Wild
Hare. I met a number of original DG folks.

I've seen the 16-bit design which EdC pitched to DEC management. It looks like
a 16-bit extended PDP-8, rather than either a Nova or a PDP-11.

The Nova came out and was in production for a full year before DEC started on
the PDP-11. It was beating out the PDP-8/i (then the latest model) for sales,
which is why DEC went with a 16-bit system.

So no, DEC didn't go with EdC's 16-bit design--but neither did Data General.

--
Rich Alderson ne...@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen

John Levine

Aug 13, 2019, 6:13:13 PM
In article <mdd1rxo...@panix5.panix.com>,
Rich Alderson <ne...@alderson.users.panix.com> wrote:
>I've seen the 16-bit design which EdC pitched to DEC management. It looks like
>a 16-bit extended PDP-8, rather than either a Nova or a PDP-11.

Was it more like a stretched -8 or a squashed -9?

>The Nova came out and was in production for a full year before DEC started on
>the PDP-11. It was beating out the PDP-8/i (then the latest model) for sales,
>which is why DEC went with a 16-bit system.

The Nova was a good computer, a lot of performance from a low cost
design. It eventually had the same problem DEC did, nobody wanted
a mini any more when they could get commodity micros.

Anne & Lynn Wheeler

Aug 13, 2019, 7:19:34 PM
John Levine <jo...@taugh.com> writes:
> The Nova was a good computer, a lot of performance from a low cost
> design. It eventually had the same problem DEC did, nobody wanted
> a mini any more when they could get commodity micros.

Old post, I've periodically reference, decade of DEC VAX sales, sliced
and diced by year, model, US/non-US ... around mid-80s drop in sales,
effectively shows mid-range market moving to large PCs and workstations
http://www.garlic.com/~lynn/2002f.html#0

IBM 4300s sold in the same mid-range market with similar numbers (modulo
large corporate orders of 100+ for distributed computing placing out
in departmental areas) ... and experienced similar decline.

I had a project that I was told needed some IBM content, so I went to order
some Series/1 ... but this was just after IBM had bought ROLM ... ROLM
ran DG machines, and a year's output of Series/1 was ordered to replace
the DG gear. I had known the person running the ROLM datacenter (even
before IBM bought them) several years earlier at IBM ... and horse-traded
some help with their build&test for some Series/1.

later in SCI meetings
https://en.wikipedia.org/wiki/Scalable_Coherent_Interface

both sequent and data general went with four i486 processor (shared
cache) boards ... with SCI memory interface (allowing up to 64 boards,
256 processors). Never did see any of the data general machines
https://en.wikipedia.org/wiki/Sequent_Computer_Systems
... but did see some number of the sequent
https://en.wikipedia.org/wiki/Sequent_Computer_Systems#NUMA

this was before IBM bought them and shut them down. I had left in the
early 90s ... but did some consulting for Steve Chen ... who
was CTO of Sequent at the time
https://en.wikipedia.org/wiki/Steve_Chen_(computer_engineer)

--
virtualization experience starting Jan1968, online at home since Mar1970

Quadibloc

Aug 13, 2019, 7:58:59 PM
On Tuesday, August 13, 2019 at 3:47:31 PM UTC-6, Rich Alderson wrote:

> So no, DEC didn't go with EdC's 16-bit design--but neither did Data General.

Well, of course he had to change it, at least a little! He couldn't use something
he designed while an employee of DEC, it would still have belonged to them. Plus,
since he had the PDP-11 to compete with, designing something that looked like the
HP 211x or the Honeywell x16 would not have been successful.

John Savard

Quadibloc

Aug 13, 2019, 8:00:34 PM
On Tuesday, August 13, 2019 at 3:47:31 PM UTC-6, Rich Alderson wrote:
> Quadibloc <jsa...@ecn.ab.ca> writes:

> > Edson de Castro, who designed the original Nova, and used to work at
> > DEC... left because DEC went with the PDP-11 instead of his design. So he
> > wasn't working on the PDP-11.

> This is the mythology.

Where did I say that the design of Edson de Castro that DEC didn't go with - was
the same design as he used later for the Nova?

John Savard

Alfred Falk

Aug 14, 2019, 3:34:29 PM
Rich Alderson <ne...@alderson.users.panix.com> wrote in
news:mdd1rxo...@panix5.panix.com:

> Quadibloc <jsa...@ecn.ab.ca> writes:
>
>> Edson de Castro, who designed the original Nova, and used to work at
>> DEC... left because DEC went with the PDP-11 instead of his design. So
>> he wasn't working on the PDP-11.
>
> This is the mythology. The real story is a little more complicated, as
> I learned last October at the Nova@50 celebration hosted by Bruce Ray
> of Wild Hare. I met a number of original DG folks.

Wild Hare? They still exist?! Wow!

> I've seen the 16-bit design which EdC pitched to DEC management. It
> looks like a 16-bit extended PDP-8, rather than either a Nova or a
> PDP-11.

I have heard that a factor in DEC's rejection of DeCastro's design was the
15" boards with 200+ contacts, which was deemed impractical to test and
manufacture reliably.

John Levine

Aug 14, 2019, 6:29:20 PM
In article <qj1no3$jj4$1...@dont-email.me>, Alfred Falk <aef...@telus.net> wrote:
>I have heard that a factor in DEC's rejection of DeCastro's design was the
>15" boards with 200+ contacts, which was deemed impractical to test and
>manufacture reliably.

I can believe it. The PDP-6 had large boards which had reliability problems. I gather
a standard piece of the kit was a rubber mallet to tap all the boards and reseat the
connectors.

The PDP-7 through PDP-10 and early PDP-11s used small Flip Chip cards and wire wrapped
backplanes which worked well.

Alan Bowler

Jul 21, 2020, 8:51:10 PM
On 2019-08-08 8:41 p.m., Peter Flass wrote:

> Didn’t the original -11s have to have the bootstrap toggled in from the
> console?

Yes. Did that a few times.

John Levine

Jul 21, 2020, 11:14:45 PM
In article <rf82ht$79o$1...@dont-email.me>,
True, but it generally didn't take 11 months to do.




--
Regards,
John Levine, jo...@taugh.com, Primary Perpetrator of "The Internet for Dummies",

Gareth Evans

Jul 22, 2020, 6:05:25 AM
Altogether now, off the top of my head, from 46 years ago ...

16701
26
12702
352

... but I forget the rest!

Quadibloc

Jul 22, 2020, 12:45:53 PM
I was able to Google a page which preserves it:

http://gunkies.org/wiki/PDP-11_Bootstrap_Loader

Set the address to 7744.

Then put the loader in:

7744 016701
7746 000026
7750 012702
7752 000352
7754 005211
7756 105711
7760 100376
7762 116162
7764 000002
7766 007400
7770 005267
7772 177756
7774 000765

and then the next word needs to contain the address of the boot device, which
may vary between systems.

John Savard

Gareth Evans

Jul 22, 2020, 2:50:50 PM
Yes, thanks, and that triggers my memory of being able to read
the machine code in its neat orthogonal octal!



robin....@gmail.com

Jul 22, 2020, 8:39:57 PM
ACE and DEUCE had a far better and well-designed system.
No loader was required to be in the machine.
All programs were self-loading.
Three punch cards contained 32 words to be loaded into a delay line.
The initial 4 rows of the first card contained 3 or 4 instructions
to read in the remaining 32 rows of the cards and to store them
directly in the high speed store.

Bob Eager

unread,
Jul 22, 2020, 8:43:01 PM7/22/20
to
As it happens, I have that printed on the programming card right beside
me on the desk (I am doing PDP-11 stuff).

Peter Flass

unread,
Jul 22, 2020, 10:56:51 PM7/22/20
to
The only real machine I know that couldn’t do this was the CDC 6400. S/360
had a three-card loader that you could IPL from the card reader.

--
Pete

Charlie Gibbs

unread,
Jul 23, 2020, 1:59:07 AM7/23/20
to
I wrote a one-card loader for the Univac 9300 (sort of like a 360/20)
which could load up to 16 subsequent cards (1280 bytes) into contiguous
memory locations and jump to the beginning. If you couldn't do what you
wanted in 1280 bytes (e.g. my 3-card memory dump), you could always write
a loader that would bring in whatever you wanted. (Mind you, that's what
the standard cards on the front of a binary deck did...)

--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <cgi...@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.

robin....@gmail.com

unread,
Jul 23, 2020, 4:55:55 AM7/23/20
to
On Thursday, July 23, 2020 at 12:56:51 PM UTC+10, Peter Flass wrote:
A number of the early machines had the loader in some form
of magnetic storage. I think that these computers had only
paper tape input, so the loader needed to assemble instruction
words from successive rows of paper tape.

As I said, ACE (1951) and DEUCE did not have any in-store
loader. For any program, the operator placed a deck of
program cards in the card reader and pressed a key on
the card reader, which cleared the high-speed store
and started the card reader.
The self-loading instructions on the first four rows of
a set of 3 cards were ordinary 32-bit binary-image instructions.
(as were the contents of the next 32 rows of the three cards).

Bob Eager

unread,
Jul 23, 2020, 5:33:49 AM7/23/20
to
Three cards? The Elliott 4100 series could do it on 12 rows (4 words) of
paper tape.

Gareth Evans

unread,
Jul 23, 2020, 7:08:38 AM7/23/20
to
Yes; should've thought of that, as I have PDP11 programming
cards from 1971 and also a fair selection of PDP8 and PDP11
manuals from that time.

PDP11 - undergraduate internship in the summer of 1971
and then a PDP11 assembler programmer from 1972 to 1981.

PDP8 - Final year at Essex in 1972 studying Computer
and Communications Engineering as a 3rd year specialism
of Electronics, we had the PDP8 as study examples both
for hardware and software.

Also, from 1971 I've a sales glossy, "Digital Products
and Applications" which covered the whole of their then
range.

(Reminder to self; must gird the loins and actually
assemble the PiDP8 and PiDP11 kits :-) )


Andy Walker

unread,
Jul 23, 2020, 8:09:38 AM7/23/20
to
On 23/07/2020 01:43, Bob Eager wrote:
> On Wed, 22 Jul 2020 09:45:52 -0700, Quadibloc wrote:
[...]
>> Set the address to 7744.
>> Then put the loader in:
>> 7744 016701 [...]
> As it happens, I have that printed on the programming card right beside
> me on the desk (I am doing PDP-11 stuff).

I was doing that in the '70s, and have now [of course] forgotten
all the details, but I have a vague memory that someone found a way of
shortening that by a couple of words, which makes a big difference when
you're setting switches by hand. Any takers?

Anyway, it was a great moment when the 11/05 was replaced by an
11/34, which just booted up into Unix "automatically".

The 11/05 was in a room carved out of a basement area previously
used as a "Gents". There had regularly been puddles on the floor, which
were ascribed to carelessness by the users. But when it was a computer
room, it became clear that the problem lay elsewhere. The University
wasn't disposed to do much about it -- yes, the building had been built
directly over a stream, yes, the [expensive] architect had bungled, but
what's the problem, really? "Well, there's a puddle half-way across
the floor, it's raining, the puddle is growing, and if it gets to where
the computers are, it could blow them, and cost ..." "Oh, /computers/.
Right." There was a pump installed within the hour.

--
Andy Walker,
Nottingham.

Ahem A Rivet's Shot

unread,
Jul 23, 2020, 8:30:03 AM7/23/20
to
On Wed, 22 Jul 2020 19:56:49 -0700
Peter Flass <peter...@yahoo.com> wrote:

> S/360 had a three-card loader that you could IPL from the card reader.

The 1130 had a single card loader that IPL'd from the card reader,
attempting to copy one in an 029 was a bad idea.

--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/

Gareth Evans

unread,
Jul 23, 2020, 9:57:57 AM7/23/20
to
On 23/07/2020 13:09, Andy Walker wrote:
> On 23/07/2020 01:43, Bob Eager wrote:
>> On Wed, 22 Jul 2020 09:45:52 -0700, Quadibloc wrote:
> [...]
>>> Set the address to 7744.
>>> Then put the loader in:
>>> 7744 016701 [...]
>> As it happens, I have that printed on the programming card right beside
>> me on the desk (I am doing PDP-11 stuff).
>
>     I was doing that in the '70s, and have now [of course] forgotten
> all the details, but I have a vague memory that someone found a way of
> shortening that by a couple of words, which makes a big difference when
> you're setting switches by hand.  Any takers?

You could save the input device directly in the first instruction as ...

12701
177560

instead of picking it up from the end, but can't see how
to save another word!


Scott Lurndal

unread,
Jul 23, 2020, 11:11:49 AM7/23/20
to
Burroughs medium systems used a one-card loader, and the 'load' button
was hardwired to issue a read from the card reader (or strapped to read
the first sector of disk) and transfer control to the buffer after the
read completed.

Joe Pfeiffer

unread,
Jul 23, 2020, 12:36:33 PM7/23/20
to
I remember the Nova with floppy disk drive had a two-instruction
loader -- first one started a DMA transfer from the disk to address 0,
second one was a jmp to itself.

Alfred Falk

unread,
Jul 23, 2020, 7:45:08 PM7/23/20
to
Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote in
news:1btuxyt...@pfeifferfamily.net:

> I remember the Nova with floppy disk drive had a two-instruction
> loader -- first one started a DMA transfer from the disk to address 0,
> second one was a jmp to itself.

Correct. In full:

Reset
000376
Examine Sets address for subsequent deposits
0601xx NIOS xx Start IO on device xx (typically 33 or 37)
Deposit
000377 JMP 377
Deposit Next
000376
Start

(I had to look that up.) Later machines with automatic load made it even
simpler:

Reset
xx device address (typically 33 or 37)
Start

Quadibloc

unread,
Jul 24, 2020, 6:51:27 AM7/24/20
to
On Saturday, August 10, 2019 at 1:04:57 PM UTC-6, googlegroups jmfbahciv wrote:

> Who did the design? If it was someone who left to start up Data General,
> perhaps reading _The Soul of a Machine_ might give hints.

The guy who left to start up Data General, Edson de Castro, designed the PDP-5
architecture which was later used in the PDP-8. He left specifically because they
rejected his design for their 16-bit computer, and made the PDP-11 instead.

John Savard

Quadibloc

unread,
Jul 24, 2020, 6:54:17 AM7/24/20
to
While this page doesn't explain when the decision to include little-endian was
made, it does name the originator of the design for the PDP-11: Harold
McFarland.

https://history-computer.com/ModernComputer/Electronic/PDP-11.html

John Savard

Quadibloc

unread,
Jul 24, 2020, 6:57:15 AM7/24/20
to
After all, since he worked on his original PDP-11 proposal while employed by
DEC, they could have sued him if he just used it without changes. And if the
PDP-11 was supposed to be 'better', then instead of an old-fashioned 'me-too'
design that looked like the HP 211x or the Honeywell 316/516, a new design that
he could argue was 'the best of both worlds' might be just the thing.

John Savard

Quadibloc

unread,
Jul 24, 2020, 7:00:54 AM7/24/20
to
And this page

http://hampage.hu/pdp-11/birth.html

has some more information.

John Savard

John Levine

unread,
Jul 24, 2020, 11:47:23 AM7/24/20
to
>> > > Edson de Castro, who designed the original Nova, and used to work at
>> > > DEC... left because DEC went with the PDP-11 instead of his design. So he
>> > > wasn't working on the PDP-11.
>>
>> > This is the mythology.
>>
>> Where did I say that the design of Edson de Castro that DEC didn't go with - was
>> the same design as he used later for the Nova?
>
>After all, since he worked on his original PDP-11 proposal while employed by
>DEC, they could have sued him if he just used it without changes. And if the
>PDP-11 was supposed to be 'better', then instead of an old-fashioned 'me-too'
>design that looked like the HP 211x or the Honeywell 316/516, a new design that
>he could argue was 'the best of both worlds' might be just the thing.

The Nova was a very good design for the time. It was straightforward
to program and more importantly, straightforward to manufacture. I
gather it was similar to the rejected PDP-X but not identical, which
isn't surprising since its designers had more time to reconsider and
refine the design.

John Levine

unread,
Jul 24, 2020, 11:54:58 AM7/24/20
to
In article <0972e4a9-2307-449f...@googlegroups.com>,
Quadibloc <jsa...@ecn.ab.ca> wrote:
>While this page doesn't explain when the decision to include little-endian was
>made, it does name the originator of the design for the PDP-11: Harold
>McFarland.
>
>https://history-computer.com/ModernComputer/Electronic/PDP-11.html

I have been trying for many years to find out why DEC used a
little-endian byte order in the PDP-11 rather than the big-endian that
all then-existing byte-addressable machines used.

So far nobody has 'fessed up, although lots of people have sent along
uninformed speculation. I know lots of reasons that DEC might have
chosen little-endian, but I don't know why they actually did.

Quadibloc

unread,
Jul 24, 2020, 12:23:19 PM7/24/20
to
On Friday, July 24, 2020 at 9:54:58 AM UTC-6, John Levine wrote:
> In article <0972e4a9-2307-449f...@googlegroups.com>,
> Quadibloc <jsa...@ecn.ab.ca> wrote:
> >While this page doesn't explain when the decision to include little-endian was
> >made, it does name the originator of the design for the PDP-11: Harold
> >McFarland.
> >
> >https://history-computer.com/ModernComputer/Electronic/PDP-11.html
>
> I have been trying for many years to find out why DEC used a
> little-endian byte order in the PDP-11 rather than the big-endian that
> all then-existing byte addressable machine used.

> So far nobody has 'fessed up, although lots of people have sent along
> uninformed speculation. I know lots of reasons that DEC might have
> chosen little-endian, but I don't know why they actually did.

Incidentally, from some other sources, I see that McFarland brought over the
"nucleus" of the PDP-11 design. In "What Have We Learned from the PDP-11",
Gordon Bell notes that giving it byte addressing was one of the significant ways
in which it was an improvement on previous architectures.

In the absence of a more detailed account of the development of the PDP-11, I'm
afraid that "speculation" is all we have.

John Savard

Gareth Evans

unread,
Jul 24, 2020, 12:28:13 PM7/24/20
to
On 24/07/2020 16:54, John Levine wrote:
> In article <0972e4a9-2307-449f...@googlegroups.com>,
> Quadibloc <jsa...@ecn.ab.ca> wrote:
>> While this page doesn't explain when the decision to include little-endian was
>> made, it does name the originator of the design for the PDP-11: Harold
>> McFarland.
>>
>> https://history-computer.com/ModernComputer/Electronic/PDP-11.html
>
> I have been trying for many years to find out why DEC used a
> little-endian byte order in the PDP-11 rather than the big-endian that
> all then-existing byte addressable machine used.
>
> So far nobody has 'fessed up, although lots of people have sent along
> uninformed speculation. I know lots of reasons that DEC might have
> chosen little-endian, but I don't know why they actually did.
>
>
>

But Little Endian is the obvious and logical approach,
otherwise when dealing with multi precision you have to fart
about to get to the least significant byte when presented
with the address of the variable in memory.
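The convenience being described can be sketched in Python (a hypothetical helper over byte lists, not any particular machine's code): with little-endian storage you start the add at the variable's base address, already sitting on the least significant byte, and ripple the carry toward increasing addresses.

```python
# Multi-precision addition over little-endian byte arrays. Index 0 is
# the variable's base address and also the least significant byte, so
# no offset arithmetic is needed to find where the add must begin.
def add_le(a, b):
    result, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        result.append(s & 0xFF)   # low byte of the digit sum
        carry = s >> 8            # carry into the next (higher) address
    return result, carry

# 0x1234 + 0x00FF = 0x1333, stored least significant byte first
print(add_le([0x34, 0x12], [0xFF, 0x00]))
```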

The only justification for Big Endian seems to come from
lazy programmers who need their hands held and nose
wiped when looking at core dumps.

HTH YMMV EOE

Peter Flass

unread,
Jul 24, 2020, 12:56:59 PM7/24/20
to
No, big-endian is the logical approach for any machine bigger than an 8008,
since words are operated on, and brought into the ALU, as a unit.
Big-endian is the way people think of numbers, otherwise you’d write
amounts like 00.000,1$ for a thousand dollars.

--
Pete

Quadibloc

unread,
Jul 24, 2020, 1:32:37 PM7/24/20
to
On Friday, July 24, 2020 at 10:28:13 AM UTC-6, Gareth Evans wrote:

> The only justification for Big Endian seems to come from
> lazy programmers who need their hands held and nose
> wiped when looking at core dumps.

In that case, Big-Endian does not go far enough. Clearly we also have to change
computers over from doing their arithmetic in incomprehensible binary to
calculating everything in decimal so that the contents of storage will make
sense.

We have the technology to do so now without wasting copious amounts of memory -
Chen-Ho encoding!

John Savard

Jon Elson

unread,
Jul 24, 2020, 1:32:43 PM7/24/20
to
John Levine wrote:


> The Nova was a very good design for the time. It was straightforward
> to program and more importantly, straightforward to manufacture. I
> gather it was similar to the rejected PDP-X but not identical, which
> isn't surprising since its designers had more time to reconsider and
> refine the design.
>
Having programmed both the Nova and the PDP-11, the PDP-11 was light years
ahead of the Nova. The Nova was basically a PDP-8 extended to a 16 bit
word. Yes, it had 4 registers, which was a huge improvement over the PDP-8.
But, I think the Nova was an indication that DeCastro was stuck in the past,
and just wanted to make slight improvements to the PDP-8. The PDP-11 was a
bold step into a new way of thinking, a whole new concept of CPU
architecture. Hopefully I don't have to detail the differences here.
(And, my memory of the Nova has faded largely into the distant past,
I last used one 45 years ago!)

Jon

Radey Shouman

unread,
Jul 24, 2020, 2:08:27 PM7/24/20
to
When writing Arabic* that's exactly how it's done. That is, the digit
order is the same as it is in Latin script, which is opposite the letter
order for words. Although "arabic numerals" is perhaps a misattribution,
it does seem that that is whence they were adopted by Europeans. It
seems they just got the endianness wrong -- to be consistent they should
have reversed the order.

Least significant digit first is also exactly how numbers are written
when they are being calculated by hand, which was the point of
zero-based notation in the first place.

* I would guess this is true of other languages using Arabic script,
like Farsi and Urdu. No idea about Hebrew.

Peter Flass

unread,
Jul 24, 2020, 2:45:23 PM7/24/20
to
Quadibloc <jsa...@ecn.ab.ca> wrote:
> On Friday, July 24, 2020 at 10:28:13 AM UTC-6, Gareth Evans wrote:
>
>> The only justification for Big Endian seems to come from
>> lazy programmers who need their hands held and nose
>> wiped when looking at core dumps.
>
> In that case, Big-Endian does not go far enough. Clearly we also have to change
> computers over from doing their arithmetic in incomprehensible binary to
> calculating everything in decimal so that the contents of storage will make
> sense.

Made sense years ago. It also allows unlimited-precision arithmetic, with
no concerns about word size.

>
> We have the technology to do so now without wasting copious amounts of memory -
> Chen-Ho encoding!
>
> John Savard
>



--
Pete

Peter Flass

unread,
Jul 24, 2020, 2:45:23 PM7/24/20
to
Reading Bell’s paper it appears that the Nova was based on the DEC “PDP-X”
design that was rejected in favor of the PDP-11.

--
Pete

John Levine

unread,
Jul 24, 2020, 3:28:53 PM7/24/20
to
In article <Z96dnZpUvreohobC...@giganews.com>,
Jon Elson <el...@pico-systems.com> wrote:
>> The Nova was a very good design for the time. It was straightforward
>> to program and more importantly, straightforward to manufacture. I
>> gather it was similar to the rejected PDP-X but not identical, which
>> isn't surprising since its designers had more time to reconsider and
>> refine the design.
>>
>Having programmed both the Nova and the PDP-11, the PDP-11 was light years
>ahead of the Nova. The Nova was basically a PDP-8 extended to a 16 bit
>word. Yes, it had 4 registers, which was a huge improvement over the PDP-8.
>But, I think the Nova was an indication that DeCastro was stuck in the past,
>and just wanted to make slight improvements to the PDP-8.

I agree the PDP-11 was more fun to program, but the Nova was an
excellent piece of engineering. It was two large circuit boards that
plugged into a simple backplane so it was cheap and easy to
manufacture. The Nova shipped in 1969 at a base price of $4K or $8K
for a usable configuration. The PDP-11/20 shipped a year later priced
at $20K, partly because it was a more complex design, but also because
it was built from many small modules that plugged into a custom wired
backplane.

Eventually the PDP-11 won as the extra complexity became cheaper to
implement, and the advantages of byte addressing became more
compelling, but DG sold a whole lot of computers, mostly through OEMs
who packaged them into something else so the end customer didn't do
the programming.

DEC's Omnibus in 1971 was a backplane for the PDP-8/E and DEC came
up with one for the PDP-11 around 1973.

John Levine

unread,
Jul 24, 2020, 3:30:37 PM7/24/20
to
In article <rff26m$d9g$1...@dont-email.me>,
Gareth Evans <headst...@yahoo.com> wrote:
>> So far nobody has 'fessed up, although lots of people have sent along
>> uninformed speculation. I know lots of reasons that DEC might have
>> chosen little-endian, but I don't know why they actually did.

>But Little Endian is the obvious and logical approach, ...

Like I said, we have plenty of uninformed speculation, but no actual
facts.

Personally, I've done more programming on little-endian machines than
big-endian but I don't feel strongly about it.

Scott Lurndal

unread,
Jul 24, 2020, 3:32:28 PM7/24/20
to
Peter Flass <peter...@yahoo.com> writes:
>Quadibloc <jsa...@ecn.ab.ca> wrote:
>> On Friday, July 24, 2020 at 10:28:13 AM UTC-6, Gareth Evans wrote:
>>
>>> The only justification for Big Endian seems to come from
>>> lazy programmers who need their hands held and nose
>>> wiped when looking at core dumps.
>>
>> In that case, Big-Endian does not go far enough. Clearly we also have to change
>> computers over from doing their arithmetic in incomprehensible binary to
>> calculating everything in decimal so that the contents of storage will make
>> sense.
>
>Made sense years ago. It also allows unlimited-precision arithmetic, with
>no concerns about word size.

Well, to be fair, even a BCD machine generally has an underlying
"word" size of some form. Burroughs medium systems, while supporting
operand lengths of up to 100 digits, still fetched from memory in 10 digit
(40 bit) chunks.

Note that on those systems the most significant digit had the lowest
address, and the hardware adder started with the most significant digit
(if the field lengths of both operands were different, the shorter had
implied leading zeros). The algorithm worked from MSD to LSD so that
it could catch overflow immediately and not store a partial result on
overflow. It did this by counting leading digit-by-digit sums of
9 until overflow was detected or a single-digit add summed to less
than 9 (proving that overflow of the receiving field was not possible)
before storing the result.
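That MSD-first rule can be sketched in Python (an illustration of the described behavior over equal-length digit lists, not Burroughs hardware; `msd_first_add` is a hypothetical name): a raw digit sum of exactly 9 only propagates a carry from below, so the first non-9 sum seen from the top decides whether the result can overflow.

```python
# MSD-first decimal add with early overflow detection. Inputs are
# equal-length digit lists, most significant digit first (the shorter
# operand is assumed already padded with leading zeros).
def msd_first_add(a, b):
    sums = [x + y for x, y in zip(a, b)]   # raw per-digit sums, MSD first
    # A sum of 9 merely passes an incoming carry along; a sum >= 10
    # generates a carry that would ripple up through any run of 9s.
    # So overflow occurs iff the first non-9 raw sum is >= 10.
    overflow = False
    for s in sums:
        if s != 9:
            overflow = s >= 10
            break
    if overflow:
        return None, True          # no partial result is stored
    # Safe to store: resolve carries LSD to MSD and emit the digits.
    digits, carry = [], 0
    for s in reversed(sums):
        t = s + carry
        digits.append(t % 10)
        carry = t // 10
    return list(reversed(digits)), False
```

For example, 49 + 51 gives raw sums (9, 10); the first non-9 sum is 10, so overflow is flagged before any digit is written back.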


Ahem A Rivet's Shot

unread,
Jul 24, 2020, 4:00:04 PM7/24/20
to
On Fri, 24 Jul 2020 10:32:36 -0700 (PDT)
Quadibloc <jsa...@ecn.ab.ca> wrote:

> In that case, Big-Endian does not go far enough. Clearly we also have to
> change computers over from doing their arithmetic in incomprehensible
> binary to calculating everything in decimal so that the contents of
> storage will make sense.

Careful now, you'll re-invent ENIAC soon.

Ahem A Rivet's Shot

unread,
Jul 24, 2020, 4:30:02 PM7/24/20
to
On Fri, 24 Jul 2020 15:54:57 -0000 (UTC)
John Levine <jo...@taugh.com> wrote:

> So far nobody has 'fessed up, although lots of people have sent along
> uninformed speculation. I know lots of reasons that DEC might have
> chosen little-endian, but I don't know why they actually did.

I don't know either but I'd bet it wasn't any single reason but
rather an assessment that on balance it was probably the better option.

Dan Espen

unread,
Jul 24, 2020, 4:40:49 PM7/24/20
to
Calculating everything in decimal makes a lot of sense. For business
applications the input and output must be decimal and the data has a
small amount of calculation done to it before it must be printed or
displayed.

Back in the day, I ran performance tests for decimal add vs. the CVB
and CVD instructions. The 2 conversion instructions took 10 times
longer than an add.

The S/360 does decimal arithmetic, but only after the data is packed.
That's a trade off of space for ease of use. Having started programming
on 14xx equipment, I miss the simplicity. I don't think the space
saving was a good trade-off.

The 14xx gave us numbers only limited in magnitude by storage size. The
key to that technology was an extra delimiting bit (the wordmark) in a
character. As much as I liked the word mark concept, I'm unsure that it
should have been carried forward.

I did a lot of Assembler and COBOL for business applications on S/360.
I can't think of any time it made sense to take our decimal input and
convert it to binary for efficiency reasons.

--
Dan Espen

robin....@gmail.com

unread,
Jul 24, 2020, 8:54:23 PM7/24/20
to
On Saturday, July 25, 2020 at 4:08:27 AM UTC+10, Radey Shouman wrote:
> Peter Flass <p......@yahoo.com> writes:
>
> > Gareth Evans <h......@yahoo.com> wrote:
> >> On 24/07/2020 16:54, John Levine wrote:
> >>> In article <0972e4a9-2307-449f-8d23-......@googlegroups.com>,
> >>> Quadibloc <j......@ecn.ab.ca> wrote:
> >>>> While this page doesn't explain when the decision to include
> >>>> little-endian was
> >>>> made, it does name the originator of the design for the PDP-11: Harold
> >>>> McFarland.
> >>>>
> >>>> https://history-computer.com/ModernComputer/Electronic/PDP-11.html
> >>>
> >>> I have been trying for many years to find out why DEC used a
> >>> little-endian byte order in the PDP-11 rather than the big-endian that
> >>> all then-existing byte addressable machine used.
> >>>
> >>> So far nobody has 'fessed up, although lots of people have sent along
> >>> uninformed speculation. I know lots of reasons that DEC might have
> >>> chosen little-endian, but I don't know why they actually did.
> >>>
> >>>
> >>>
> >>
> >> But Little Endian is the obvious and logical approach,
> >> otherwise when dealing with multi precision you have to fart
> >> about to get to the least significant byte when presented
> >> with the address of the variable in memory.

> > No, big-endian is the logical approach for any machine bigger that an 8008,
> > since words are operated on, and brought into the ALU, as a unit.
> > Big-endian is the way people think of numbers, otherwise you’d write
> > amounts like 00.000,1$ for a thousand dollars.
>
> When writing Arabic* that's exactly how it's done. That is, the digit
> order is the same as it is in Latin script, which is opposite the letter
> order for words. Although "arabic numerals" is perhaps a misattribution
> it does seem that that is whence they were adopted by Europeans. It
> seems they just got the endianness wrong -- to be consistent they should
> have reversed the order.
>
> Least significant digit first is also exactly how numbers are written
> when they are being calculated by hand, which was the point of
> zero-based notation in the first place.

ACE and DEUCE held values internally in "Chinese" binary, that is, with the
least-significant bit on the left.

Being serial machines, the least-significant bit of a word
emerged first from storage on its way to the adders.

Peter Flass

unread,
Jul 24, 2020, 9:03:06 PM7/24/20
to

Peter Flass

unread,
Jul 24, 2020, 10:43:16 PM7/24/20
to
John Levine <jo...@taugh.com> wrote:
> In article <Z96dnZpUvreohobC...@giganews.com>,
> Jon Elson <el...@pico-systems.com> wrote:
>>> The Nova was a very good design for the time. It was straightforward
>>> to program and more importantly, straightforward to manufacture. I
>>> gather it was similar to the rejected PDP-X but not identical, which
>>> isn't surprising since its designers had more time to reconsider and
>>> refine the design.
>>>
>> Having programmed both the Nova and the PDP-11, the PDP-11 was light years
>> ahead of the Nova. The Nova was basically a PDP-8 extended to a 16 bit
>> word. Yes, it had 4 registers, which was a huge improvement over the PDP-8.
>> But, I think the Nova was an indication that DeCastro was stuck in the past,
>> and just wanted to make slight improvements to the PDP-8.
>
> I agree the PDP-11 was more fun to program, but the Nova was an
> excellent piece of engineering. It was two large circuit boards that
> pluggeed into a simple backplane so it was cheap and easy to
> manufacture. The Nova shipped in 1969 at a base price of $4K or $8K
> for a usable configuration. The PDP-11/20 shipped a year later priced
> at $20K, partly because it was a more complex design, but also because
> it was built from many small modules that plugged into a custom wired
> backplane.

“Flip Chips?”

>
> Eventually the PDP-11 won as the extra complexity became cheaper to
> implement, and the advantages of byte addressing became more
> compelling, but DG sold a whole lot of computers, mostly through OEMs
> who packaged them into something else so the end customer didn't do
> the programming.
>
> DEC's Omnibus in 1971 was a backplane for the PDP-8/E and DEC came
> up with one for the PDP-11 around 1973.
>



--
Pete

rnet...@gmail.com

unread,
Jul 24, 2020, 11:14:47 PM7/24/20
to
On Friday, July 24, 2020 at 10:43:16 PM UTC-4, Peter Flass wrote:
> John Levine wrote:

> > I agree the PDP-11 was more fun to program...
> > ...it was a more complex design, but also because
> > it was built from many small modules that plugged into a custom wired
> > backplane.

> “Flip Chips?”

Not really. Flip Chips was DEC's name for their line of logic boards, which
became somewhat obsolete when 14- and 16-pin DIP packaged RTL, DTL, and TTL
circuits became available. The PDP-11 modules used the same physical board
design as the Flip Chips but housed much more complex circuits, designed
purely for the PDP-11.

Bob Netzlof

John Levine

unread,
Jul 24, 2020, 11:19:13 PM7/24/20
to
In article <128418668.617331888.3212...@news.eternal-september.org>,
Peter Flass <peter...@yahoo.com> wrote:
>> at $20K, partly because it was a more complex design, but also because
>> it was built from many small modules that plugged into a custom wired
>> backplane.
>
>“Flip Chips?”

Yes. DEC had very bad experiences with the large boards in the PDP-6
and it was a long time before they used them again.

Quadibloc

unread,
Jul 25, 2020, 4:55:41 AM7/25/20
to
On Friday, July 24, 2020 at 2:00:04 PM UTC-6, Ahem A Rivet's Shot wrote:
> On Fri, 24 Jul 2020 10:32:36 -0700 (PDT)
> Quadibloc <jsa...@ecn.ab.ca> wrote:
>
> > In that case, Big-Endian does not go far enough. Clearly we also have to
> > change computers over from doing their arithmetic in incomprehensible
> > binary to calculating everything in decimal so that the contents of
> > storage will make sense.
>
> Careful now, you'll re-invent ENIAC soon.

Although I did have tongue in cheek as I typed that, it was the IBM 7070 I was
thinking of.

ENIAC is interesting for other entirely different reasons: basically, it's what
we now call a dataflow architecture.

John Savard

Quadibloc

unread,
Jul 25, 2020, 4:57:39 AM7/25/20
to
On Friday, July 24, 2020 at 12:08:27 PM UTC-6, Radey Shouman wrote:
> Peter Flass <peter...@yahoo.com> writes:

> > Big-endian is the way people think of numbers, otherwise you’d write
> > amounts like 00.000,1$ for a thousand dollars.

> When writing Arabic* that's exactly how it's done. That is, the digit
> order is the same as it is in Latin script, which is opposite the letter
> order for words. Although "arabic numerals" is perhaps a misattribution
> it does seem that that is whence they were adopted by Europeans. It
> seems they just got the endianness wrong -- to be consistent they should
> have reversed the order.

Ah, but the languages of India are written from left to right, like ours. So the
Arabs got the order wrong, and Europeans corrected it!

But you _are_ right that it is going too far to say that big-endian is "the way
humans think of numbers".

John Savard

Quadibloc

unread,
Jul 25, 2020, 5:03:44 AM7/25/20
to
On Friday, July 24, 2020 at 10:56:59 AM UTC-6, Peter Flass wrote:

> No, big-endian is the logical approach for any machine bigger that an 8008,
> since words are operated on, and brought into the ALU, as a unit.
> Big-endian is the way people think of numbers, otherwise you’d write
> amounts like 00.000,1$ for a thousand dollars.

Basically, while Danny Cohen, in his article "On Holy Wars and a Plea for Peace"
is _basically_ right that the differences between big-endian and little-endian
are small: little-endian allows addition to start with the first piece of a
number read from memory, and big-endian allows comparison to begin with the
first piece of a number read from memory...

and I can't agree that "big-endian is the way people think of numbers" in the
sense of being how humans are hard-wired to think of numbers; the Arabs manage
to write decimal numbers in little-endian fashion...

because big-endian corresponds to the cultural convention for writing numbers in
the Western world, I noted one small advantage of big-endian that Danny Cohen
overlooked.

That's where I began this thread:

Inside a computer, numbers that appear in _text strings_ are in big-endian
order.

So if a computer happens to have a packed decimal data type and it's
big-endian, then there is no conflict between packed decimal numbers

- having the same endianness as character strings, for ease of conversion,

- having the same endianness as binary numbers, to allow the ALU to use common
circuitry for decimal and binary arithmetic with less overhead.

This is a small thing, but I think it's enough to tilt the balance.
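The alignment in question can be sketched in Python (an illustration, not any particular machine's format; `pack_bcd` is a hypothetical helper, and the sign nibble of real packed decimal is omitted): packing digits two per byte, most significant first, yields bytes in the same left-to-right order as the text string.

```python
# Pack a string of decimal digits two per byte, most significant digit
# first, in the style of big-endian packed decimal (no sign nibble).
def pack_bcd(s):
    if len(s) % 2:
        s = "0" + s            # pad to a whole number of bytes
    return bytes(int(s[i]) << 4 | int(s[i + 1])
                 for i in range(0, len(s), 2))

# The packed bytes read in the same order as the text string:
print(pack_bcd("1234").hex())   # 1234
```

On a big-endian machine those bytes also sit in the same order as the bytes of the equivalent binary integer; on a little-endian machine the packed form has to disagree with one or the other.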

John Savard

Ahem A Rivet's Shot

unread,
Jul 25, 2020, 6:00:04 AM7/25/20
to
On Sat, 25 Jul 2020 02:03:42 -0700 (PDT)
Quadibloc <jsa...@ecn.ab.ca> wrote:

> - having the same endianness as binary numbers, to allow the ALU to use
> common circuitry for decimal and binary arithmetic with less overhead.
>
> This is a small thing, but I think it's enough to tilt the balance.

I would think this a large thing in the days of discrete transistors
and SSI-to-MSI integration.

Gareth Evans

unread,
Jul 25, 2020, 6:57:23 AM7/25/20
to
The writing of numbers by humans has nothing whatsoever to do
with the internal operation of computers.

But am I right in remembering that the PDP11 had a mix of Big
and Little Endian; the basic processor being Little but some of
the floating point options being Big?


Alfred Falk

unread,
Jul 25, 2020, 11:48:32 AM7/25/20
to
John Levine <jo...@taugh.com> wrote in news:rfg8bf$1ceu$1...@gal.iecc.com:

> In article
> <128418668.617331888.3212...@news.eternal-september
> .org>, Peter Flass <peter...@yahoo.com> wrote:
>>> at $20K, partly because it was a more complex design, but also
>>> because it was built from many small modules that plugged into a
>>> custom wired backplane.
>>
>>“Flip Chips?”
>
> Yes. DEC had very bad experiences with the large boards in the PDP-6
> and it was a long time before they used them again.

I no longer remember where I heard or read it, but this person* claimed part
of DEC's reason for rejecting De Castro's design was that the large boards
would be impractical to test. I do remember seeing an ad somewhere from a
company claiming that they were the only one that could provide DG with a
test system for their very large (15" square), hence complex, boards.

* might have been a DG salesman

John Levine

Jul 25, 2020, 1:23:41 PM
In article <rfh36i$jsc$2...@dont-email.me>,
Gareth Evans <headst...@yahoo.com> wrote:
>But am I right in remembering that the PDP11 had a mix of Big
>and Little Endian; the basic processor being Little but some of
>the floating point options being Big?

Yes, some of the multi-word arithmetic formats were middle-endian.

They straightened it all out on the VAX, but the fact that it was
so hard to make things consistently little-endian tells us that
the argument that it's more "natural" is silly.
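For illustration, a C sketch of how such a mixed format lays out a 32-bit
value (this shows the often-cited word order, not any one DEC data type: the
more significant 16-bit word stored first, each word itself little-endian):

```c
#include <assert.h>
#include <stdint.h>

/* "Middle-endian" 32-bit layout: high 16-bit word first in memory, but
 * each word stored low byte first, so 0x0A0B0C0D lands as 0B 0A 0D 0C. */
void store_middle_endian(uint8_t out[4], uint32_t v)
{
    uint16_t hi = (uint16_t)(v >> 16), lo = (uint16_t)v;
    out[0] = (uint8_t)hi;         /* low byte of the high word  */
    out[1] = (uint8_t)(hi >> 8);  /* high byte of the high word */
    out[2] = (uint8_t)lo;         /* low byte of the low word   */
    out[3] = (uint8_t)(lo >> 8);  /* high byte of the low word  */
}
```

The result is neither readable front-to-back nor back-to-front, which is
rather the point.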

John Levine

Jul 25, 2020, 1:25:09 PM
In article <20200725105407.f6d5...@eircom.net>,
Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>> - having the same endianness as binary numbers, to allow the ALU to use
>> common circuitry for decimal and binary arithmetic with less overhead.
>>
>> This is a small thing, but I think it's enough to tilt the balance.
>
> I would think this a large thing in the days of discrete transistor
>and SSL to MSL integration.

The VAX had decimal number formats but by then transistors were pretty cheap.

Rich Alderson

Jul 25, 2020, 2:03:17 PM
rnet...@gmail.com writes:

> On Friday, July 24, 2020 at 10:43:16 PM UTC-4, Peter Flass wrote:
> > John Levine wrote:
>
> > > I agree the PDP-11 was more fun to program...
> > > ...it was a more complex design, but also because
> > > it was built from many small modules that plugged into a custom wired
> > > backplane.
> 
> > “Flip Chips?”
>
> Not really. Flip Chips was DEC's name for their line of logic boards which
> became somewhat obsolete when 14 and 16 pin DIP packaged RTL, DTL, and TTL
> circuits became available. The PDP-11 modules used the same physical board
> design as the Flip Chips but housed much more complex circuits, designed
> purely for the PDP-11.
>
> Bob Netzlof

If you look at the DEC documentation, FlipChip(TM) was applied to every
single-height board they produced, and sometimes to the dual, quad, and hex
height boards as well. The complexity of the circuits on the boards has
nothing to do with what they are called.

--
Rich Alderson ne...@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen

Rich Alderson

Jul 25, 2020, 2:12:59 PM
John Levine <jo...@taugh.com> writes:

> In article <128418668.617331888.3212...@news.eternal-september.org>,
> Peter Flass <peter...@yahoo.com> wrote:
> >> at $20K, partly because it was a more complex design, but also because
> >> it was built from many small modules that plugged into a custom wired
> >> backplane.
> >
> >“Flip Chips?”

> Yes. DEC had very bad experiences with the large boards in the PDP-6
> and it was a long time before they used them again.

The large boards in the PDP-6 were an outgrowth of the original System Module(TM)
family, DEC's first product. These were used to build the PDP-1, PDP-4, and
PDP-5 successfully.

The problem with the PDP-6 boards, which are sui generis, is that they require
solder connections along two opposite edges of the board. Repair/replacement of
a board in a unit (for example, a 36 bit register, where each board is a bit)
requires unsoldering every neighboring board and moving the wires out of the way
in order to reach the board in question.

I have seen an actual PDP-6 (at SAIL, autumn 1984) and had the issue demonstrated
to me by people who knew whereof they spoke, so I'm not guessing about this.

The FlipChip(TM) was invented for the PDP-7, as an alternative to the physically
larger System Modules(TM). *Every* DEC computer manufactured after the
introduction of the PDP-7 used FlipChips (whether with discrete transistors or
with integrated circuits, gold fingers on one side of the board or both, single,
dual, quad, or hex height) and exactly one kind of backplane connector.

*No* DEC computer after the PDP-6 used PDP-6 style boards.

Radey Shouman

Jul 25, 2020, 2:38:17 PM
Quadibloc <jsa...@ecn.ab.ca> writes:

> On Friday, July 24, 2020 at 12:08:27 PM UTC-6, Radey Shouman wrote:
>> Peter Flass <peter...@yahoo.com> writes:
>
>> > Big-endian is the way people think of numbers, otherwise you’d write
>> > amounts like 00.000,1$ for a thousand dollars.
>
>> When writing Arabic* that's exactly how it's done. That is, the digit
>> order is the same as it is in Latin script, which is opposite the letter
>> order for words. Although "arabic numerals" is perhaps a misattribution
>> it does seem that that is whence they were adopted by Europeans. It
>> seems they just got the endianness wrong -- to be consistent they should
>> have reversed the order.
>
> Ah, but the languages of India are written from left to right, like ours. So the
> Arabs got the order wrong, and Europeans corrected it!

That would require knowing how they write their numbers. I don't, do you?

> But you _are_ right that it is going too far to say that big-endian is "the way
> humans think of numbers".
>
> John Savard

--

Quadibloc

Jul 25, 2020, 2:39:54 PM
On Saturday, July 25, 2020 at 4:57:23 AM UTC-6, Gareth Evans wrote:
> On 25/07/2020 09:57, Quadibloc wrote:

> > But you _are_ right that it is going too far to say that big-endian is "the way
> > humans think of numbers".

> The writing of numbers by humans has nothing whatsoever to do
> with the internal operation of computers.

Computers operate on character strings internally, and those character strings
get sent to printers to produce text for humans to read.

> But am I right in remembering that the PDP11 had a mix of Big
> and Little Endian; the basic processor being Little but some of
> the floating point options being Big?

Yes. I think it was big-endian due to a lack of communication. As the PDP-11 was
the first attempt to make a computer consistently little-endian, this was a novel
and unfamiliar concept, so if the team making the floating-point add-on didn't
have that concept clearly and firmly explained to them, they would naturally
build the floating-point processor the way they assumed was right and natural,
the same as on every other computer.

John Savard

Quadibloc

Jul 25, 2020, 2:42:11 PM
On Saturday, July 25, 2020 at 12:38:17 PM UTC-6, Radey Shouman wrote:

> That would require knowing how they write their numbers. I don't, do you?

As a matter of fact, as a coin collector, yes, I know that old coins from India,
Thailand, Burma and so on that have dates written using their versions of the
digits from India have the most significant digit on the left.

John Savard

John Levine

Jul 25, 2020, 3:29:07 PM
In article <mddh7tv...@panix5.panix.com>,
Rich Alderson <ne...@alderson.users.panix.com> wrote:
>The problem with the PDP-6 boards, which are sui generis, is that they require
>solder connections along two opposite edges of the board. Repair/replacement of
>a board in a unit (for example, a 36 bit register, where each board is a bit)
>requires unsoldering every neighboring board and moving the wires out of the way
>in order to reach the board in question.
>
>I have seen an actual PDP-6 (at SAIL, autumn 1984) and had the issue demonstrated
>to me by people who knew wherof they spoke, so I'm not guessing about this.

I actually used a PDP-6 but never got to look inside the cabinet. I gather the
connectors were also a problem and sites had a rubber mallet to tap the boards
to reseat them when the computer was flaky.

>The FlipChip(TM) was invented for the PDP-7, as an alternative to the physically
>larger System Modules(TM). *Every* DEC computer manufactured after the
>introduction of the PDP-7 used FlipChips (whether with discrete transistors or
>with integrated circuits, gold fingers on one side of the board or both, single,
>dual, quad, or hex height) and exactly one kind of backplane connector.

For quite a while, yes, but the later models of PDP-8 and PDP-11 used
larger boards they didn't call flip chips.

Peter Flass

Jul 25, 2020, 5:43:37 PM
Radey Shouman <sho...@comcast.net> wrote:
> Quadibloc <jsa...@ecn.ab.ca> writes:
>
>> On Friday, July 24, 2020 at 12:08:27 PM UTC-6, Radey Shouman wrote:
>>> Peter Flass <peter...@yahoo.com> writes:
>>
>>>> Big-endian is the way people think of numbers, otherwise you’d write
>>>> amounts like 00.000,1$ for a thousand dollars.
>>
>>> When writing Arabic* that's exactly how it's done. That is, the digit
>>> order is the same as it is in Latin script, which is opposite the letter
>>> order for words. Although "arabic numerals" is perhaps a misattribution
>>> it does seem that that is whence they were adopted by Europeans. It
>>> seems they just got the endianness wrong -- to be consistent they should
>>> have reversed the order.
>>
>> Ah, but the languages of India are written from left to right, like ours. So the
>> Arabs got the order wrong, and Europeans corrected it!
>
> That would require knowing how they write their numbers. I don't, do you?

How are abacuses (abaci?) organized? My impression is big-endian.
>
>> But you _are_ right that it is going too far to say that big-endian is "the way
>> humans think of numbers".
>>
>> John Savard
>



--
Pete

Peter Flass

Jul 25, 2020, 5:43:41 PM
I thought I read somewhere that FP in the early -11s was originally
software only. (maybe from Bell?)


--
Pete

Bob Eager

Jul 25, 2020, 6:21:40 PM
Pretty sure our 11/20 didn't have it.

As it happens, I've been doing a disassembler for the PDP-11 over the
last couple of days. I identified the EIS floating point (four
instructions, very simple) and the FP-11 floating point (lots of
instructions, a bit arcane).



--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org

John Levine

Jul 25, 2020, 6:34:39 PM
In article <117289487.617406095.5134...@news.eternal-september.org> you write:
>I thought I read somewhere that FP in the early -11s was originally
>software only. (maybe from Bell?)

The FP hardware was always optional although on the larger machines
I think everyone got it.

The original 11/20 had an optional Unibus arithmetic device that
did multiply and divide and multiple-bit shifts, but no hardware FP at all.

On the 11/45 and 11/70 it was an additional full sized board that fit
into a reserved backplane slot. For the later 11/23 it was a couple of
chips that plugged into the CPU board.

John Levine

Jul 25, 2020, 6:35:48 PM
In article <2109925314.617405978.747...@news.eternal-september.org>,
>How are abacuses (abaci?) organized. My impression is big-endian.

It's entirely up to the operator. There's no mechanical connection between the columns.

I always used mine big-endian but if I were left handed I probably
would have done it little-endian.

John Levine

Jul 25, 2020, 6:49:02 PM
In article <ho3pji...@mid.individual.net>,
Bob Eager <news...@eager.cx> wrote:
>> I thought I read somewhere that FP in the early -11s was originally
>> software only. (maybe from Bell?)
>
>Pretty sure our 11/20 didn't have it.

I happen to have a pdp11/20/15/r20 handbook in my hand and I can
assure you there was no floating point available, only an optional
extended arithmetic peripheral that did what EIS later did, in a
different way.

Joe Pfeiffer

Jul 25, 2020, 6:54:48 PM
Peter Flass <peter...@yahoo.com> writes:

> Radey Shouman <sho...@comcast.net> wrote:
>> Quadibloc <jsa...@ecn.ab.ca> writes:
>>
>>> On Friday, July 24, 2020 at 12:08:27 PM UTC-6, Radey Shouman wrote:
>>>> Peter Flass <peter...@yahoo.com> writes:
>>>
>>>>> Big-endian is the way people think of numbers, otherwise you’d write
>>>>> amounts like 00.000,1$ for a thousand dollars.
>>>
>>>> When writing Arabic* that's exactly how it's done. That is, the digit
>>>> order is the same as it is in Latin script, which is opposite the letter
>>>> order for words. Although "arabic numerals" is perhaps a misattribution
>>>> it does seem that that is whence they were adopted by Europeans. It
>>>> seems they just got the endianness wrong -- to be consistent they should
>>>> have reversed the order.
>>>
>>> Ah, but the languages of India are written from left to right, like ours. So the
>>> Arabs got the order wrong, and Europeans corrected it!
>>
>> That would require knowing how they write their numbers. I don't, do you?
>
> How are abacuses (abaci?) organized. My impression is big-endian.

The normal convention is most significant digits are on the left, just
like writing numbers. But since there is no byte order in an abacus I
don't see how the terms even apply to it.