On Wednesday, December 22, 2021 at 1:40:37 AM UTC-7, Thomas Koenig wrote:
> There is one bit of difference: How summation is arranged does
> not matter (much) for performance as long as the whole word
> is read in in one chunk. For little-endian, you have to do
> some instruction for byte swapping for memcmp().
That's a new point, but comparison versus addition was noted at the
very beginning of the little-endian versus big-endian argument.
A while back, I pointed out another argument for big-endian that I
hadn't seen made before, though it may have been around.
If a computer has a packed decimal data type, then big-endian has
an advantage because:
Packed decimal is meant to facilitate arithmetic on numbers contained
in character strings. The character strings have big-endian order, so
packed decimal should also be big-endian to make the conversion
straightforward.
Because the zone bits are squeezed out, packed decimal resembles binary;
an arithmetic unit could handle both packed decimal and binary if you
could change how carries are generated at each four-bit boundary. But
that only works conveniently when packed decimal and binary have the
same endianness.
Of course, the IBM System/360, which has packed decimal, is big-endian.
This may not be a decisive point for everyone, since some will question
whether there is any need for packed decimal arithmetic in a computer
at all.
John Savard