
big endian vs. little endian, why?


Eric Chomko

Jul 27, 2005, 12:54:59 PM
I work with CCSDS data which is inherently big endian. My question is, how
did the nature of big endian vs. little endian get started? I know that
Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
big endian, but why? Why isn't everything big endian, which seems the more
intuitive way to represent data?

I have a Linux box that I'm considering replacing with an old Sun SPARC
just to have Unix in a big endian box. And with Apple switching, I'm not
considering them due to various reasons, of which a finicky C compiler is at
the top of the list.

Eric

Pascal Bourguignon

Jul 27, 2005, 1:30:46 PM

echom...@polaris.umuc.edu (Eric Chomko) writes:
> I work with CCSDS data which is inherently big endian. My question is, how
> did the nature of big endian vs. little endian get started? I know that
> Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
> big endian, but why? Why isn't everything big endian, which seems the more
> intuitive way to represent data?

Cheap manufacturers switched to little-endian to save one cycle when
adding big numbers.


> I have a Linux box that I'm considering replacing with an old Sun SPARC
> just to have Unix in a big endian box. And with Apple switching, I'm not
> considering them due to various reasons, of which a finicky C compiler is at
> the top of the list.

Well, perhaps you could run a big endian processor (sparc or ppc)
emulated by Qemu on stock Intel hardware. It might even run faster
(and will definitely be faster eventually) than running the good old
hardware.


--
__Pascal Bourguignon__ http://www.informatimago.com/
Kitty like plastic.
Confuses for litter box.
Don't leave tarp around.

mensa...@aol.com

Jul 27, 2005, 2:01:12 PM

Eric Chomko wrote:
> I work with CCSDS data which is inherently big endian. My question is, how
> did the nature of big endian vs. little endian get started? I know that
> Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
> big endian, but why?

The memory bus is not always the same size as the cpu registers.

Jonathan Griffitts

Jul 27, 2005, 2:33:08 PM
In article <dc8e93$1pk8$2...@news.ums.edu>, Eric Chomko writes

>I work with CCSDS data which is inherently big endian. My question is, how
>did the nature of big endian vs. little endian get started? I know that
>Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
>big endian, but why?

You need to look at the hardware implementations. Back when transistors
were much more expensive, a lot of machines used only byte-wide data
paths in the CPU. Arithmetic was done byte-serial. If data is fetched
little-endian and a byte at a time from memory, byte-serial addition is
easy to do, and the ALU stores the carry bit between bytes. It makes for
a clean and efficient architecture.


>Why isn't everything big endian, which seems the more
>intuitive way to represent data?

"More intuitive" is a human thing; it only matters if you're trying to
read core dumps. I've read plenty of such dumps in both big- and little-
endian format, and I don't recall it ever causing cognitive dissonance. I
suspect that would be true for most people working at that low level.

An interesting issue for big/little endian relates to people who abuse C
pointers. For example, using a (short *) pointer to read from a long
variable. This sorta works for little-endian machines, doesn't work at
all sensibly for big endian. I have heard the argument that this makes
little-endian representation superior, but that only works if you
believe that such coding styles are a good idea.

I suppose this trick might allow a *compiler* to generate more efficient
code in a few cases, which would be a legitimate advantage.
--
Jonathan Griffitts
AnyWare Engineering Boulder, CO, USA

Eric Sosman

Jul 27, 2005, 2:44:40 PM

Jonathan Griffitts wrote:
> In article <dc8e93$1pk8$2...@news.ums.edu>, Eric Chomko writes
>
>>I work with CCSDS data which is inherently big endian. My question is, how
>>did the nature of big endian vs. little endian get started? I know that
>>Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
>>big endian, but why?
>
>
> You need to look at the hardware implementations. Back when transistors
> were much more expensive, a lot of machines used only byte-wide data
> paths in the CPU. Arithmetic was done byte-serial. If data is fetched
> little-endian and a byte at a time from memory, byte-serial addition is
> easy to do, and the ALU stores the carry bit between bytes. It makes for
> a clean and efficient architecture.

IBM 1620 did arithmetic pretty much this way, but
stored numbers in big-endian form anyhow. You addressed
each value at its low-significance high-address digit,
and the ALU marched "backwards" through digits of greater
significance at lower addresses.

--
Eric....@sun.com

ararghmai...@now.at.arargh.com

Jul 27, 2005, 3:25:35 PM
On Wed, 27 Jul 2005 14:44:40 -0400, Eric Sosman <eric....@sun.com>
wrote:

So did the 14xx Systems.

--
ArarghMail507 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html

To reply by email, remove the garbage from the reply address.

Jonathan Griffitts

Jul 27, 2005, 3:39:03 PM
In article <dc8kmo$3gd$1...@news1brm.Central.Sun.COM>, Eric Sosman writes

Working backwards through a big-endian number works but it is
inconvenient if your instruction set includes immediate operands stored
in the instruction stream. If immediate operands are little-endian,
then the program-counter just marches through them in normal sequence.

Nick Spalding

Jul 27, 2005, 3:52:13 PM
Eric Sosman wrote, in <dc8kmo$3gd$1...@news1brm.Central.Sun.COM>
on Wed, 27 Jul 2005 14:44:40 -0400:

14xx family likewise.
--
Nick Spalding

Nick Spalding

Jul 27, 2005, 3:53:03 PM
ararghmai...@NOW.AT.arargh.com wrote, in
<trnfe1d0lvj5nplmh...@4ax.com>
on Wed, 27 Jul 2005 14:25:35 -0500:

> On Wed, 27 Jul 2005 14:44:40 -0400, Eric Sosman <eric....@sun.com>
> wrote:
>
> >
> >
> >Jonathan Griffitts wrote:
> >> In article <dc8e93$1pk8$2...@news.ums.edu>, Eric Chomko writes
> >>
> >>>I work with CCSDS data which is inherently big endian. My question is, how
> >>>did the nature of big endian vs. little endian get started? I know that
> >>>Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
> >>>big endian, but why?
> >>
> >>
> >> You need to look at the hardware implementations. Back when transistors
> >> were much more expensive, a lot of machines used only byte-wide data
> >> paths in the CPU. Arithmetic was done byte-serial. If data is fetched
> >> little-endian and a byte at a time from memory, byte-serial addition is
> >> easy to do, and the ALU stores the carry bit between bytes. It makes for
> >> a clean and efficient architecture.
> >
> > IBM 1620 did arithmetic pretty much this way, but
> >stored numbers in big-endian form anyhow. You addressed
> >each value at its low-significance high-address digit,
> >and the ALU marched "backwards" through digits of greater
> >significance at lower addresses.
>
> So did the 14xx Systems.

Beat me to it!
--
Nick Spalding

Joe Pfeiffer

Jul 27, 2005, 3:32:32 PM
echom...@polaris.umuc.edu (Eric Chomko) writes:

> I work with CCSDS data which is inherently big endian. My question
> is, how

I'm curious: why is it "inherently" big endian? I'm really trying,
but having a lot of trouble imagining a situation where the data
transfer really needs to be either big endian or little endian.
Networking protocols tend to be big endian, but that doesn't mean
there's anything inherent about it.

> did the nature of big endian vs. little endian get started? I know that
> Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
> big endian, but why? Why isn't everything big endian, which seems the more
> intuitive way to represent data?

Your intuition, maybe. My intuition is that less significant data
should be at lower addresses -- little endian. The fact that the
first byte-addressed machine I worked on was a VAX probably colors my
intuition here. :)

> I have a Linux box that I'm considering replacing with an old Sun SPARC
> just to have Unix in a big endian box. And with Apple switching, I'm not
> considering them due to various reasons, of which a finicky C compiler is at
> the top of the list.

Now I'm *really* curious -- what's going on that endianness is *that*
important? I can't remember ever being in a situation where htonl
and friends couldn't cure endianness quickly and easily...
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
skype: jjpfeifferjr

ararghmai...@now.at.arargh.com

Jul 27, 2005, 4:05:13 PM
On Wed, 27 Jul 2005 13:39:03 -0600, Jonathan Griffitts
<jgrif...@spamcop.net> wrote:

<snip>


>> IBM 1620 did arithmetic pretty much this way, but
>>stored numbers in big-endian form anyhow. You addressed
>>each value at its low-significance high-address digit,
>>and the ALU marched "backwards" through digits of greater
>>significance at lower addresses.
>
>Working backwards through a big-endian number works but it is
>inconvenient if your instruction set includes immediate operands stored
>in the instruction stream. If immediate operands are little-endian,
>then the program-counter just marches through them in normal sequence.

I presume that the way to handle that is to store the immediate
operands in the same order as the data operands.

Greg Menke

Jul 27, 2005, 4:08:23 PM
echom...@polaris.umuc.edu (Eric Chomko) writes:

>
> I have a Linux box that I'm considering replacing with an old Sun SPARC
> just to have Unix in a big endian box. And with Apple switching, I'm not
> considering them due to various reasons, of which a finicky C compiler is at
> the top of the list.
>
> Eric

Linux on the Mac is pretty nice, nearly as easy as on a Sun box. We use
Linux on a G4 for testing software that gets deployed onto embedded
PowerPC 750s - similar CPU architecture, so no endianness problems,
minimal risk of code generation issues, and Linux saves us the
development host licensing foolishness.

I like the Sun hardware a bit more than Apple, but a Sun machine that
will hang with a G5 is still a bit pricey.

Gregm

David Wade

Jul 27, 2005, 4:39:47 PM
"Eric Chomko" <echom...@polaris.umuc.edu> wrote in message
news:dc8e93$1pk8$2...@news.ums.edu...

> I work with CCSDS data which is inherently big endian. My question is, how
> did the nature of big endian vs. little endian get started? I know that
> Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
> big endian, but why? Why isn't everything big endian, which seems the more
> intuitive way to represent data?
>

Because the 8008 was designed by semiconductor designers, not computer
people, and they did things to minimise the silicon used. In this case it
allows microcode to be re-used. So if you have a short relative jump and a
long relative jump, you can re-use the microcode: you always add the
LSB in, and then just have a check at the end that says "is this a 16-bit
jump?" and if so get the MSB and add it into the upper half. Once the 8008 had
started that way, they continued for compatibility reasons. Contrast this
with the 6800 etc., which was designed to be easy to program. And note how the
68xx were less compatible as they evolved, but much nicer CPUs to code for.
Then wonder why the 80xxx series was so successful.

> I have a Linux box that I'm considering replacing with an old Sun SPARC
> just to have Unix in a big endian box. And with Apple switching, I'm not
> considering them due to various reasons, of which a finicky C compiler is at
> the top of the list.
>

Don't you get the same finicky compiler on Solaris (I assume you mean GCC)
unless you have mega bucks? SparcUltra 5's seem good value at present. I keep
thinking about updating my Sparc 10; it's ever so slow. However, Solaris has so
many "non UNIX" things I wonder whether to go Linux, but I have heard it's
sssslllllllooooowwwwwwww-ish on the SPARC...

> Eric


John Savard

Jul 27, 2005, 5:01:25 PM
On Wed, 27 Jul 2005 16:54:59 +0000 (UTC), echom...@polaris.umuc.edu
(Eric Chomko) wrote, in part:

>Why isn't everything big endian, which seems the more
>intuitive way to represent data?

I would *like* to say that it is because they manufacture computers in
Israel and the Arab world, where the language in use is written from
right to left, but numbers are still written with their most significant
digit on the left, so that in these countries they make computers that
are intuitive for them.

Unfortunately, that is not the case. It is true the Intel 8087 was
designed in Israel, but that isn't why the 8008 and successors were
little-endian.

As has already been noted, some computers with a 16-bit word length,
when handling 32-bit integers, placed the least significant word in a
lower memory location, so that they could somewhat more simply fetch the
least significant part first, and then carry later. The Honeywell 316
computer is one example of this.

But such computers still stored character data in 16-bit words so that
the first character was in the leftmost or most significant part.

Then, one fine morning, someone at the Digital Equipment Corporation got
a brilliant idea.

We're already making old-fashioned 12-bit and 18-bit computers. Now, the
world is going to the 8-bit character. Let's make a 16-bit computer that
is _not_ old-fashioned - people who want an old-fashioned 16-bit
computer can already buy a Honeywell 316 - we'll make one that is new
and exciting.

And they did. The PDP-11 architecture was very innovative.

And somewhere along the way they had *another* brilliant idea. No, we
won't add extra transistors to our design so that we can add 1 to the
address, and fetch the least significant part first. Because we can be
consistent *without* adding an extra transistor! Yes! We'll just put the
_second_ character of two characters in a 16-bit word in the most
significant part, so that the four characters in a 32-bit number will be
consistently backwards!

And, lo and behold, the idea of consistent little-endian computing was
born. That the idea was so bizarre and confusing that it was not
understood by the engineers who then made the PDP-11's big-endian
floating-point unit does not deprive DEC and the PDP-11 of the glory -
or, perhaps, the blame - of inflicting this upon the world.

John Savard
http://www.quadibloc.com/index.html

John Savard

Jul 27, 2005, 5:06:39 PM
On Wed, 27 Jul 2005 21:01:25 GMT, jsa...@excxn.aNOSPAMb.cdn.invalid
(John Savard) wrote, in part:

>And, lo and behold, the idea of consistent little-endian computing was
>born. That the idea was so bizarre and confusing that it was not
>understood by the engineers who then made the PDP-11's big-endian
>floating-point unit does not deprive DEC and the PDP-11 of the glory -
>or, perhaps, the blame - of inflicting this upon the world.

"And now, the _rest_ of the story."

Back in the 1970s, IBM was very dominant in the field of mainframe
computing, which was still the most significant portion of the computing
field.

Thus, it was to some extent the company that people "loved to hate",
like Microsoft and perhaps even Intel today. On the other hand, the
PDP-11 computer was new, cool, and exciting. People were doing fun
things with it - like writing UNIX. (It may have been *first* developed
on a PDP-7, but it started getting distributed for PDP-11 computers, and
was used on that platform at many universities.)

And so people got used to little-endian operation, and some even got the
idea that it was the right way to organize a computer.

Greg Menke

Jul 27, 2005, 6:11:51 PM

"David Wade" <g8...@yahoo.com> writes:

> "Eric Chomko" <echom...@polaris.umuc.edu> wrote in message
> news:dc8e93$1pk8$2...@news.ums.edu...
>

> Don't you get the same finicky compiler on Solaris (I assume you mean GCC)
> unless you have mega bucks? SparcUltra 5's seem good value at present. I keep
> thinking about updating my Sparc 10; it's ever so slow. However, Solaris has so
> many "non UNIX" things I wonder whether to go Linux, but I have heard it's
> sssslllllllooooowwwwwwww-ish on the SPARC...
>

Sun's compiler is only $1000 or so, chump change when compared to the
rest of the costs of a project. OTOH, kind of expensive if you just
want to mess around.

For sure get yourself a reasonable Ultra 5- they're nearly cheap as dirt
nowadays. I'm waiting for the < $5 each (just cpu and ram) price point
so I can get a pile of them and make a little Beowulf cluster. The 2.4
kernel seems zippy on them, but I've not benchmarked it.

Gregm

Jonathan Griffitts

Jul 27, 2005, 7:42:07 PM
In article <1bfyu0y...@viper.cs.nmsu.edu>, Joe Pfeiffer writes
. . .

>Now I'm *really* curious -- what's going on that endianness is *that*
>important? I can't remember ever being in a situation where htonl
>and friends couldn't cure endianness quickly and easily...

I have run into people who are unaccountably passionate about
endian-ness of their computers. I figure that these must be people with
very limited exposure to different computer architectures. If a
little-endian architecture causes boggling, imagine the reaction to a
machine that uses one's complement instead of two's complement, or any
of the various decimal machines.

By my standards, all modern mainstream computers are very similar (and
boring).

John Savard

Jul 28, 2005, 1:21:36 AM
On Wed, 27 Jul 2005 17:42:07 -0600, Jonathan Griffitts
<jgrif...@spamcop.net> wrote, in part:

>I have run into people who are unaccountably passionate about
>endian-ness of their computers. I figure that these must be people with
>very limited exposure to different computer architectures. If a
>little-endian architecture causes boggling, imagine the reaction to a
>machine that uses one's complement instead of two's complement, or any
>of the various decimal machines.

I never could figure out what one's complement was good for.

However, one's complement did relate rather trivially to the nicer
sign-magnitude notation, and if a computer has both add and subtract
hardware, sign-magnitude is not harder than two's complement... so why
have an end-around carry?

Decimal, of course, is *better* than binary, because it makes it easier
to explain how the computer works to someone who never worked with one
before; thus, it is good for the same reason that big-endian is good; it
makes sense to the naive person.

As it is, IBM - or one of its researchers - is trying to use variants of
Chen-Ho encoding to spur interest in a revival of decimal computing; so
far, this has not generated much interest. While wasting 0.342% of one's
RAM might seem a trivial price to pay, the extra circuit complexity just
does not seem to have a justification...

Visit my pages at

http://www.quadibloc.com/arch/arcint.htm

even if I cannot promise you much from them besides a good laugh.

Bruce Hoult

Jul 28, 2005, 2:27:24 AM
In article <1bfyu0y...@viper.cs.nmsu.edu>,
Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:

> echom...@polaris.umuc.edu (Eric Chomko) writes:
>
> > did the nature of big endian vs. little endian get started? I know that
> > Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
> > big endian, but why? Why isn't everything big endian, which seems the more
> > intuitive way to represent data?
>
> Your intuition, maybe. My intuition is that less significant data
> should be at lower addresses -- little endian. The fact that the
> first byte-addressed machine I worked on was a VAX probably colors my
> intuition here. :)

I very much prefer big-endian.

The first byte-addressed machines (or any machines, for that matter)
that I used were PDP-11 then 6502 then VAX then 8086, all of which were
little-endian. But then I moved onto 68000, SPARC, MIPS, PowerPC, HPPA
and felt like I'd come home.

Why? I don't know. There isn't one stand-out reason, but a multitude
of small ones. Here are just two of them:

- reading a hex dump on a little-endian machine is *painful*. Hell,
the VAX standard tools did machine code listings with each line
(instruction) printed in reverse, just because little-endian is sooo
painful.

- with a big-endian machine you can optimize string operations by using
32- (or 64-, or 128-...) bit unsigned integer operations to sort and
compare strings, which is 4/8/16 times faster than using byte
operations. Best with Pascal-style counted strings, but works with C
strings with a little bit-banging trickery, e.g. see:
http://groups-beta.google.com/group/alt.folklore.computers/msg/6fd9e4093c382f5b
(note that strlen and the corresponding strcpy/strcat work on
both big- and little-endian machines, but the corresponding strcmp works
only on big-endian).

> > I have a Linux box that I'm considering replacing with an old Sun SPARC
> > just to have Unix in a big endian box. And with Apple switching, I'm not
> > considering them due to various reasons, of which a finicky C compiler is at
> > the top of the list.
>
> Now I'm *really* curious -- what's going on that endianness is *that*
> important? I can't remember ever being in a situation where htonl
> and friends couldn't cure endianness quickly and easily...

If an "old SPARC" is OK now, what's wrong with a new Apple, which will
in a few years become an "old Apple", but much faster than the "old Sun"?

--
Bruce | 41.1670S | \ spoken | -+-
Hoult | 174.8263E | /\ here. | ----------O----------

Morten Reistad

Jul 28, 2005, 3:01:36 AM
In article <uc$gSfkPv...@griffitts.org>,

Jonathan Griffitts <jgrif...@spamcop.net> wrote:
>In article <1bfyu0y...@viper.cs.nmsu.edu>, Joe Pfeiffer writes
>. . .
>>Now I'm *really* curious -- what's going on that endianness is *that*
>>important? I can't remember ever being in a situation where htonl
>>and friends couldn't cure endianness quickly and easily...
>
>I have run into people who are unaccountably passionate about
>endian-ness of their computers. I figure that these must be people with
>very limited exposure to different computer architectures. If a
>little-endian architecture causes boggling, imagine the reaction to a
>machine that uses one's complement instead of two's complement, or any
>of the various decimal machines.

A consistently big-endian machine does allow some rotten programming
tricks in conversion of representations, and has the Internet bit/byte/word
order built in.

This may lead to some consternation for programmers who suddenly have
to code by the book when they meet a little-endian machine.

>By my standards, all modern mainstream computers are very similar (and
>boring).

A natural progression towards standardization. Try driving a Model T Ford
if you want to see how old cars were different too.

-- mrr

Nick Spalding

Jul 28, 2005, 6:19:35 AM
Eric Chomko wrote, in <dc8e93$1pk8$2...@news.ums.edu>
on Wed, 27 Jul 2005 16:54:59 +0000 (UTC):

> I work with CCSDS data which is inherently big endian. My question is, how
> did the nature of big endian vs. little endian get started? I know that
> Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
> big endian, but why? Why isn't everything big endian, which seems the more
> intuitive way to represent data?

I think it has to do with the perception that the /natural/ thing to do
with an address pointer is to increment it and the /natural/ way to do
arithmetic is least significant end first, leading to little-endian as
the /natural/ arrangement.

> I have a Linux box that I'm considering replacing with an old Sun SPARC
> just to have Unix in a big endian box. And with Apple switching, I'm not
> considering them due to various reasons, of which a finicky C compiler is at
> the top of the list.
>
> Eric

--
Nick Spalding

Eric Chomko

Jul 28, 2005, 1:34:28 PM
Jonathan Griffitts (jgrif...@spamcop.net) wrote:
: In article <dc8e93$1pk8$2...@news.ums.edu>, Eric Chomko writes

: >I work with CCSDS data which is inherently big endian. My question is, how
: >did the nature of big endian vs. little endian get started? I know that
: >Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
: >big endian, but why?

: You need to look at the hardware implementations. Back when transistors
: were much more expensive, a lot of machines used only byte-wide data
: paths in the CPU. Arithmetic was done byte-serial. If data is fetched
: little-endian and a byte at a time from memory, byte-serial addition is
: easy to do, and the ALU stores the carry bit between bytes. It makes for
: a clean and efficient architecture.


: >Why isn't everything big endian, which seems the more
: >intuitive way to represent data?

: "More intuitive" is a human thing; it only matters if you're trying to
: read core dumps. I've read plenty of such dumps in both big- and little-
: endian format, and I don't recall it ever causing cognitive dissonance. I
: suspect that would be true for most people working at that low level.

Think of processing a bit stream. Convert it to a byte stream and then
process it. Little endian presents nightmares, as you must flip all 2-, 4-, and
8-byte integers in order to process them.

: An interesting issue for big/little endian relates to people who abuse C


Eric Chomko

Jul 28, 2005, 1:49:56 PM
Joe Pfeiffer (pfei...@cs.nmsu.edu) wrote:
: echom...@polaris.umuc.edu (Eric Chomko) writes:

: > I work with CCSDS data which is inherently big endian. My question
: > is, how

: I'm curious: why is it "inherently" big endian? I'm really trying,
: but having a lot of trouble imagining a situation where the data
: transfer really needs to be either big endian or little endian.
: Networking protocols tend to be big endian, but that doesn't mean
: there's anything inherent about it.

Because you're processing a bit stream that is turned into a byte stream,
and all MSBs of each field are on the left side.

: > did the nature of big endian vs. little endian get started? I know that


: > Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
: > big endian, but why? Why isn't everything big endian, which seems the more
: > intuitive way to represent data?

: Your intuition, maybe. My intuition is that less significant data
: should be at lower addresses -- little endian. The fact that the
: first byte-addressed machine I worked on was a VAX probably colors my
: intuition here. :)

Same can be said of those that worked either on Intel or Motorola
microprocessors first.

: > I have a Linux box that I'm considering replacing with an old Sun SPARC


: > just to have Unix in a big endian box. And with Apple switching, I'm not
: > considering them due to various reasons, of which a finicky C compiler is at
: > the top of the list.

: Now I'm *really* curious -- what's going on that endianness is *that*
: important? I can't remember every being in a situation where htonl
: and friends couldn't cure endianess quickly and easily...

Think of an unsigned char array that is 1024 bytes long. It was once a bit
stream that is now a byte stream. I then process every field by using a
memcpy from the unsigned char array into either char arrays (ASCII) or
into 2-, 4- or 8-byte integers.

In a big-endian computer this is straightforward, as all is laid out left
to right. In a little-endian computer you must first reverse all the bytes
of each integer before the move. That reverse process, though it does
work, slows down processing noticeably after about 1,000 to 10,000 or more
frames (1024-byte arrays).

Eric

: --


Eric Chomko

Jul 28, 2005, 2:00:59 PM
David Wade (g8...@yahoo.com) wrote:
: "Eric Chomko" <echom...@polaris.umuc.edu> wrote in message

: news:dc8e93$1pk8$2...@news.ums.edu...
: > I work with CCSDS data which is inherently big endian. My question is, how
: > did the nature of big endian vs. little endian get started? I know that
: > Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
: > big endian, but why? Why isn't everything big endian, which seems the more
: > intuitive way to represent data?
: >

: Because the 8008 was designed by semiconductor designers, not computer
: people, and they did things to minimise the silicon used. In this case it
: allows microcode to be re-used. So if you have a short relative jump and a
: long relative jump, you can re-use the microcode: you always add the
: LSB in, and then just have a check at the end that says "is this a 16-bit
: jump?" and if so get the MSB and add it into the upper half. Once the 8008 had
: started that way, they continued for compatibility reasons. Contrast this
: with the 6800 etc., which was designed to be easy to program. And note how the
: 68xx were less compatible as they evolved, but much nicer CPUs to code for.
: Then wonder why the 80xxx series was so successful.

Because IBM picked the 8088 over the 6809 for the IBM PC. It really is
THAT simple! Being a Motorola fan and liking the 6800 (5 volts and ground)
vs. the 8080 (5 and 12 volts as well as ground), I can't agree with you
more. How many different 'return's did the Intel 8080 have?

: > I have a Linux box that I'm conisdering replacing with a old Sun SPARC


: > just to have Unix in a big endian box. And with Apple switching, I'm not
: > considering them due various reasons, of which a finicky C compiler is on
: > the top of the list.

: Don't you get the same finicky compiler on Solaris (I assume you mean GCC)
: unless you have mega bucks? SparcUltra 5's seem good value at present. I keep
: thinking about updating my Sparc 10; it's ever so slow. However, Solaris has so
: many "non UNIX" things I wonder whether to go Linux, but I have heard it's
: sssslllllllooooowwwwwwww-ish on the SPARC...

I'm considering a SPARC 5 because I can get it for ~$100. I don't need
speed, just a good development environment. And gcc will be the C compiler.

At work (eat your heart out!), I'm getting a Sun Blade 2500 with two
processors at 1.8GHz, 1 GB RAM, 512 GB RAID, DVD-RW and an IBM LTO tape
drive.

Eric


Eric Chomko

Jul 28, 2005, 2:18:29 PM
John Savard (jsa...@excxn.aNOSPAMb.cdn.invalid) wrote:
: On Wed, 27 Jul 2005 16:54:59 +0000 (UTC), echom...@polaris.umuc.edu

: (Eric Chomko) wrote, in part:

: >Why isn't everything big endian, which seems the more
: >intuitive way to represent data?

: I would *like* to say that it is because they manufacture computers in
: Israel and the Arab world, where the language in use is written from
: right to left, but numbers are still written with their most significant
: digit on the left, so that in these countries they make computers that
: are intuitive for them.

: Unfortunately, that is not the case. It is true the Intel 8087 was
: designed in Israel, but that isn't why the 8008 and successors were
: little-endian.

: As has already been noted, some computers with a 16-bit word length,
: when handling 32-bit integers, placed the least significant word in a
: lower memory location, so that they could somewhat more simply fetch the
: least significant part first, and then carry later. The Honeywell 316
: computer is one example of this.

: But such computers still stored character data in 16-bit words so that
: the first character was in the leftmost or most significant part.

: Then, one fine morning, someone at the Digital Equipment Corporation got
: a brilliant idea.

Perhaps that was the same day that Kenneth Olsen decided that the
microprocessor was nothing more than a toy?

: We're already making old-fashioned 12-bit and 18-bit computers. Now, the


: world is going to the 8-bit character. Let's make a 16-bit computer that
: is _not_ old-fashioned - people who want an old-fashioned 16-bit
: computer can already buy a Honeywell 316 - we'll make one that is new
: and exciting.

: And they did. The PDP-11 architecture was very innovative.

Memory to memory moves being top on the list, IMO.

: And somewhere along the way they had *another* brilliant idea. No, we


: won't add extra transistors to our design so that we can add 1 to the
: address, and fetch the least significant part first. Because we can be
: consistent *without* adding an extra transistor! Yes! We'll just put the
: _second_ character of two characters in a 16-bit word in the most
: significant part, so that the four characters in a 32-bit number will be
: consistently backwards!

The first time I saw that on a compiled listing of a PDP-11 piece of
assembly code for a numeric literal I thought, "why the heck did they do
that?" I simply reversed the numbers on paper, converted the hex code
to decimal, and moved on.

: And, lo and behold, the idea of consistent little-endian computing was


: born. That the idea was so bizarre and confusing that it was not
: understood by the engineers who then made the PDP-11's big-endian
: floating-point unit does not deprive DEC and the PDP-11 of the glory -
: or, perhaps, the blame - of inflicting this upon the world.

Somehow I think little endianism preceded DEC, though they are as much to
blame as anyone else. :)

Eric

: John Savard

Jonathan Griffitts

unread,
Jul 28, 2005, 2:25:36 PM7/28/05
to
In article <dcb4v4$1e44$1...@news.ums.edu>, Eric Chomko writes
>Jonathan Griffitts (jgrif...@spamcop.net) wrote:
. . .

>: >Why isn't everything big endian, which seems the more
>: >intuitive way to represent data?
>
>: "More intuitive" is a human thing, it only matters if you're trying to
>: read core dumps. I've read plenty of such dumps in both big and little
>: endian format, I don't recall it ever causing cognitive dissonance. I
>: suspect that would be true for most people working at that low level.
>
>Think of processing a bit stream. Convert it to a byte stream and then
>process. Little endian presents nightmares as you must flip all 2, 4, and
>8 byte integers in order to process them.

Why?

As I see it, if you're processing each multi-byte integer as a unit then
you must wait until you have all of the bytes before you can do the
processing. In this case it doesn't matter what order they arrive in.

If you're doing bit-serial or byte-serial arithmetic then little-endian
works much better, because carry-propagation goes upward through the
integer. Big-endian would be much more of a problem.
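The carry-propagation point above can be sketched in C. This is a minimal illustration of a hypothetical byte-serial adder over little-endian buffers (the function name and layout are mine, not any particular machine's hardware): visiting bytes in increasing address order lets the carry ripple naturally upward.

```c
#include <stddef.h>

/* Byte-serial addition of two n-byte little-endian integers.
 * Walking the bytes in increasing address order lets the carry
 * propagate upward through the number -- the hardware advantage
 * the little-endian layout buys on a narrow data path. */
static void add_le(const unsigned char *a, const unsigned char *b,
                   unsigned char *out, size_t n)
{
    unsigned carry = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned s = (unsigned)a[i] + b[i] + carry;
        out[i] = (unsigned char)(s & 0xFF);
        carry = s >> 8;
    }
}
```

A big-endian layout would force the serial adder to either buffer the whole number or walk addresses backward before it could start adding.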

Perhaps I don't understand the operation you mean when you say "flip"
the bytes, or I don't know what kind of processing you're doing on the
bit stream. Can you define the procedure you're talking about?

Maybe you're speaking of processing an alien bit stream that was
generated with different endian-ness than the CPU you're running on?
That's a whole different issue.

Greg Menke

unread,
Jul 28, 2005, 2:36:28 PM7/28/05
to

echom...@polaris.umuc.edu (Eric Chomko) writes:

> David Wade (g8...@yahoo.com) wrote:
>
> I'm considering a SPARC 5 because I can get it for ~$100. I don't need
> speed, just a good development environment. And gcc will be the C compiler.

$100 for a Sparcstation 5? For that kind of money you can get a well
equipped Ultra 5 or 10.

Gregm

Jonathan Griffitts

unread,
Jul 28, 2005, 2:40:35 PM7/28/05
to
In article <dcb7hl$1hg0$1...@news.ums.edu>, Eric Chomko writes

>Somehow I think little endianism preceded DEC,

It certainly did, because in those days it made good economic sense to
build hardware that way, especially if you were trying to hit a
low-end price point. It gets you MUCH MUCH less complex hardware at
essentially zero cost on the software side.

> though they are as much to
>blame as anyone else. :)

"Blame" is way too strong. You're missing my point, to get it you need
to try to take the point of view of a 1960s hardware architect.

Little-endian, byte-serial (or even bit-serial) hardware architecture
was often a very sensible choice before VLSI gave us insanely cheap
transistors.

Charlie Gibbs

unread,
Jul 28, 2005, 3:07:20 PM7/28/05
to
In article <42e869f4...@news.usenetzone.com>,
jsa...@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:

> Decimal, of course, is *better* than binary, because it makes it
> easier to explain how the computer works to someone who never worked
> with one before; thus, it is good for the same reason that big-endian
> is good; it makes sense to the naive person.

Ah, but are the decimal numbers stored big-endian or little-endian?
If they're packed two digits per byte, will we have the NUXI problem?

And, most importantly - is it time to discuss excess-3 yet?

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

Eric Chomko

unread,
Jul 28, 2005, 2:47:38 PM7/28/05
to
John Savard (jsa...@excxn.aNOSPAMb.cdn.invalid) wrote:
: On Wed, 27 Jul 2005 17:42:07 -0600, Jonathan Griffitts
: <jgrif...@spamcop.net> wrote, in part:

: >I have run into people who are unaccountably passionate about
: >endian-ness of their computers. I figure that these must be people with
: >very limited exposure to different computer architectures. If a
: >little-endian architecture causes boggling, imagine the reaction to a
: >machine that uses one's complement instead of two's complement, or any
: >of the various decimal machines.

: I never could figure out what one's complement was good for.

Why having a negative zero of course!

: However, one's complement did relate rather trivially to the nicer


: sign-magnitude notation, and if a computer has both add and subtract
: hardware, sign-magnitude is not harder than two's complement... so why
: have an end-around carry?

: Decimal, of course, is *better* than binary, because it makes it easier
: to explain how the computer works to someone who never worked with one
: before; thus, it is good for the same reason that big-endian is good; it
: makes sense to the naive person.

But hexadecimal is best of all and certainly better than octal, and the
numbers back me up on this WRT popularity these days. The best part of hex
is that the letters A-F or a-f (for all the modern Unix weenies who think
they are e.e. cummings) are so esoteric and fun to use around those naive
people you mention.

: As it is, IBM - or one of its researchers - is trying to use variants of


: Chen-Ho encoding to spur interest in a revival of decimal computing; so
: far, this has not generated much interest. While wasting 0.342% of one's
: RAM might seem a trivial price to pay, the extra circuit complexity just
: does not seem to have a justification...

Sort of reminds me why high-level architectures never got off the ground.

: Visit my pages at

: http://www.quadibloc.com/arch/arcint.htm

: even if I cannot promise you much from them besides a good laugh.

...no doubt.

Eric

: John Savard

Eric Chomko

unread,
Jul 28, 2005, 2:51:53 PM7/28/05
to
Bruce Hoult (br...@hoult.org) wrote:
: In article <1bfyu0y...@viper.cs.nmsu.edu>,
: Joe Pfeiffer <pfei...@cs.nmsu.edu> wrote:

: > echom...@polaris.umuc.edu (Eric Chomko) writes:
: >
: > > did the nature of big endian vs. little endian get started? I know that
: > > Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
: > > big endian, but why? Why isn't everything big endian, which seems the more
: > > intuitive way to represent data?
: >
: > Your intuition, maybe. My intuition is that less significant data
: > should be at lower addresses -- little endian. The fact that the
: > first byte-addressed machine I worked on was a VAX probably colors my
: > intuition here. :)

: I very much prefer big-endian.

: The first byte-addressed machines (or any machines, for that matter)
: that I used were PDP-11 then 6502 then VAX then 8086, all of which were
: little-endian. But then I moved onto 68000, SPARC, MIPS, PowerPC, HPPA
: and felt like I'd come home.

: Why? I don't know. There isn't one stand-out reason, but a multitude
: of small ones. Here are just two of them:

: - reading a hex dump on a little-endian machine is *painful*. Hell,
: the VAX standard tools did machine code listings with each line
: (instruction) printed in reverse, just because little-endian is sooo
: painful.

Agreed.

: - with a big-endian machine you can optimize string operations by using

: 32 (or 64, or 128...) bit unsigned integer operations to sort and
: compare strings, which is 4/8/16 times faster than using byte
: operations. Best with Pascal-style counted strings, but works with C
: strings with a little bit-banging trickery. e.g. see:
: http://groups-beta.google.com/group/alt.folklore.computers/msg/6fd9e4093c
: 382f5b (note that strlen and the corresponding strcpy/strcat works on
: both big- and little-endian machines, but the corresponding strcmp works
: only on big-endian).
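Bruce's ordering property can be illustrated with a small sketch. This is portable C that assembles big-endian words explicitly, so it demonstrates on any host why a big-endian machine gets lexicographic comparison for free from an ordinary integer load (the helper names are mine, not from the linked post):

```c
#include <stdint.h>

/* Load 4 bytes as a big-endian 32-bit value, independent of host order. */
static uint32_t load_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Compare two 4-byte keys one word at a time.  The numeric comparison
 * agrees with byte-wise lexicographic order precisely because the
 * first character sits in the most significant byte -- on a big-endian
 * machine the plain integer load already has this form. */
static int cmp4(const unsigned char *a, const unsigned char *b)
{
    uint32_t x = load_be32(a), y = load_be32(b);
    return (x > y) - (x < y);
}
```

On a little-endian machine the native load puts the first character in the *least* significant byte, so the numeric comparison no longer matches string order without a byte swap.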

: > > I have a Linux box that I'm considering replacing with an old Sun SPARC
: > > just to have Unix in a big endian box. And with Apple switching, I'm not
: > > considering them due various reasons, of which a finicky C compiler is on
: > > the top of the list.
: >
: > Now I'm *really* curious -- what's going on that endianness is *that*
: > important? I can't remember ever being in a situation where htonl
: > and friends couldn't cure endianness quickly and easily...

: If an "old SPARC" is ok now, what's wrong with a new Apple, which will
: in a few years become as "old Apple", but much fater than the "old Sun"?

An old Sun is ~$100 whereas a new Mac is $1000s, depending on what you
get.

Eric

: --

Eric Chomko

unread,
Jul 28, 2005, 2:56:33 PM7/28/05
to
Morten Reistad (firs...@lastname.pr1v.n0) wrote:
: In article <uc$gSfkPv...@griffitts.org>,

: Jonathan Griffitts <jgrif...@spamcop.net> wrote:
: >In article <1bfyu0y...@viper.cs.nmsu.edu>, Joe Pfeiffer writes
: >. . .
: >>Now I'm *really* curious -- what's going on that endianness is *that*
: >>important? I can't remember ever being in a situation where htonl
: >>and friends couldn't cure endianness quickly and easily...
: >
: >I have run into people who are unaccountably passionate about
: >endian-ness of their computers. I figure that these must be people with
: >very limited exposure to different computer architectures. If a
: >little-endian architecture causes boggling, imagine the reaction to a
: >machine that uses one's complement instead of two's complement, or any
: >of the various decimal machines.

: A consistently big-endian machine does allow some rotten programming
: tricks in conversion of representations; and has the Internet bit/byte/word
: order built in.

Rotten? I'd say that those conversions are straightforward.

: This may lead to some consternation for programmers that suddenly have


: to code by the book when they meet a little endian machine.

Having to flip all the bytes around to make sense is "programming by the
book"? Seems the real kludge is having to flip bytes around.

: >By my standards, all modern mainstream computers are very similar (and
: >boring).

: A natural progression towards standardization. Try driving a T-Ford
: if you want to see how old cars were different too.

That might be cool!

Eric

: -- mrr

Eric Chomko

unread,
Jul 28, 2005, 3:00:58 PM7/28/05
to
Nick Spalding (spal...@iol.ie) wrote:
: Eric Chomko wrote, in <dc8e93$1pk8$2...@news.ums.edu>

: on Wed, 27 Jul 2005 16:54:59 +0000 (UTC):

: > I work with CCSDS data which is inherently big endian. My question is, how
: > did the nature of big endian vs. little endian get started? I know that
: > Intel and DEC are little endian and Motorola, HP, Sun, SGIs and others are
: > big endian, but why? Why isn't everything big endian, which seems the more
: > intuitive way to represent data?

: I think it has to do with the perception that the /natural/ thing to do
: with an address pointer is to increment it and the /natural/ way to do
: arithmetic is least significant end first, leading to little-endian as
: the /natural/ arrangement.

The weirdest thing with little endian is dealing with a sign bit. "First
bit of the last byte" is more cumbersome than "leftmost bit".
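A sketch of what "first bit of the last byte" means in practice. This hypothetical helper (names are mine) finds the sign bit of a 32-bit two's-complement integer sitting in a raw byte buffer, for either layout:

```c
/* Locate the sign bit of a 32-bit two's-complement integer stored in
 * a raw 4-byte buffer.
 *   big-endian:    top bit of byte 0  ("leftmost bit")
 *   little-endian: top bit of byte 3  ("first bit of the last byte") */
static int sign_bit(const unsigned char buf[4], int big_endian)
{
    return big_endian ? (buf[0] >> 7) & 1
                      : (buf[3] >> 7) & 1;
}
```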

: > I have a Linux box that I'm considering replacing with an old Sun SPARC

Anne & Lynn Wheeler

unread,
Jul 28, 2005, 3:02:02 PM7/28/05
to

over a year ago, i bought a dell dimension 8300 with multithreaded 3.4ghz
processor, 4gbytes of memory, pair of 256gbyte sata drives with sata
raid controller.

i've observed my first personal computer was a 360/30 ... the univ.
would shutdown the machine room over the weekend and i got the whole
place to myself from 8am sat. until 8am monday (it sometimes made it a
little difficult going to monday classes after being up for 48hrs
straight).

in anycase, the 360/30 personal computer had 64kbytes of memory
... 2**16 ... and nearly 40 years later ... i have a personal computer
with 4gbytes memory ... 2**32 ... an increase of 2**16 in memory size
over a period of nearly 40 years.
http://www.garlic.com/~lynn/2005b.html#18 CAS and LL/SC
http://www.garlic.com/~lynn/2005h.html#35 Systems Programming for 8 Year-olds

i've periodically complained about the machine getting sluggish with
mozilla tabs ... i have a default tab bookmark folder that i regularly
use ... click on it and it will fetch 130 some-odd urls simultaneously
... then i start selecting other URLs (into new tabs) from the
original 130 (and deleting as i go along). somewhere around 250 open
tabs ... things start to really bog down.
http://www.garlic.com/~lynn/2004e.html#11 Gobble, gobble, gobble: 1.7 RC1 is a "turkey"!
http://www.garlic.com/~lynn/2004e.html#54 Is there a way to configure your web browser to use multiple

nominally opening a new URL in a new background tab ... should go on
asyncronously to what you are doing in the current tab. around 250-300
tabs ... the browser just about locks up while fetching the new URL
(and you may also start getting browser popup messages about things
getting sluggish ... and you have to click on the popups to clear
them, which is really annoying).

it isn't starting to page ... real storage in-use will have maybe
500-600mbytes in use by the browser ... with a couple gbytes still
free.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Joe Pfeiffer

unread,
Jul 28, 2005, 2:36:05 PM7/28/05
to
echom...@polaris.umuc.edu (Eric Chomko) writes:

> Joe Pfeiffer (pfei...@cs.nmsu.edu) wrote:
> : echom...@polaris.umuc.edu (Eric Chomko) writes:
>
> : > I work with CCSDS data which is inherently big endian. My question
> : > is, how
>
> : I'm curious: why is it "inherently" big endian? I'm really trying,
> : but having a lot of trouble imagining a situation where the data
> : transfer really needs to be either big endian or little endian.
> : Networking protocols tend to be big endian, but that doesn't mean
> : there's anything inherent about it.
>
> Because you're processing a bit stream that is turned into a byte stream
> and all MSBs of each field are on the left side.

That's what I meant when I said the protocols tend to be big-endian,
but there's nothing inherent about that. It would have been equally
easy to have defined the protocol to be little endian (looking at your
response, I see I should have realized that when you said it was
inherently big endian you almost certainly meant the protocol was big
endian. Oops).

<snip>

> : > I have a Linux box that I'm considering replacing with an old Sun SPARC
> : > just to have Unix in a big endian box. And with Apple switching, I'm not
> : > considering them due various reasons, of which a finicky C compiler is on
> : > the top of the list.
>
> : Now I'm *really* curious -- what's going on that endianness is *that*
> : important? I can't remember ever being in a situation where htonl
> : and friends couldn't cure endianness quickly and easily...
>
> Think of an unsigned char array that is 1024 bytes long. It was once a bit
> stream that is now a byte stream. I then process every field by using a
> memcpy from the unsigned char array into either char arrays (ASCII) or
> into 2, 4 or 8 byte integers.

I can see that memcpy would be a win for the ASCII arrays; I'm
skeptical about the integers, though.

> In a big endian computer this is straightforward as all is laid out left
> to right. In a little endian computer you must first reverse all the bytes
> of each integer before the move. That reverse process, though it does
> work, slows down processing noticeably after about 1000 to 10000 or more
> frames (1024 byte array).

To have portable code you should pass 'em all through ntohs or ntohl
as appropriate when doing the assignment; a big-endian compiler will
do nothing while a little-endian compiler will swap bytes
appropriately.
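Joe's suggestion, sketched in C for a hypothetical 4-byte field at offset `off` in the frame buffer (ntohl() is the POSIX byte-order helper: a no-op on big-endian hosts, a swap on little-endian ones, so the same source works on both):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* ntohl() on POSIX systems */

/* Pull a 4-byte big-endian (network-order) integer out of a frame
 * buffer.  memcpy avoids alignment traps on strict-alignment CPUs;
 * ntohl makes the result correct regardless of host endianness. */
static uint32_t get_be32(const unsigned char *frame, size_t off)
{
    uint32_t v;
    memcpy(&v, frame + off, sizeof v);
    return ntohl(v);
}
```

The little-endian host pays one register byte swap per field; the big-endian host pays nothing.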

Eric Chomko

unread,
Jul 28, 2005, 3:23:23 PM7/28/05
to
Jonathan Griffitts (jgrif...@spamcop.net) wrote:
: In article <dcb4v4$1e44$1...@news.ums.edu>, Eric Chomko writes

: >Jonathan Griffitts (jgrif...@spamcop.net) wrote:
: . . .
: >: >Why isn't everything big endian, which seems the more
: >: >intuitive way to represent data?
: >
: >: "More intuitive" is a human thing, it only matters if you're trying to
: >: read core dumps. I've read plenty of such dumps in both big and little
: >: endian format, I don't recall it ever causing cognitive dissonance. I
: >: suspect that would be true for most people working at that low level.
: >
: >Think of processing a bit stream. Convert it to a byte stream and then
: >process. Little endian presents nightmares as you must flip all 2, 4, and
: >8 byte integers in order to process them.

: Why?

: As I see it, if you're processing each multi-byte integer as a unit then
: you must wait until you have all of the bytes before you can do the
: processing. In this case it doesn't matter what order they arrive in.

Except you have to reverse their order in order to process them.

: If you're doing bit-serial or byte-serial arithmetic then little-endian


: works much better, because carry-propagation goes upward through the
: integer. Big-endian would be much more of a problem.

: Perhaps I don't understand the operation you mean when you say "flip"
: the bytes, or I don't know what kind of processing you're doing on the
: bit stream. Can you define the procedure you're talking about?

1024 unsigned char array. In it are various fields of ASCII and integers,
2, 4, and 8 bytes with the MSB on the left. THOSE must be reversed in order
to make sense in a little endian computer.

: Maybe you're speaking of processing an alien bit stream that was


: generated with different endian-ness than the CPU you're running on?
: That's a whole different issue.

As I was saying about CCSDS, it IS inherently big-endian. Nothing alien,
just a blue book that describes it. See section 1.6, of:
http://public.ccsds.org/publications/archive/701x0b3.pdf

Eric

: --

Eric Chomko

unread,
Jul 28, 2005, 3:29:08 PM7/28/05
to
Greg Menke (gregm-...@toadmail.com) wrote:

: echom...@polaris.umuc.edu (Eric Chomko) writes:

I think I'll hold out for the Ultra 5 or 10. I've been poking around this
site: http://sunsolve.sun.com/handbook_pub/Systems/

Also, the cost of the monitor, a 19 in. 13W3 connector unit, is $70 of the
hundred. The cable was $5 and the system box $25 w/KB and mouse. Are you
saying I can get a SPARC Ultra 5 or 10 for ~$25, for just the box (CPU,
RAM, HD, etc.)?

Eric

: Gregm

Greg Menke

unread,
Jul 28, 2005, 4:01:43 PM7/28/05
to
echom...@polaris.umuc.edu (Eric Chomko) writes:

Ahh- full system cost. I've seen 270mhz Ultra 5's for $4 apiece with no
hard disk. You might shop around on ebay for an Ultra 5/10 w/ keyboard
and mouse, then use a PC monitor.

Gregm

grey...@gmaildo.ttocom

unread,
Jul 28, 2005, 5:13:20 PM7/28/05
to
On Wed, 27 Jul 2005 21:01:25 GMT, John Savard wrote:
> born. That the idea was so bizarre and confusing that it was not
> understood by the engineers who then made the PDP-11's big-endian
> floating-point unit does not deprive DEC and the PDP-11 of the glory -
> or, perhaps, the blame - of inflicting this upon the world.
>

Hi John;

Vim can be compiled to go from right to left.
I've been puzzled by numbers in Arabic countries; I couldn't figure out if
they were written r->l or l->r. What is the Hebrew standard? (If you
know)... I do know that Hebrew letters were once used as numbers.

--
greymaus


Peter Flass

unread,
Jul 28, 2005, 5:54:46 PM7/28/05
to
Jonathan Griffitts wrote:

> In article <1bfyu0y...@viper.cs.nmsu.edu>, Joe Pfeiffer writes
> . . .
>
>>Now I'm *really* curious -- what's going on that endianness is *that*
>>important? I can't remember ever being in a situation where htonl
>>and friends couldn't cure endianness quickly and easily...
>
>
> I have run into people who are unaccountably passionate about
> endian-ness of their computers.
>

I've gotten used to little-endian, though I still prefer the "correct"
alternative. My annoyance with intel is that, while the bytes in a word
(dword, etc) are little-endian, the bits in a byte are documented as big
endian, so I sometimes have to draw a picture to determine where a
particular bit is located.

Peter Flass

unread,
Jul 28, 2005, 5:58:03 PM7/28/05
to
Bruce Hoult wrote:
...

>
> - with a big-endian machine you can optimize string operations by using
> 32 (or 64, or 128...) bit unsigned integer operations to sort and
> compare strings, which is 4/8/16 times faster than using byte
> operations.

You can still do this with L-E, by loading and then doing a BSWAP,
which is a short instruction. What's annoying is not having the
equivalent of IBM's "ICM" instruction to load three bytes.

Peter Flass

unread,
Jul 28, 2005, 6:04:42 PM7/28/05
to
Charlie Gibbs wrote:

> In article <42e869f4...@news.usenetzone.com>,
> jsa...@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
>
>
>>Decimal, of course, is *better* than binary, because it makes it
>>easier to explain how the computer works to someone who never worked
>>with one before; thus, it is good for the same reason that big-endian
>>is good; it makes sense to the naive person.
>
>
> Ah, but are the decimal numbers stored big-endian or little-endian?
> If they're packed two digits per byte, will we have the NUXI problem?
>
>

Another intel stupidity. Little-endian within the number, but
big-endian within a byte, thus the digits appear:
2-1 | 4-3 | 6-5 ...
At least they could have been consistent!

CBFalconer

unread,
Jul 28, 2005, 6:08:45 PM7/28/05
to
Charlie Gibbs wrote:
>
> In article <42e869f4...@news.usenetzone.com>,
> jsa...@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
>
> > Decimal, of course, is *better* than binary, because it makes it
> > easier to explain how the computer works to someone who never worked
> > with one before; thus, it is good for the same reason that big-endian
> > is good; it makes sense to the naive person.
>
> Ah, but are the decimal numbers stored big-endian or little-endian?
> If they're packed two digits per byte, will we have the NUXI problem?
>
> And, most importantly - is it time to discuss excess-3 yet?

Works very well with nines-complement and one-bit serial adders.
Four inverters in the bit lines to the multiplexed nixie drivers
allow proper display of numbers of all signs. All embodied in:

<http://cbfalconer.home.att.net/firstpc/>

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson


CBFalconer

unread,
Jul 28, 2005, 6:08:46 PM7/28/05
to
Anne & Lynn Wheeler wrote:
>
... snip ...

>
> i've observed my first personal computer was a 360/30 ... the univ.
> would shutdown the machine room over the weekend and i got the
> whole place to myself from 8am sat. until 8am monday (it sometimes
> made it a little difficult going to monday classes after being up
> for 48hrs straight).
>
> in anycase, the 360/30 personal computer had 64kbytes of memory
> ... 2**16 ... and nearly 40 years later ... i have a personal
> computer with 4gbytes memory ... 2**32 ... an increase of 2**16 in
> memory size over a period of nearly 40 years.

You forget the factors for selling price, memory cycle time, disk
storage space, i/o speed. I could make a fortune with a time
machine.

CBFalconer

unread,
Jul 28, 2005, 6:08:46 PM7/28/05
to
Eric Chomko wrote:
>
> Joe Pfeiffer (pfei...@cs.nmsu.edu) wrote:
>
... snip ...

>
>> Now I'm *really* curious -- what's going on that endianness is
>> *that* important? I can't remember ever being in a situation
>> where htonl and friends couldn't cure endianness quickly and
>> easily...
>
> Think of an unsigned char array that is 1024 bytes long. It was
> once a bit stream that is now a byte stream. I then process every
> field by using a memcpy from the unsigned char array into either
> char arrays (ASCII) or into 2, 4 or 8 byte integers.
>
> In a big endian computer this is straightforward as all is laid
> out left to right. In a little endian computer you must first
> reverse all the bytes of each integer before the move. That
> reverse process, though it does work, slows down processing
> noticeably after about 1000 to 10000 or more frames (1024 byte
> array).

Now consider that bit stream that it once was. If it went over a
RS232 line, or a modem that signals individual bits, or many other
systems, it went least significant bit first. That is little
endian from where I stand.
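CBFalconer's RS-232 observation, in C terms. A sketch (function name is mine) of shifting a byte out least-significant-bit first, which is the serial-line convention he's pointing at:

```c
/* Emit the bits of a byte in RS-232 transmission order: least
 * significant bit first.  bits[0] is the first bit on the wire. */
static void serialize_lsb_first(unsigned char byte, int bits[8])
{
    for (int i = 0; i < 8; i++)
        bits[i] = (byte >> i) & 1;
}
```

So at the bit level, the wire format he describes really is little-endian: the low-order bit leads.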

Jonathan Griffitts

unread,
Jul 28, 2005, 6:25:40 PM7/28/05
to
In article <dcbbbb$1ld6$1...@news.ums.edu>, Eric Chomko writes

>
>> Maybe you're speaking of processing an alien bit stream that was
>> generated with different endian-ness than the CPU you're running on?
>> That's a whole different issue.
>
>As I was saying about CCSDS, it IS inherently big-endian. Nothing alien,
>just a blue book that describes it.

When I said "alien" it was my way of referring to an integer format
which is not native to the CPU you're running. (The second half of that
sentence was intended to make my meaning clear.)

So, I now understand that you ARE talking about the whole different
issue of importing integers from non-native data formats.

In your original posting, I thought the question was "why were some
computers designed using little-endian format for internal representation
of integers." Since that note made reference to Intel and DEC vs. Motorola
et al., this was a reasonable interpretation. Your later clarifications
show that you're actually concerned with compatibility with an external
data format standard originated in 1989.

Anyway, I somehow doubt the hardware architects at DEC or Intel (or
Motorola) took CCSDS into account. One might better ask why the CCSDS
was designed to be incompatible with efficient little-endian hardware.

Just to play Devil's Advocate, I could suggest that the CCSDS standard
should have specified little-endian integers to allow this kind of
hardware optimization. I could easily imagine designing byte-serial
arithmetic into spacecraft flight hardware, to reduce gate-count and
power consumption in an FPGA or small ASIC. (I'm not seriously trying
to make this case -- this optimization likely doesn't make sense given
the protocol complexity I see in that spec. -- but since you have
described the overhead of swapping the byte order as "nightmares" then
perhaps little inefficiencies are important. Your own words can and
will be used against you, if I can give myself a talking point.)

(OK, I'm done jerking your chain now.)

By the way, this isn't all academic. I have personally designed
hardware byte-serial data processing, and it was recent by AFC
standards (mid-1990s). It wasn't in a CPU, it was a dedicated path for
decoding a fast compressed data stream in real-time. Naturally, the
integer format was little-endian!

CBFalconer

unread,
Jul 28, 2005, 9:50:39 PM7/28/05
to
Eric Chomko wrote:
> Jonathan Griffitts (jgrif...@spamcop.net) wrote:
>
... snip ...

>
> : Maybe you're speaking of processing an alien bit stream that was
> : generated with different endian-ness than the CPU you're running
> : on? That's a whole different issue.
>
> As I was saying about CCSDS, it IS inherently big-endian. Nothing
> alien, just a blue book that describes it. See section 1.6, of:
> http://public.ccsds.org/publications/archive/701x0b3.pdf

Please fix your newsreader. It is emitting an unconventional quote
marker, which fouls up much software everywhere.

John Savard

unread,
Jul 29, 2005, 12:33:49 AM7/29/05
to
On 28 Jul 2005 21:13:20 GMT, grey...@gmaildo.ttocom wrote, in part:

>I've been puzzled by numbers in Arabic countries; I couldn't figure out if
>they were written r->l or l->r. What is the Hebrew standard? (If you
>know)... I do know that Hebrew letters were once used as numbers.

When Hebrew letters are used as numbers, the most significant digit is
on the left, just as Arabic numbers are that way even when Arabs use
them in Arabic text - with Arabic style digits.

John Savard

unread,
Jul 29, 2005, 12:36:34 AM7/29/05
to
On Thu, 28 Jul 2005 12:40:35 -0600, Jonathan Griffitts
<jgrif...@spamcop.net> wrote, in part:

>In article <dcb7hl$1hg0$1...@news.ums.edu>, Eric Chomko writes

>>Somehow I think little endianism preceded DEC,

>It certainly did, because in those days it made good economic sense to
>build hardware that way, especially if you were trying to hit a
>low-end price point. It gets you MUCH MUCH less complex hardware at
>essentially zero cost on the software side.

Yes, and no.

Little-endian for multi-precision arithmetic long preceded DEC, for the
reasons you cite.

But the PDP-11 was the *first* computer that decided to reverse the
order of *characters within a word* so as to be consistent at a smaller
scale than the width of the memory bus, where being little-endian
provided a gain.

Roger Ivie

unread,
Jul 29, 2005, 1:57:47 AM7/29/05
to
On 2005-07-28, Peter Flass <Peter...@Yahoo.com> wrote:
> Another intel stupidity. Little-endian within the number, but
> big-endian within a byte, thus the digits appear:
> 2-1 | 4-3 | 6-5 ...
> At least they could have been consistent!

That's not a problem if you print your dumps out right to left.
--
Roger Ivie
ri...@ridgenet.net
http://anachronda.webhop.org/
-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCS/P d- s:+++ a+ C++ UB--(++++) !P L- !E W++ N++ o-- K w O- M+ V+++ PS+
PE++ Y+ PGP t+ 5+ X-- R tv++ b++ DI+++ D+ G e++ h--- r+++ z+++
------END GEEK CODE BLOCK------

John Savard

unread,
Jul 29, 2005, 9:14:06 AM7/29/05
to
On Thu, 28 Jul 2005 18:00:59 +0000 (UTC), echom...@polaris.umuc.edu
(Eric Chomko) wrote, in part:

>Because IBM picked the 8088 over the 6809 for the IBM PC. It really is
>THAT simple!

Well, since the 68008, used in the Sinclair QL, hadn't been around yet,
once IBM picked the 8088 over the 8086, they no longer had the choice of
picking the 68000 over the 8086.

In other words, the 6809 wasn't comparable to the 8088, being an 8-bit
processor with hardware multiply instead of a 16-bit processor with an
8-bit bus.

Eric Chomko

unread,
Jul 29, 2005, 2:51:07 PM7/29/05
to
Anne & Lynn Wheeler (ly...@garlic.com) wrote:

: over a year ago, i bought a dell dimension 8300 with multithreaded 3.4ghz
: processor, 4gbytes of memory, pair of 256gbyte sata drives with sata
: raid controller.

: i've observed my first personal computer was a 360/30 ... the univ.
: would shutdown the machine room over the weekend and i got the whole
: place to myself from 8am sat. until 8am monday (it sometimes made it a
: little difficult going to monday classes after being up for 48hrs
: straight).

I had an Interdata 7/16 for weekends to myself my entire senior year in
high school. In Frankfurt, Germany no less!

: in anycase, the 360/30 personal computer had 64kbytes of memory
: ... 2**16 ... and nearly 40 years later ... i have a personal computer
: with 4gbytes memory ... 2**32 ... an increase of 2**16 in memory size
: over a period of nearly 40 years.
: http://www.garlic.com/~lynn/2005b.html#18 CAS and LL/SC
: http://www.garlic.com/~lynn/2005h.html#35 Systems Programming for 8 Year-olds

: i've periodically complained about the machine getting sluggish with
: mozilla tabs ... i have a default tab bookmark folder that i regularly
: use ... click on it and it will fetch 130 some-odd urls simultaneously
: ... then i start selecting other URLs (into new tabs) from the
: original 130 (and deleting as i go along). somewhere around 250 open
: tabs ... things start to really bog down.
: http://www.garlic.com/~lynn/2004e.html#11 Gobble, gobble, gobble: 1.7 RC1 is a "turkey"!
: http://www.garlic.com/~lynn/2004e.html#54 Is there a way to configure your web browser to use multiple

: nominally opening a new URL in a new background tab ... should go on
: asyncronously to what you are doing in the current tab. around 250-300
: tabs ... the browser just about locks up while fetching the new URL
: (and you may also start getting browser popup messages about things
: getting sluggish ... and you have to click on the popups to clear
: them, which is really annoying).

Parkinson's Law, what else is new?

: it isn't starting to page ... real storage in-use will have maybe
: 500-600mbytes in use by the browser ... with a couple gbytes still
: free.

All very interesting, but what does this have to do with big- vs.
little-endianness?

Eric

: --

Eric Chomko

unread,
Jul 29, 2005, 2:59:12 PM7/29/05
to
Greg Menke (gregm-...@toadmail.com) wrote:
: echom...@polaris.umuc.edu (Eric Chomko) writes:

Will do, the best I've seen is:
http://cgi.ebay.com/Sun-Ultra-10-Creator-3D-Sparc-IIi-440Mhz-512-MB-Ram_W0QQitemZ5791744786QQcategoryZ20328QQrdZ1QQcmdZViewItem
and the one I can get locally for $25, is:
http://cgi.ebay.com/Sun-Sparc-5-110Mhz-256MB-4GB-Graphics_W0QQitemZ5791799446QQcategoryZ20327QQrdZ1QQcmdZViewItem

which seems backwards to me...

Anyway, I suspect that I will hold out for a ~300 - 450 MHz system for
cheap as you stated, as I'm really in no rush. Cripes, what I need is
space. Gotta get rid of stuff before I think of getting anything!

Eric

: Gregm

Eric Chomko

unread,
Jul 29, 2005, 3:04:41 PM7/29/05
to
Peter Flass (Peter...@Yahoo.com) wrote:
: Jonathan Griffitts wrote:

Yep, the bit interpretation in a byte is big-endian. Why would you want
your bytes to be opposite that?!

Here, the sign bit of that 4-byte integer is bit 24 in the word! Say what?
That is as opposed to being in bit 0, the leftmost bit. Makes more
sense...

Eric

Eric Chomko

unread,
Jul 29, 2005, 3:18:17 PM7/29/05
to
CBFalconer (cbfal...@yahoo.com) wrote:

: Eric Chomko wrote:
: >
: > Joe Pfeiffer (pfei...@cs.nmsu.edu) wrote:
: >
: ... snip ...
: >
: >> Now I'm *really* curious -- what's going on that endianness is
: >> *that* important? I can't remember every being in a situation
: >> where htonl and friends couldn't cure endianess quickly and
: >> easily...
: >
: > Think of an unsigned char array that is 1024 bytes long. It was
: > once a bit stream that is now a byte stream. I then process every
: > field by using a memcpy from the unsigned char array into either
: > char arrays (ASCII) or into 2, 4 or 8 byte integers.
: >
: > In a big endian computer this is straight forward as all is laid
: > out left to right. In a little endian computer you must first
: > reverse all the bytes of each integer before the move. That
: > reverse process, though it does work, slows down processing
: > noticably after about 1000 to 10000 or more frames (1024 byte
: > array).

: Now consider that bit stream that it once was. If it went over a
: RS232 line, or a modem that signals individual bits, or many other
: systems, it went least significant bit first. That is little
: endian from where I stand.

Actually it is a transfer frame from a remote-sensing satellite: an
8192-bit stream (1024 bytes), of which 892 bytes are actual data. Four
bytes at the front serve as a frame sync, and a 128-byte trailer holds
error-correction symbols. Like Internet packets, this data has all sorts
of headers as well, so the real data portion is even smaller than the 892
bytes, but that is another story.
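Incidentally, the per-integer byte reversal can be sidestepped entirely:
assemble each field from the raw bytes with shifts, and the same code is
correct on big- and little-endian hosts alike. A sketch (generic
accessors, not the actual CCSDS field layout):

```c
#include <stdint.h>

/* Assemble big-endian fields from a raw byte buffer.  Because the
   bytes are combined arithmetically, host byte order never matters. */
static uint16_t get_be16(const unsigned char *p)
{
    return (uint16_t)(((unsigned)p[0] << 8) | p[1]);
}

static uint32_t get_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
         | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
}
```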

RS-232 is byte oriented even though it is a serial form of communication.
You get 1 start bit plus one or two stop bits, with 8 bits in between as a
unit of byte data. It is not bit oriented, nor does it have any notion of
endianness, as a single byte can't be interpreted as either big or little
endian by itself.

Eric

: --

Eric Chomko

unread,
Jul 29, 2005, 3:41:16 PM7/29/05
to
Jonathan Griffitts (jgrif...@spamcop.net) wrote:
: In article <dcbbbb$1ld6$1...@news.ums.edu>, Eric Chomko writes

: >
: >> Maybe you're speaking of processing an alien bit stream that was
: >> generated with different endian-ness than the CPU you're running on?
: >> That's a whole different issue.
: >
: >As I was saying about CCSDS, it IS inherently big-endian. Nothing alien,
: >just a blue book that describes it.

: When I said "alien" it was my way of referring to an integer format
: which is not native to the CPU you're running. (The second half of that
: sentence was intended to my meaning clear.)

: So, I now understand that you ARE talking about the whole different
: issue of importing integers from non-native data formats.

: In your original posting, I thought the question was "why were some
: computers designed using little-endian format for internal of integers."
: Since that note made reference to Intel and DEC vs. Motorola et. al,
: this was a reasonable interpretation. Your later clarifications show
: that you're actually concerned with compatibility with an external data
: format standard originated in 1989.

A more basic question, CCSDS aside, is why interpret bits left to right
and bytes right to left? Doesn't it make more sense to interpret bits and
bytes in the same direction? Surely no one would interpret bits right to
left, why bytes?

Someone stated it is a throwback to random-logic machines before
microcode, etc. I'm inclined to agree with that.

: Anyway, I somehow doubt the hardware architects at DEC or Intel (or
: Motorola) took CCSDS into account. One might better ask why the CCSDS
: was designed to be incompatible with efficient little-endian hardware.

Good question, but is little endian actually more efficient?

This whole thing sort of reminds me of the difference between straight
in-line cylinders vs. V-engine designs of an internal combustion engine.
No 4-cylinder cars are V-type, 6-cylinder engines come both ways, and
straight-8s are a thing of the past. Surely a V-design favors more
cylinders, whereas fewer is better for in-line, as we seem to have learned
the hard way.

Big-endian vs. little-endian doesn't seem that cut and dried, however.

: Just to play Devil's Advocate, I could suggest that the CCSDS standard
: should have specified little-endian integers to allow this kind of
: hardware optimization. I could easily imagine designing byte-serial
: arithmetic into spacecraft flight hardware, to reduce gate-count and
: power consumption in an FPGA or small ASIC. (I'm not seriously trying
: to make this case -- this optimization likely doesn't make sense given
: the protocol complexity I see in that spec. -- but since you have
: described the overhead of swapping the byte order as "nightmares" then
: perhaps little inefficiencies are important. Your own words can and
: will be used against you, if I can give myself a talking point.)

This raises the question of how the 1553 bus is constructed WRT
endianness, if there is such a thing.

Also, given the nature of multiplexing and demultiplexing of data streams
like CCSDS and the Internet WRT routing, isn't the handling of protocols
better in a big-endian world?

: (OK, I'm done jerking your chain now.)

...for the moment, anyway...

: By the way, this isn't all academic. I have personally designed
: hardware byte-serial data processing, and it was recently by AFC
: standards (mid-1990s). It wasn't in a CPU, it was a dedicated path for
: decoding a fast compressed data stream in real-time. Naturally, the
: integer format was little-endian!

Suffice it to say that little-endian tends to work like a stack (LIFO) and
big-endian like a queue (FIFO). I suppose when you argue over which is the
better data structure you can make your case any way you want.

Eric

: --

Eric Chomko

unread,
Jul 29, 2005, 3:44:34 PM7/29/05
to
CBFalconer (cbfal...@yahoo.com) wrote:

: Eric Chomko wrote:
: > Jonathan Griffitts (jgrif...@spamcop.net) wrote:
: >
: ... snip ...
: >
: > : Maybe you're speaking of processing an alien bit stream that was
: > : generated with different endian-ness than the CPU you're running
: > : on? That's a whole different issue.
: >
: > As I was saying about CCSDS, it IS inherently big-endian. Nothing
: > alien, just a blue book that describes it. See section 1.6, of:
: > http://public.ccsds.org/publications/archive/701x0b3.pdf

: Please fix your newsreader. It is emitting an unconventional quote
: marker, which fouls up much software everywhere.

Sorry, but some of the software, the ones that grab email addresses within
the messages, is worth breaking. OTOH, I'll make a point of trimming the
"references", as that seems to work.

Where does the error occur?

Eric

: --

Eric Chomko

unread,
Jul 29, 2005, 3:46:34 PM7/29/05
to
Roger Ivie (ri...@ridgenet.net) wrote:

: On 2005-07-28, Peter Flass <Peter...@Yahoo.com> wrote:
: > Another intel stupidity. Little-endian within the number, but
: > big-endian within a byte, thus the digits appear:
: > 2-1 | 4-3 | 6-5 ...
: > At least they could have been consistent!

: That's not a problem if you print your dumps out right to left.

Then the ASCII text is back assword!

: --

Eric Chomko

unread,
Jul 29, 2005, 4:00:24 PM7/29/05
to
John Savard (jsa...@excxn.aNOSPAMb.cdn.invalid) wrote:
: On Thu, 28 Jul 2005 18:00:59 +0000 (UTC), echom...@polaris.umuc.edu
: (Eric Chomko) wrote, in part:

: >Because IBM picked the 8088 over the 6809 for the IBM PC. It really is
: >THAT simple!

: Well, since the 68008, used in the Sinclair QL, hadn't been around yet,
: once IBM picked the 8088 over the 8086, they no longer had the choice of
: picking the 68000 over the 8086.

I have a system called a Helix with a 68008 AND a 6809 in it!

: In other words, the 6809 wasn't comparable to the 8088, being an 8-bit
: processor with hardware multiply instead of a 16-bit processor with an
: 8-bit bus.

Perhaps not directly comparable, but to say that the 68008 and the 8088
were comparable because one was a crippled 68000 with an 8-bit data bus
and the other a crippled 8086 with the same simply ignores the overall
superiority of the Motorola chips compared to Intel's, and I'd stack any
benchmark to back me up on the claim.

The 6809 holds its own against an 8088, forget 2MHz vs. 4.77MHz. You'd
have to have an 8086 to beat it, and that would be due to the 16-bit data
bus, but then you really are looking at comparing it to the 68000, which
really requires Intel's 286 for a comparison.

Eric

: John Savard

rpl

unread,
Jul 29, 2005, 4:06:30 PM7/29/05
to
Eric Chomko wrote:
> CBFalconer (cbfal...@yahoo.com) wrote:

> : Please fix your newsreader. It is emitting an unconventional quote
> : marker, which fouls up much software everywhere.
>
> Sorry, but some of the software, the ones that grab email addresses within
> the messages, is worth breaking. OTOH, I'll make a point of trimming the
> "references", as that seems to work.
>
> Where does the error occur?
>

I think CBF's referring to the ":" being used as a quote marker as
opposed to ">"; I don't think it's fouling up my machine though it makes
quotes a bit harder to parse.

rpl

Jonathan Griffitts

unread,
Jul 29, 2005, 4:41:10 PM7/29/05
to
In article <dce0os$176g$1...@news.ums.edu>, Eric Chomko writes

>
>A more basic question, CCSDS aside, is why interpret bits left to right
>and bytes right to left? Doesn't it make more sense to interpret bits and
>bytes in the same direction?
>Surely no one would interpret bits right to
>left, why bytes?

I guess the point that I have failed to convey is that addition
arithmetic interprets bits right to left because of the direction of
carry propagation. Hence a minimum gate-count hardware implementation
tends to reveal that bias.
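To make that concrete, here is a little C model of the minimal hardware, a
byte-serial adder; it is illustrative only, not any particular machine:

```c
#include <stddef.h>

/* Model of a byte-serial adder: little-endian operands are consumed
   lowest-addressed (least significant) byte first, so the carry can
   ripple forward in a single left-to-right pass over memory. */
static void add_le(unsigned char *dst, const unsigned char *a,
                   const unsigned char *b, size_t n)
{
    unsigned carry = 0;

    for (size_t i = 0; i < n; i++) {
        unsigned sum = a[i] + b[i] + carry;
        dst[i] = (unsigned char)(sum & 0xff);
        carry  = sum >> 8;      /* carry propagates toward higher bytes */
    }
}
```

A big-endian layout would force the same hardware either to walk memory
backwards or to buffer the whole operand before adding.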

ararghmai...@now.at.arargh.com

unread,
Jul 29, 2005, 4:56:26 PM7/29/05
to

After a while, I tend to not read messages with non standard quote
markers. It gets too hard to tell who said what when.

--
ArarghMail507 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html

To reply by email, remove the garbage from the reply address.


CBFalconer

unread,
Jul 29, 2005, 5:13:22 PM7/29/05
to
Eric Chomko wrote:
>
... snip ...

>
> RS-232 is byte oriented even though it is a serial form of
> communciation. You get 1 start bit plus one or two stop bits, and
> 8 bits in between as a unit byte data. It is not bit oriented,
> nor has any notion of endian-ness as a single byte can't be
> interprtted as either big or little endian by itself.

You wouldn't say that if you had ever watched a serial line with a
scope.

CBFalconer

unread,
Jul 29, 2005, 5:13:21 PM7/29/05
to
Eric Chomko wrote:
> CBFalconer (cbfal...@yahoo.com) wrote:
>: Eric Chomko wrote:
>:> Jonathan Griffitts (jgrif...@spamcop.net) wrote:
>:>
>: ... snip ...
>:>
>:>: Maybe you're speaking of processing an alien bit stream that was
>:>: generated with different endian-ness than the CPU you're running
>:>: on? That's a whole different issue.
>:>
>:> As I was saying about CCSDS, it IS inherently big-endian. Nothing
>:> alien, just a blue book that describes it. See section 1.6, of:
>:> http://public.ccsds.org/publications/archive/701x0b3.pdf
>
>: Please fix your newsreader. It is emitting an unconventional quote
>: marker, which fouls up much software everywhere.
>
> Sorry, but some of the software, the ones that grab email addresses
> within the messages, is worth breaking. OTOH, I'll make a point of
> trimming the "references", as that seems to work.
>
> Where does the error occur?

For example, xnews can intelligently reformat the quotes to
preserve a sensible right margin, iff the quote marks are
standard. Other systems can present the quotes in varying colors,
to tie them to the attribution lines.

You should not fool with the references. The algorithms for
trimming them are specified by some RFC.

ararghmai...@now.at.arargh.com

unread,
Jul 29, 2005, 5:38:21 PM7/29/05
to
On Fri, 29 Jul 2005 19:18:17 +0000 (UTC), echom...@polaris.umuc.edu
(Eric Chomko) wrote:

<snip>


>RS-232 is byte oriented even though it is a serial form of communciation.
>You get 1 start bit plus one or two stop bits, and 8 bits in between as a
>unit byte data.

Actually, you can have 5 or 6 or 7 or 8 data bits. Maybe 9, I don't
remember.

In fact, there is no reason that you couldn't have 16 or 32 bits of
data. However, at some point in the data size, you will run into
clocking problems.

>It is not bit oriented, nor has any notion of endian-ness
>as a single byte can't be interprtted as either big or little endian by
>itself.

AFAIK, RS232 is a bit oriented protocol, that just happens to mostly
get used for transmitting byte data.

Joe Pfeiffer

unread,
Jul 29, 2005, 5:08:30 PM7/29/05
to
echom...@polaris.umuc.edu (Eric Chomko) writes:

If you're the person trying to translate that bit stream into bytes,
or trying to put bytes out onto that bit stream, it is most assuredly
little-endian.
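(A sketch of what that translation looks like in software, assuming the
common 8N1 framing; purely illustrative:)

```c
/* Expand one byte into a 10-element 8N1 async frame: a start bit (0),
   eight data bits sent least-significant first, and a stop bit (1). */
static void uart_frame_8n1(unsigned char byte, int bits[10])
{
    bits[0] = 0;                          /* start bit */
    for (int i = 0; i < 8; i++)
        bits[1 + i] = (byte >> i) & 1;    /* LSB hits the wire first */
    bits[9] = 1;                          /* stop bit */
}
```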
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
skype: jjpfeifferjr

Joe Pfeiffer

unread,
Jul 29, 2005, 5:07:40 PM7/29/05
to
echom...@polaris.umuc.edu (Eric Chomko) writes:

> Peter Flass (Peter...@Yahoo.com) wrote:
> : Jonathan Griffitts wrote:
>
> : > In article <1bfyu0y...@viper.cs.nmsu.edu>, Joe Pfeiffer writes
> : > . . .
> : >
> : >>Now I'm *really* curious -- what's going on that endianness is *that*
> : >>important? I can't remember every being in a situation where htonl
> : >>and friends couldn't cure endianess quickly and easily...
> : >
> : >
> : > I have run into people who are unaccountably passionate about
> : > endian-ness of their computers.
> : >
>
> : I've gotten used to little-endian, though I still prefer the "correct"
> : alternative. My annoyance with intel is that, while the bytes in a word
> : (dword, etc) are little-endian, the bits in a byte are documented as big
> : endian, so I sometimes have to draw a picture to determine where a
> : particular bit is located.
>
> Yep, the bit interpretation in a byte is big-endian. Why would you want
> your bytes to be opposite that?!

No, the least significant bits have lower bit numbers so it's little
endian. The least significant bytes in a word have lower byte
addresses. This is consistent, and little-endian. A big-endian bit
order would have bit 0 on the left side of the word, and bit 31 (if a
four-byte word) on the right side.

> Here, the sign bit of that 4 byte integer is bit 24 in the word! Say what?
> That as opposed to being in bit 0, or the leftmost bit. Makes more
> sense...

No, the sign bit is bit 31. Byte 0 has bits 0-7, byte 1 has bits
8-15, byte 2 has bits 16-23, and byte 3 has bits 24-31.
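In code form, the consistent little-endian numbering just described works
out like this (a sketch):

```c
/* With consistent little-endian numbering, bit k of a word lives in
   byte k/8 at position k%8, counting bytes from the lowest address. */
static int get_bit_le(const unsigned char *word, int k)
{
    return (word[k / 8] >> (k % 8)) & 1;
}
```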

mensa...@aol.com

unread,
Jul 29, 2005, 6:14:42 PM7/29/05
to

ararghmai...@NOW.AT.arargh.com wrote:
> On Fri, 29 Jul 2005 19:18:17 +0000 (UTC), echom...@polaris.umuc.edu
> (Eric Chomko) wrote:
>
> <snip>
> >RS-232 is byte oriented even though it is a serial form of communciation.
> >You get 1 start bit plus one or two stop bits, and 8 bits in between as a
> >unit byte data.
> Actually, you can have 5 or 6 or 7 or 8 data bits. Maybe 9, I don't
> remember.

Doesn't RS-232 define the signal levels, control signals, protocols
and connectors but not the data?

>
> In fact, there is no reason that you couldn't have 16 or 32 bits of
> data. However, at some point in the data size, you will run into
> clocking problems.

There is, of course, synchronous as well as asynchronous RS-232. If I
remember correctly, all that start/stop bit nonsense is eliminated
in synchronous modems.

ararghmai...@now.at.arargh.com

unread,
Jul 29, 2005, 6:21:42 PM7/29/05
to
On 29 Jul 2005 15:14:42 -0700, "mensa...@aol.com"
<mensa...@aol.com> wrote:

>
>ararghmai...@NOW.AT.arargh.com wrote:
>> On Fri, 29 Jul 2005 19:18:17 +0000 (UTC), echom...@polaris.umuc.edu
>> (Eric Chomko) wrote:
>>
>> <snip>
>> >RS-232 is byte oriented even though it is a serial form of communciation.
>> >You get 1 start bit plus one or two stop bits, and 8 bits in between as a
>> >unit byte data.
>> Actually, you can have 5 or 6 or 7 or 8 data bits. Maybe 9, I don't
>> remember.
>
>Doesn't RS-232 define the signal levels, control signals, protocols
>and connectors but not the data?

I would have to find a copy of the spec, and read it.

>
>>
>> In fact, there is no reason that you couldn't have 16 or 32 bits of
>> data. However, at some point in the data size, you will run into
>> clocking problems.
>
>There is, of course, synchronous as well as asynchronous RS-232. If I
>remember correctly, all that start/stop bit nonsense is eliminated
>in synchronous modems.

Direct wire, also.

Correct. However, the rules for clocking data are somewhat changed,
IIRC. Also, if there is no data to be sent, the transmitter has to
provide filler data.

<snip>

Charlie Gibbs

unread,
Jul 29, 2005, 7:35:09 PM7/29/05
to
In article <1b8xzp5...@viper.cs.nmsu.edu>, pfei...@cs.nmsu.edu
(Joe Pfeiffer) writes:

> echom...@polaris.umuc.edu (Eric Chomko) writes:
>
>> Yep, the bit interpretation in a byte is big-endian. Why would you
>> want your bytes to be opposite that?!
>
> No, the least significant bits have lower bit numbers so it's little
> endian. The least significant bytes in a word have lower byte
> addresses. This is consistent, and little-endian. A big-endian bit
> order would have bit 0 on the left side of the word, and bit 31 (if a
> four-byte word) on the right side.

Beware of bit number assignments! IBM 360 documentation numbers bits
from high to low. Check the POO - in a 32-bit word, the high-order
(sign) bit is bit 0, and the low-order bit is bit 31. In a 16-bit
half-word, the low-order bit is bit 15. Et cetera.
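So when translating an S/360-style bit number into a mask value, you count
from the opposite end; a sketch:

```c
#include <stdint.h>

/* S/360 documentation numbers the bits of a 32-bit word from the
   high-order end: bit 0 is the sign bit, bit 31 the low-order bit. */
static uint32_t s360_bit_mask(int n)
{
    return (uint32_t)1 << (31 - n);
}
```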

--
/~\ cgi...@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!

Charlie Gibbs

unread,
Jul 29, 2005, 7:39:04 PM7/29/05
to
In article <GTwGe.42215$zY4....@tornado.ohiordc.rr.com>,
for...@dev.nul (Colonel Forbin) writes:

> In article <ot5le112quus5doc0...@4ax.com>,
> <ararghmai...@NOW.AT.arargh.com> wrote:
>
>> On Fri, 29 Jul 2005 16:06:30 -0400, rpl
>> <plinnan...@NOSPAMyahoo.com> wrote:
>>
>>> Eric Chomko wrote:
>>>
>>>> CBFalconer (cbfal...@yahoo.com) wrote:
>>>
>>>>: Please fix your newsreader. It is emitting an unconventional
>>>>: quote marker, which fouls up much software everywhere.
>>>>
>>>> Sorry, but some of the software, the ones that grab email addresses
>>>> within the messages, is worth breaking. OTOH, I'll make a point of
>>>> trimming the "references", as that seems to work.
>>>>
>>>> Where does the error occur?
>>>
>>> I think CBF's referring to the ":" being used as a quote marker as
>>> opposed to ">"; I don't think it's fouling up my machine though it
>>> makes quotes a bit harder to parse.
>>
>> After a while, I tend to not read messages with non standard quote
>> markers. It gets too hard to tell who said what when.
>

> Not all that much unless they're really bizarre like doing >|> for >>.
>
> The worst are people who start a post with "you said..." and don't
> quote *anything*.

No, the worst are top-posters. :-)

Peter Flass

unread,
Jul 29, 2005, 8:32:30 PM7/29/05
to
Roger Ivie wrote:
> On 2005-07-28, Peter Flass <Peter...@Yahoo.com> wrote:
>
>>Another intel stupidity. Little-endian within the number, but
>>big-endian within a byte, thus the digits appear:
>> 2-1 | 4-3 | 6-5 ...
>>At least they could have been consistent!
>
>
> That's not a problem if you print your dumps out right to left.

I don't see how that's relevant. 123,456,789 is stored as
21 43 65 87 09 00 00 00 00 *0, where '*' is the sign, '80'x or '00'x. I
guess they're consistent with the bit ordering, where the leftmost bit
('80'x) is the high-order, but it's *really* not user-friendly.

Charles Richmond

unread,
Jul 30, 2005, 3:37:14 AM7/30/05
to
Charlie Gibbs wrote:
>
> In article <GTwGe.42215$zY4....@tornado.ohiordc.rr.com>,
> for...@dev.nul (Colonel Forbin) writes:
>
> > In article <ot5le112quus5doc0...@4ax.com>,
> > <ararghmai...@NOW.AT.arargh.com> wrote:
> >
> >> On Fri, 29 Jul 2005 16:06:30 -0400, rpl
> >> <plinnan...@NOSPAMyahoo.com> wrote:
> >>
> >>> Eric Chomko wrote:
> >>>
> >>>> CBFalconer (cbfal...@yahoo.com) wrote:
> >>>
> >>>>: Please fix your newsreader. It is emitting an unconventional
> >>>>: quote marker, which fouls up much software everywhere.
> >>>>
> >>>> Sorry, but some of the software, the ones that grab email addresses
> >>>> within the messages, is worth breaking. OTOH, I'll make a point of
> >>>> trimming the "references", as that seems to work.
> >>>>
> >>>> Where does the error occur?
> >>>
> >>> I think CBF's referring to the ":" being used as a quote marker as
> >>> opposed to ">"; I don't think it's fouling up my machine though it
> >>> makes quotes a bit harder to parse.
> >>
> >> After a while, I tend to not read messages with non standard quote
> >> markers. It gets too hard to tell who said what when.
> >
> > Not all that much unless they're really bizarre like doing >|> for >>.
> >
> > The worst are people who start a post with "you said..." and don't
> > quote *anything*.
>
> No, the worst are top-posters. :-)
>
No, the worst are spammers who top-post and mis-attribute things...

--
+----------------------------------------------------------------+
| Charles and Francis Richmond It is moral cowardice to leave |
| undone what one perceives right |
| richmond at plano dot net to do. -- Confucius |
+----------------------------------------------------------------+

Charles Richmond

unread,
Jul 30, 2005, 3:40:47 AM7/30/05
to
CBFalconer wrote:
>
> Charlie Gibbs wrote:
> >
> > In article <42e869f4...@news.usenetzone.com>,
> > jsa...@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
> >
> > > Decimal, of course, is *better* than binary, because it makes it
> > > easier to explain how the computer works to someone who never worked
> > > with one before; thus, it is good for the same reason that big-endian
> > > is good; it makes sense to the naive person.
> >
> > Ah, but are the decimal numbers stored big-endian or little-endian?
> > If they're packed two digits per byte, will we have the NUXI problem?
> >
> > And, most importantly - is it time to discuss excess-3 yet?
>
> Works very well with nines-complement and one bit serial adders.
> Four inverters in the bit lines to the multiplexed nixie drivers
> allow proper display of all signs of numbers. All embodied in:
>
> <http://cbfalconer.home.att.net/firstpc/>
>
Hey, decimal was good enough for ENIAC, and where would we all be
without ENIAC??? Also let's *not* forget the CADET. Can you say
sixteen-twenty??? Sure...sure you can.

(It's a beautiful day in the neighborhood.)

Charles Richmond

unread,
Jul 30, 2005, 3:44:37 AM7/30/05
to
CBFalconer wrote:
>
> Anne & Lynn Wheeler wrote:
> >
> ... snip ...

> >
> > i've observed my first personal computer was a 360/30 ... the univ.
> > would shutdown the machine room over the weekend and i got the
> > whole place to myself from 8am sat. until 8am monday (it sometimes
> > made it a little difficult going to monday classes after being up
> > for 48hrs straight).
> >
> > in anycase, the 360/30 personal computer had 64kbytes of memory
> > ... 2**16 ... and nearly 40 years later ... i have a personal
> > computer with 4gbytes memory ... 2**32 ... an increase of 2**16 in
> > memory size over a period of nearly 40 years.
>
Yeah, and *you* are forty years older. Damn!!! I knew there was a
trick here!!!
>
> You forget the factors for selling price, memory cycle time, disk
> storage space, i/o speed. I could make a fortune with a time
> machine.
>
IMHO one would have to be pretty dense *not* to make a fortune
with a time machine. If I could just *look* ahead and see what
the price of pork bellies are next month, that might be enough.

CBFalconer

unread,
Jul 30, 2005, 5:23:20 AM7/30/05
to
Charles Richmond wrote:
> CBFalconer wrote:
>>
... snip ...

>>
>> You forget the factors for selling price, memory cycle time,
>> disk storage space, i/o speed. I could make a fortune with a
>> time machine.
>>
> IMHO one would have to be pretty dense *not* to make a fortune
> with a time machine. If I could just *look* ahead and see what
> the price of pork bellies are next month, that might be enough.

You want the expensive, buggy, and complicated anticipatory
machine. I am willing to settle for a backscanning type.


Charles Richmond

unread,
Jul 30, 2005, 8:27:32 PM7/30/05
to
CBFalconer wrote:
>
> Charles Richmond wrote:
> > CBFalconer wrote:
> >>
> ... snip ...
> >>
> >> You forget the factors for selling price, memory cycle time,
> >> disk storage space, i/o speed. I could make a fortune with a
> >> time machine.
> >>
> > IMHO one would have to be pretty dense *not* to make a fortune
> > with a time machine. If I could just *look* ahead and see what
> > the price of pork bellies are next month, that might be enough.
>
> You want the expensive, buggy, and complicated anticipatory
> machine. I am willing to settle for a backscanning type.
>
Same difference. We know what happened to stock prices over the
past few decades. Just go back to before a stock began to soar,
and buy a boatload of shares. QED: Same result.

CBFalconer

unread,
Jul 30, 2005, 9:32:25 PM7/30/05
to
Charles Richmond wrote:
> CBFalconer wrote:
>> Charles Richmond wrote:
>>> CBFalconer wrote:
>>>>
>> ... snip ...
>>>>
>>>> You forget the factors for selling price, memory cycle time,
>>>> disk storage space, i/o speed. I could make a fortune with a
>>>> time machine.
>>>>
>>> IMHO one would have to be pretty dense *not* to make a fortune
>>> with a time machine. If I could just *look* ahead and see what
>>> the price of pork bellies are next month, that might be enough.
>>
>> You want the expensive, buggy, and complicated anticipatory
>> machine. I am willing to settle for a backscanning type.
>>
> Same difference. We know what happened to stock prices over the
> past few decades. Just go back to before a stock began to soar,
> and buy a boatload of shares. QED: Same result.

Now it is time to get into the discussion of the stability of
timelines, presence of alternate lines etc. and the effects of such
on machines and vice-versa. No fallacies allowed.

More useful would be URLs for time-machine vendors.

John Savard

unread,
Jul 31, 2005, 1:20:00 AM7/31/05
to
On Sat, 30 Jul 2005 00:40:47 -0700, Charles Richmond
<rich...@comcast.net> wrote, in part:

>Hey, decimal was good enough for ENIAC, and where would be all be
>without ENIAC??? Also let's *not* forget the CADET. Can you say
>sixteen-twenty??? Sure...sure you can.

Aye, that was proper sixteen-twenty, that was! Oops... that's
fourteen-twenty that was the vintage year.

But remember, it was only the less expensive model that couldn't add.
The fancier model only needed the _multiplication_ table in memory.

Polymath

unread,
Jul 31, 2005, 7:33:51 AM7/31/05
to
Reminds one of the reputed derivation of Britland's indigenous
computers from the former ICL.....

1900 was when the hardware was designed.

2900 is when the operating system will be bug-free.

"John Savard" <jsa...@excxn.aNOSPAMb.cdn.invalid> wrote in message
news:42ec5f0...@news.usenetzone.com...

jmfb...@aol.com

unread,
Jul 31, 2005, 6:28:26 AM7/31/05
to
In article <42EC1AF4...@comcast.net>,

Charles Richmond <rich...@comcast.net> wrote:
>CBFalconer wrote:
>>
>> Charles Richmond wrote:
>> > CBFalconer wrote:
>> >>
>> ... snip ...
>> >>
>> >> You forget the factors for selling price, memory cycle time,
>> >> disk storage space, i/o speed. I could make a fortune with a
>> >> time machine.
>> >>
>> > IMHO one would have to be pretty dense *not* to make a fortune
>> > with a time machine. If I could just *look* ahead and see what
>> > the price of pork bellies are next month, that might be enough.
>>
>> You want the expensive, buggy, and complicated anticipatory
>> machine. I am willing to settle for a backscanning type.
>>
>Same difference. We know what happened to stock prices over the
>past few decades. Just go back to before a stock began to soar,
>and buy a boatload of shares. QED: Same result.

Nope. Think of all the extra data that now has to be stored
over those 40 years. For all you know, some other company
would have sold the gear to store the bits that kept recording
his stock ownership. The other ripples caused by his bias would
be even more interesting.

/BAH

Subtract a hundred and four for e-mail.

grey...@gmaildo.ttocom

unread,
Jul 31, 2005, 11:11:16 AM7/31/05
to
On Sat, 30 Jul 2005 17:27:32 -0700, Charles Richmond wrote:
> CBFalconer wrote:
>>
>> Charles Richmond wrote:
>> > CBFalconer wrote:
>> >>
>> ... snip ...
>> >>
>> >> You forget the factors for selling price, memory cycle time,
>> >> disk storage space, i/o speed. I could make a fortune with a
>> >> time machine.
>> >>
>> > IMHO one would have to be pretty dense *not* to make a fortune
>> > with a time machine. If I could just *look* ahead and see what
>> > the price of pork bellies are next month, that might be enough.
>>
>> You want the expensive, buggy, and complicated anticipatory
>> machine. I am willing to settle for a backscanning type.
>>
> Same difference. We know what happened to stock prices over the
> past few decades. Just go back to before a stock began to soar,
> and buy a boatload of shares. QED: Same result.
>

Wouldn't that cause the stock to soar?... Thereby changing the future,
and leaving you exposed (as was Stuart?) to insider dealing.

Much, much, better, is to start a rumor on the style of `Stock A is in
trouble'[1]. Buy the stock when it falls, sell it then when it recovers.
There MAY be a problem if that company IS in trouble, but trying to
bluff its way out. Then the rumor will cause its creditors to check
their securities, and then they will all arrive to get in for cash
before the company collapses, thereby collapsing the company. I knew
of one company that got used to its suppliers holding cheques for a
while, for tax and other reasons. They got dependent on this for cheap
credit, and when a rumor started, goodbye. Now one person worked for
the meanest, greediest financier around. No, not W. Gates, harder
still. Friday afternoon was taken up by checking where money was owed
and due.

--
greymaus


Walter Bushell

unread,
Jul 31, 2005, 10:38:08 PM7/31/05
to
In article <42e869f4...@news.usenetzone.com>,
jsa...@excxn.aNOSPAMb.cdn.invalid (John Savard) wrote:
<snip>

> Decimal, of course, is *better* than binary, because it makes it easier
> to explain how the computer works to someone who never worked with one
> before; thus, it is good for the same reason that big-endian is good; it
> makes sense to the naive person.

<snip>

Binary logic is much simpler.

But showing the hardware for simple arithmetic in binary is _much_
simpler, imagine the nand (or nor) gates for a packed decimal add or
<shudder> a packed decimal divide.

In any case the quantum physics behind semiconductor logic is still a
little erudite. And even vacuum tubes depend on quantum effects.

--
Guns don't kill people; automobiles kill people.

John Savard

unread,
Aug 1, 2005, 12:27:22 AM8/1/05
to
On Sun, 31 Jul 2005 12:33:51 +0100, "Polymath" <inv...@invalid.invalid>
wrote, in part:

>Reminds one of the reputed derivation of Britland's indigenous
>computers from the former ICL.....
>
>1900 was when the hardware was designed.
>
>2900 is when the operating system will be bug-free.

You realize, of course, that the year 1420 wasn't from the Christian
era, though...

CBFalconer

unread,
Aug 1, 2005, 12:53:12 AM8/1/05
to
Walter Bushell wrote:
>
... snip ...

>
> But showing the hardware for simple arithmetic in binary is _much_
> simpler, imagine the nand (or nor) gates for a packed decimal add
> or <shudder> a packed decimal divide.

That's what excess-3 coding is for.

--
Chuck F (cbfal...@yahoo.com) (cbfal...@worldnet.att.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net> USE worldnet address!

Steve O'Hara-Smith

unread,
Jul 31, 2005, 3:29:59 AM7/31/05
to
On Sat, 30 Jul 2005 17:27:32 -0700
Charles Richmond <rich...@comcast.net> wrote:

> Same difference. We know what happened to stock prices over the
> past few decades. Just go back to before a stock began to soar,
> and buy a boatload of shares. QED: Same result.

One problem - What do you use to buy said boatload of shares ?
You need wealth at the time of purchase for which even cash in circulation
today will not suffice (except perhaps for picking up a big block of
some dotcom survivor at the low point).

--
C:>WIN | Directable Mirror Arrays
The computer obeys and wins. | A better way to focus the sun
You lose and Bill collects. | licences available see
| http://www.sohara.org/

Charles Richmond

unread,
Aug 1, 2005, 8:27:24 AM8/1/05
to
Steve O'Hara-Smith wrote:
>
> On Sat, 30 Jul 2005 17:27:32 -0700
> Charles Richmond <rich...@comcast.net> wrote:
>
> > Same difference. We know what happened to stock prices over the
> > past few decades. Just go back to before a stock began to soar,
> > and buy a boatload of shares. QED: Same result.
>
> One problem - What do you use to buy said boatload of shares ?
> You need wealth at the time of purchase for which even cash in circulation
> today will not suffice (except perhaps for picking up a big block of
> some dotcom survivor at the low point).
>
Beg a couple of bucks from a passer-by. Make a few propitious bets
on the right horses. Bingo!!! Instant wealth. A small bet can let
you win the trifecta, which can be hundreds of thousands. Even just
a long-shot horse can turn two dollars into a few hundred.

K Williams

unread,
Aug 1, 2005, 1:13:03 PM8/1/05
to
In article <1082.71T4...@kltpzyxm.invalid>,
cgi...@kltpzyxm.invalid says...

> In article <1b8xzp5...@viper.cs.nmsu.edu>, pfei...@cs.nmsu.edu
> (Joe Pfeiffer) writes:
>
> > echom...@polaris.umuc.edu (Eric Chomko) writes:
> >
> >> Yep, the bit interpretation in a byte is big-endian. Why would you
> >> want your bytes to be opposite that?!
> >
> > No, the least significant bits have lower bit numbers so it's little
> > endian. The least significant bytes in a word have lower byte
> > addresses. This is consistent, and little-endian. A big-endian bit
> > order would have bit 0 on the left side of the word, and bit 31 (if a
> > four-byte word) on the right side.
>
> Beware of bit number assignments! IBM 360 documentation numbers bits
> from high to low. Check the POO - in a 32-bit word, the high-order
> (sign) bit is bit 0, and the low-order bit is bit 31. In a 16-bit
> half-word, the low-order bit is bit 15. Et cetera.
>
That's true of PowerPC too. My first project in IBM got all the bits
backwards and upside down. No one told me IBM numbered the busses
backwards, just "here's the spec, go design this clock control thingy".
'Twas a good thing the interface card was wire-wrapped MECL 10K (with
dual phase outputs on the receivers/drivers). Hardly anyone was the
wiser. ;-)

--
Keith

K Williams

unread,
Aug 1, 2005, 1:27:58 PM8/1/05
to
In article <20050731082959....@eircom.net>,
ste...@eircom.net says...

> On Sat, 30 Jul 2005 17:27:32 -0700
> Charles Richmond <rich...@comcast.net> wrote:
>
> > Same difference. We know what happened to stock prices over the
> > past few decades. Just go back to before a stock began to soar,
> > and buy a boatload of shares. QED: Same result.
>
> One problem - What do you use to buy said boatload of shares ?
> You need wealth at the time of purchase for which even cash in circulation
> today will not suffice (except perhaps for picking up a big block of
> some dotcom survivor at the low point).
>
Short at the top of the dot-bomb bubble.

--
Keith

Eric Chomko

unread,
Aug 1, 2005, 2:52:55 PM8/1/05
to
Joe Pfeiffer (pfei...@cs.nmsu.edu) wrote:
: echom...@polaris.umuc.edu (Eric Chomko) writes:

: > CBFalconer (cbfal...@yahoo.com) wrote:
: > : Eric Chomko wrote:
: > : >
: > : > Joe Pfeiffer (pfei...@cs.nmsu.edu) wrote:
: > : >
: > : ... snip ...
: > : >

: > : >> Now I'm *really* curious -- what's going on that endianness is
: > : >> *that* important? I can't remember every being in a situation
: > : >> where htonl and friends couldn't cure endianess quickly and
: > : >> easily...
: > : >

: > : > Think of an unsigned char array that is 1024 bytes long. It was
: > : > once a bit stream that is now a byte stream. I then process every
: > : > field by using a memcpy from the unsigned char array into either
: > : > char arrays (ASCII) or into 2, 4 or 8 byte integers.
: > : >
: > : > In a big endian computer this is straight forward as all is laid
: > : > out left to right. In a little endian computer you must first
: > : > reverse all the bytes of each integer before the move. That
: > : > reverse process, though it does work, slows down processing
: > : > noticeably after about 1000 to 10000 or more frames (1024 byte
: > : > array).
: >
: > : Now consider that bit stream that it once was. If it went over a
: > : RS232 line, or a modem that signals individual bits, or many other
: > : systems, it went least significant bit first. That is little
: > : endian from where I stand.
: >
: > Actually it is a transfer frame from a remote sensing satellite: an
: > 8192-bit stream (1024 bytes), of which 892 bytes are actual data. Four
: > bytes in the front are a frame sync, and a 128-byte trailer is used as
: > error correction symbols. Like internet packets, this data has all
: > sorts of headers as well, so the real data portion is even smaller than
: > the 892 bytes, but that is another story.
: >
: > RS-232 is byte oriented even though it is a serial form of communication.


: > You get 1 start bit plus one or two stop bits, and 8 data bits in
: > between as a unit. It is not bit oriented, nor has it any notion of
: > endianness, as a single byte can't be interpreted as either big or
: > little endian by itself.

: If you're the person trying to translate that bit stream into bytes,
: or trying to put bytes out onto that bit stream, it is most assuredly
: little-endian.

It is any way you want it to be.

I think little endian likes to label the MSB with a higher number (i.e. 31
instead of 0).

Eric

: --

Eric Chomko

unread,
Aug 1, 2005, 2:53:46 PM8/1/05
to
CBFalconer (cbfal...@yahoo.com) wrote:
: Eric Chomko wrote:
: > CBFalconer (cbfal...@yahoo.com) wrote:
: >: Eric Chomko wrote:
: >:> Jonathan Griffitts (jgrif...@spamcop.net) wrote:
: >:>
: >: ... snip ...
: >:>
: >:>: Maybe you're speaking of processing an alien bit stream that was
: >:>: generated with different endian-ness than the CPU you're running
: >:>: on? That's a whole different issue.
: >:>
: >:> As I was saying about CCSDS, it IS inherently big-endian. Nothing
: >:> alien, just a blue book that describes it. See section 1.6, of:
: >:> http://public.ccsds.org/publications/archive/701x0b3.pdf
: >
: >: Please fix your newsreader. It is emitting an unconventional quote
: >: marker, which fouls up much software everywhere.
: >
: > Sorry, but some of the software, the ones that grab email addresses
: > within the messages, is worth breaking. OTOH, I'll make a point of
: > trimming the "references", as that seems to work.
: >
: > Where does the error occur?

: For example, xnews can intelligently reformat the quotes to
: preserve a sensible right margin, iff the quote marks are
: standard. Other systems can present the quotes in varying colors,
: to tie them to the attribution lines.

: You should not fool with the references. The algorithms for
: trimming them are specified by some RFC.

I use 'rtin', and don't believe in GUI-based newsreaders.

Eric

: --

: "If you want to post a followup via groups.google.com, don't use
: the broken "Reply" link at the bottom of the article. Click on
: "show options" at the top of the article, then click on the
Eric Chomko

unread,
Aug 1, 2005, 2:57:28 PM8/1/05
to
CBFalconer (cbfal...@yahoo.com) wrote:
: Eric Chomko wrote:
: >
: ... snip ...

: >
: > RS-232 is byte oriented even though it is a serial form of
: > communication. You get 1 start bit plus one or two stop bits, and
: > 8 data bits in between as a unit. It is not bit oriented,
: > nor has it any notion of endianness, as a single byte can't be
: > interpreted as either big or little endian by itself.

: You wouldn't say that if you had ever watched a serial line with a
: scope.


I have, and all I have seen are voltage pulse waveforms; even RS-232 swings
between -12 and +12 V, and associating bits with those levels is arbitrary.

Eric Chomko

unread,
Aug 1, 2005, 3:01:11 PM8/1/05
to
ararghmai...@NOW.AT.arargh.com wrote:
: On Fri, 29 Jul 2005 19:18:17 +0000 (UTC), echom...@polaris.umuc.edu
: (Eric Chomko) wrote:

: <snip>


: >RS-232 is byte oriented even though it is a serial form of communication.
: >You get 1 start bit plus one or two stop bits, and 8 bits in between as a
: >unit byte data.

: Actually, you can have 5 or 6 or 7 or 8 data bits. Maybe 9, I don't
: remember.

I'm only familiar with the 8-bit type, where 7 bits are data and 1 is
parity.

: In fact, there is no reason that you couldn't have 16 or 32 bits of
: data. However, at some point in the data size, you will run into
: clocking problems.

I guess sender and receiver have to agree.

: >It is not bit oriented, nor has it any notion of endianness,
: >as a single byte can't be interpreted as either big or little endian by
: >itself.

: AFAIK, RS232 is a bit oriented protocol, that just happens to mostly
: get used for transmitting byte data.

Which makes sense given that 8 bit bytes are universal these days. And
before I get a lectured on 36 bit and others that aren't divisible by 8,
been there done that.

Eric

: --
: ArarghMail507 at [drop the 'http://www.' from ->] http://www.arargh.com
: BCET Basic Compiler Page: http://www.arargh.com/basic/index.html

: To reply by email, remove the garbage from the reply address.

ararghmai...@now.at.arargh.com

unread,
Aug 1, 2005, 5:50:50 PM8/1/05
to
On Mon, 1 Aug 2005 19:01:11 +0000 (UTC), echom...@polaris.umuc.edu
(Eric Chomko) wrote:

>I'm only familiar with the 8-bit type, where 7 bits are data and 1 is
>parity.

I used to use that format, back in the 300 baud acoustic coupler days.
Ever since error-correcting modems came about, I use 8N1 rather than
7P1.

>: In fact, there is no reason that you couldn't have 16 or 32 bits of
>: data. However, at some point in the data size, you will run into
>: clocking problems.
>
>I guess sender and receiver have to agree.

It helps. But you can determine what the other side is sending by
inspection of the signal. Remember typing AT<cr> several times for a
modem? Some modems would do an autosense on that.


>
>: >It is not bit oriented, nor has it any notion of endianness,
>: >as a single byte can't be interpreted as either big or little endian by
>: >itself.
>: AFAIK, RS232 is a bit oriented protocol, that just happens to mostly
>: get used for transmitting byte data.
>
>Which makes sense given that 8 bit bytes are universal these days. And
>before I get a lectured on 36 bit and others that aren't divisible by 8,
>been there done that.

--
ArarghMail508 at [drop the 'http://www.' from ->] http://www.arargh.com

David Scheidt

unread,
Aug 1, 2005, 6:22:10 PM8/1/05
to
Eric Chomko <echom...@polaris.umuc.edu> wrote:
:
:I use 'rtin', and don't believe in GUI-based newsreaders.
:

Just a conspiracy of developers, you mean?
