
png data format


ruben safir

Dec 6, 2016, 9:05:49 AM
Hello

I'm having trouble with this input of data from a PNG image. The specification says that "chunks" have a 4-byte field that is the length of the attached data segment. I tried to read the length in for a chunk whose length is 13, which was confirmed in a dump of the file (octal byte values; 015 octal = 13 decimal):

0000000 211 120 116 107 015 012 032 012 -->>000 000 000 015<<-- 111 110 104 122
0000010 000 000 041 215 000 000 007 165 010 006 000 000 001 206 055 074
0000020 336 000 000 000 004 147 101 115 101 000 000 261 217 013 374 141

I am storing the data in a uint32_t variable using the following code, but the value keeps showing up as a huge number, 218103808, which happens to be the number that iostream prints for the value of the whole chunk.


done reading header



Sizeof Chunk 4
Raw Chunk Number 0: 218103808
***LENGTH****
Length value => 218103808
Sizeof Byte 1
Character 0::
^@
Byte 0::
0
Character 1::
^@
Byte 1::
0
Character 2::
^@
Byte 2::
0
Character 3::

Byte 3::
13


And yet, when I break it down by single bytes, it returns 0 0 0 13, which is correct.
ddd seems to say the same thing, and I don't know why. When evaluated as 4 bytes,
you get this large number, but when you evaluate each byte separately, it
comes out right.

The code snippet I'm using looks like this

in the .h file
#ifndef PNGPRJ
#define PNGPRJ
#include <inttypes.h>
namespace png_proj{
typedef uint32_t CHUNK;



In the .cpp file
void Image::read_chunk()
{
    char * cur = get_index();
    CHUNK * tmp = reinterpret_cast<CHUNK *>(cur);
    std::cout << std::endl << "Sizeof Chunk " << sizeof(*tmp) << std::endl;
    for(int j = 0; j < 4; j++){
        std::cout << "Raw Chunk Number " << j << ": " << *tmp << std::endl;

        switch ( j ) {
            case 0:
                std::cout << "***LENGTH****" << std::endl;
                set_length(static_cast<int32_t>(*tmp));
                std::cout << "Length value => " << static_cast<int>(*tmp) << std::endl;
                break;

            case 1:
                std::cout << "***TYPE****" << std::endl;
                set_type(static_cast<int32_t>(*tmp));
                break;

            case 2:
            {
                std::cout << "***DATA****" << std::endl;
                unsigned long int l = static_cast<unsigned long int>(get_length());
                std::cout << "buffer size should be " << get_length() << std::endl;
                int8_t * buffer = new int8_t[l];
                std::cout << "buffer element size is " << *buffer << std::endl;
                std::cout << "buffer size is " << l << std::endl;
                for(unsigned int k = 0; k < get_length(); k++){
                    buffer[k] = static_cast<int8_t>(tmp[k]);
                    std::cout << "data " << *buffer << std::endl;
                }
                set_data(buffer);
            }
            break;

            case 3:
                std::cout << "***CRC****" << std::endl;
                set_crc(static_cast<int32_t>(*tmp));
                break;

            default:
                std::cout << "***NOMANDSLAND****" << std::endl;
                break;
        } /* ----- end switch ----- */

        char * tmp2 = reinterpret_cast<char *>(tmp); //reading each byte
        std::cout << "Sizeof Byte " << sizeof(*tmp2) << std::endl;
        //std::cout << "Mark ==>>" << __LINE__ << std::endl;
        for(int i = 0; i < 4; i++){
            std::cout << "Character " << i << "::" << std::endl << "\t" << *tmp2 << std::endl;
            std::cout << "Byte " << i << "::" << std::endl << "\t" << static_cast<unsigned long int>(*tmp2) << std::endl;
            tmp2++;
        }
        std::cout << std::endl;
        std::cout << std::endl;
        tmp++;
        cur = ( reinterpret_cast<char*>(tmp) );
    }
    set_index(cur);
}



I dug through libpng since this doesn't seem to be doing what I expected. They seem to set it up as a 4-byte array:

void /* PRIVATE */
png_push_read_chunk(png_structrp png_ptr, png_inforp info_ptr)
{
    png_uint_32 chunk_name;
#ifdef PNG_HANDLE_AS_UNKNOWN_SUPPORTED
    int keep; /* unknown handling method */
#endif

    /* First we make sure we have enough data for the 4-byte chunk name
     * and the 4-byte chunk length before proceeding with decoding the
     * chunk data. To fully decode each of these chunks, we also make
     * sure we have enough data in the buffer for the 4-byte CRC at the
     * end of every chunk (except IDAT, which is handled separately).
     */
    if ((png_ptr->mode & PNG_HAVE_CHUNK_HEADER) == 0)
    {
        png_byte chunk_length[4];
        png_byte chunk_tag[4];

        PNG_PUSH_SAVE_BUFFER_IF_LT(8)
        png_push_fill_buffer(png_ptr, chunk_length, 4);
        png_ptr->push_length = png_get_uint_31(png_ptr, chunk_length);
        png_reset_crc(png_ptr);
        png_crc_read(png_ptr, chunk_tag, 4);
        png_ptr->chunk_name = PNG_CHUNK_FROM_STRING(chunk_tag);
        png_check_chunk_name(png_ptr, png_ptr->chunk_name);
        png_ptr->mode |= PNG_HAVE_CHUNK_HEADER;
    }


I'm obviously not understanding something in what I'm evaluating here. So I'm wondering if anyone can shed light on this.
http://www.nylxs.com/docs/grad_school/parallel/src/png/png_proj.h
http://www.nylxs.com/docs/grad_school/parallel/src/png/png_proj.cpp
http://www.nylxs.com/docs/grad_school/parallel/src/png/main_png.cpp
http://www.nylxs.com/docs/grad_school/parallel/src/png/makefile

ruben

let.me.in


Ruben

Scott Lurndal

Dec 6, 2016, 10:10:43 AM
ruben safir <ru...@mrbrklyn.com> writes:
>Hello
>
>I'm having trouble with this imput of data from a PNG image. The specification says that "chunks" have a 4 byte field that is the length of the attached data segment. I tried to read the length in for a chunk that has a length of 13, which was confirmed in a hexdump
>
>0000000 211 120 116 107 015 012 032 012 -->>000 000 000 015<<-- 111 110 104 122
>0000010 000 000 041 215 000 000 007 165 010 006 000 000 001 206 055 074
>0000020 336 000 000 000 004 147 101 115 101 000 000 261 217 013 374 141
>
>I am storing the data in a uint32_t variable using the following code, but the value keeps showing up with a huge number 218103808 which happens to be the number that is evaluated by iostream for the value of the whole chunk
>

https://en.wikipedia.org/wiki/Endianness

ruben safir

Dec 6, 2016, 1:42:30 PM
that doesn't help

Scott Lurndal

Dec 6, 2016, 2:05:21 PM
And nobody here is obligated to help you - you should learn to
help yourself, and the link referenced above should be your starting
point.

ruben safir

Dec 6, 2016, 2:33:51 PM
no it is not, really. Like most Wikipedia articles it is written poorly
and leaves out coherent details. You're under no obligation to post if
you don't want to contribute. Being an ass isn't helpful though, and
treating me like I'm stupid makes me resentful.

Luuk

Dec 6, 2016, 2:51:16 PM
Basically the Wikipedia page explains what is stated in the docs here:
http://www.libpng.org/pub/png/spec/1.2/PNG-DataRep.html#DR.Integers-and-byte-order


David Brown

Dec 6, 2016, 3:41:35 PM
The Wikipedia article there is reasonably written, and full of useful
information. But you may not have made the connection as to why it is
relevant to your problem.

Numbers bigger than single bytes in computing can be stored in two basic
formats - big endian with the most significant byte first, and little
endian with the least significant byte first. Some processors use one
format, other processors use the other. Some file formats and protocols
use one format, others use the other. If the processor and the file
format do not match, then you need to convert when reading or writing
the format.

x86 uses little endian format, so 13 is stored as 0d 00 00 00 as a
32-bit integer. PNG, like many network-related formats, uses big
endian. So it stores 32-bit 13 as 00 00 00 0d. (Incidentally, use hex
for this sort of thing - octal had no place in computing outside of
"chmod" since the 1970's.)

Assuming you are trying to learn and understand this, rather than
copy-and-paste working code, then this should be enough to get you going.
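
To make that concrete, here is a minimal standalone sketch (an added illustration, not code from the thread) that reproduces both numbers: the four file bytes 00 00 00 0d, read as one uint32_t on a little-endian x86, come back as 0x0d000000 = 218103808, while assembling the bytes most-significant-byte-first gives 13 on any host:

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    // The length field of the IHDR chunk, as it sits in the file (big-endian).
    const unsigned char field[4] = {0x00, 0x00, 0x00, 0x0d};

    uint32_t raw;
    std::memcpy(&raw, field, 4);            // same value the reinterpret_cast read produces
    std::cout << raw << "\n";               // 218103808 (0x0d000000) on little-endian x86

    uint32_t be = (uint32_t(field[0]) << 24) | (uint32_t(field[1]) << 16)
                | (uint32_t(field[2]) << 8)  |  uint32_t(field[3]);
    std::cout << be << "\n";                // 13, regardless of host byte order
}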

ruben safir

Dec 6, 2016, 5:12:29 PM
On 12/06/2016 03:41 PM, David Brown wrote:
> The Wikipedia article there is reasonably written

no it isn't. But I'll show you a means of properly answering a question
like this

http://www.nylxs.com/messages.html?id=543540&archive_learn=2016-12-01


All it takes is a basic assumption that you're not talking to an idiot.

Specifically, that Wikipedia article, and really they all suck, doesn't
explain how the Intel hardware is set up or how to evaluate and learn
the problem-solving mechanism, so that one can learn to evaluate these
kinds of problems in the future.



ruben safir

Dec 6, 2016, 5:14:32 PM
On 12/06/2016 03:41 PM, David Brown wrote:
> x86 uses little endian format, so 13 is stored as 0d 00 00 00 as a
> 32-bit integer. PNG, like many network-related formats, uses big
> endian. So it stores 32-bit 13 as 00 00 00 0d. (Incidentally, use hex
> for this sort of thing - octal had no place in computing outside of
> "chmod" since the 1970's.)
>
> Assuming you are trying to learn and understand this, rather than
> copy-and-paste working code, then this should be enough to get you going.


thanks, excellent. What I don't understand, though, is why, when I set up
a loop and take it byte by byte, the order comes out correct. It gets 00 00
00 and then 0d

(which is 13)


Jorgen Grahn

Dec 6, 2016, 6:24:56 PM
I can't answer that, and I didn't read the code. However, treating
the file as a series of bytes /is/ the right thing to do, so it
doesn't surprise me if the result is correct. If the file looks like

f0 0d 12 34 00 00 00 0d 47 11
-----------

and the file format specification says "we store an integer in
big-endian form in the marked area", I'd read it using a function
similar to this one:

static unsigned get_bigendian32(const uint8_t* p)
{
    unsigned n = 0;
    n = (n<<8) | *p++;
    n = (n<<8) | *p++;
    n = (n<<8) | *p++;
    n = (n<<8) | *p++;
    return n;
}

(You also have to watch out for buffer overflows.)
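
Applied to the bytes from the dump at the top of the thread, a quick self-contained check (the helper is repeated so this compiles on its own; the driver is an added illustration, not part of the original post):

#include <cstdint>
#include <iostream>

static unsigned get_bigendian32(const uint8_t* p)
{
    unsigned n = 0;
    n = (n<<8) | *p++;
    n = (n<<8) | *p++;
    n = (n<<8) | *p++;
    n = (n<<8) | *p++;
    return n;
}

int main()
{
    // The 8 bytes that follow the PNG signature: length field and chunk type.
    const uint8_t hdr[8] = {0x00, 0x00, 0x00, 0x0d, 'I', 'H', 'D', 'R'};
    std::cout << get_bigendian32(hdr) << "\n";                  // 13
    std::cout << std::hex << get_bigendian32(hdr + 4) << "\n";  // 49484452, i.e. "IHDR"
}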

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Paavo Helde

Dec 6, 2016, 6:25:59 PM
What do you mean by "correct"? That the big-endian convention follows the
convention for hand-written numbers, at least in the Western cultural
tradition? This was probably the initial motivation behind the
big-endian format (?)

By the way, this is all explained on the Wikipedia page with graphs!

Paavo

ruben safir

Dec 6, 2016, 9:14:15 PM
On 12/06/2016 06:25 PM, Paavo Helde wrote:
> What you mean by "correct"? That the big-endian convention follows the
> convention for hand-written numbers, at least in the Western cultural
> tradition?

No it is correct because that is how 13 would be written

00 00 00 0d

> This was probably the initial motivation behind the
> big-endian format (?)
>
> By the way, this is all explained on the Wikipedia page with graphs!

it sucks, and so do its graphs. In fact, I'm at a point with Wikipedia
where I might just /dev/null the IP addresses to it and the DNS entries.


ruben safir

Dec 6, 2016, 9:15:26 PM

Scott Lurndal

Dec 7, 2016, 8:35:41 AM
I'd read it as a 32-bit int then byteswap it:

static inline uint32
swap32(uint32 value)
{
    __asm__ __volatile__ ("bswap %0": "=a"(value): "0"(value));
    return value;
}

or

static inline uint32
swap32(uint32 value)
{
    return __builtin_bswap32(value);
}
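
One possible way to wire that into the reader under discussion, sketched as an added illustration (read_png_length and the use of memcpy are assumptions, not code from this thread); copying the bytes into a local variable first sidesteps the alignment and aliasing questions raised later in the thread:

#include <cstdint>
#include <cstring>

// Repeating the builtin-based definition above so this fragment stands alone (GCC/Clang).
static inline uint32_t swap32(uint32_t value)
{
    return __builtin_bswap32(value);
}

// Illustrative helper: read the 4-byte big-endian length field at 'cur'.
// Only correct on a little-endian host; a portable reader would test
// endianness first, or simply assemble the bytes directly.
static uint32_t read_png_length(const char *cur)
{
    uint32_t raw;
    std::memcpy(&raw, cur, sizeof raw);   // copy the bytes exactly as stored in the file
    return swap32(raw);                   // 0x0d000000 -> 0x0000000d on x86
}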

Gareth Owen

Dec 7, 2016, 1:02:12 PM
ruben safir <ru...@mrbrklyn.com> writes:

> On 12/06/2016 06:25 PM, Paavo Helde wrote:
>> What you mean by "correct"? That the big-endian convention follows the
>> convention for hand-written numbers, at least in the Western cultural
>> tradition?
>
> No it is correct because that is how 13 would be written
>
> 00 00 00 0d

Sheesh.

Richard

Dec 7, 2016, 1:46:46 PM
[Please do not mail me a copy of your followup]

David Brown <david...@hesbynett.no> spake the secret code
<o277n6$7ak$1...@dont-email.me> thusly:

>[...] (Incidentally, use hex
>for this sort of thing - octal had no place in computing outside of
>"chmod" since the 1970's.)

As with chmod, it only has a place when groups of 3 bits are
significant.

You're right that it's awkward to express bytes as octal because
bytes are 8 bits, not 6 or 9.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

Scott Lurndal

Dec 7, 2016, 2:49:08 PM
legaliz...@mail.xmission.com (Richard) writes:
>[Please do not mail me a copy of your followup]
>
>David Brown <david...@hesbynett.no> spake the secret code
><o277n6$7ak$1...@dont-email.me> thusly:
>
>>[...] (Incidentally, use hex
>>for this sort of thing - octal had no place in computing outside of
>>"chmod" since the 1970's.)
>
>As with chmod, it only has a place when groups of 3 bits are
>significant.
>

Not to mention that chmod has supported human readable arguments
for the last two decades plus:

chmod u+w,g-w,o=x file

David Brown

Dec 7, 2016, 5:59:33 PM
On 06/12/16 23:12, ruben safir wrote:
> On 12/06/2016 03:41 PM, David Brown wrote:
>> The Wikipedia article there is reasonably written
>
> no it isn't. But I'll show you a means of properly answering a question
> like this
>
> http://www.nylxs.com/messages.html?id=543540&archive_learn=2016-12-01
>
>
> All it takes is a basic assumption that your not talking to an idiot.

I don't assume you are an idiot (though your posting style does not do
wonders for the impression you give). I assume you want to learn and
understand what you are doing - otherwise you would simply use a
pre-written png library.

>
> Specifically that wikipedea article, and really they all suck, don't
> explain how the intel hardware is set up and who to evaluate the and
> learn the problem solving mechanism, so that one can learn to evaluate
> these kinds of problems in the future.
>

The Wikipedia article is about Endianness, not Intel hardware, or png
file formats. It has a clear enough explanation about what endianness
means, a bit of history and examples, some reasons for choosing one
endianness type over another, and example code of a way to swap
endianness. What more do you want? A special section entitled "why
your png decoder is not working on an Intel cpu"?


David Brown

Dec 7, 2016, 6:04:45 PM
That /might/ be acceptable, assuming you have control of aligned or
unaligned accesses, as well as aliasing issues.

> then byteswap it:
>
> static inline uint32
> swap32(uint32 value)
> {
> __asm__ __volatile__ ("bswap %0": "=a"(value): "0"(value));
> return value;
> }
>
> or
>
> static inline uint32
> swap32(uint32 value)
> {
> return __builtin_bswap32(value);
> }
>

That's okay when you have something working and are looking for greater
optimisation - or if you are familiar enough with the endianness issues
that you are happy to jump straight to an endianness swap routine. But
one step at a time is best until the OP is confident in what he is doing
here.

ruben safir

Dec 7, 2016, 11:24:19 PM
On 12/07/2016 05:59 PM, David Brown wrote:
> would simply use a pre-written png library.


that's 100% true. But along with that, it means I would do a huge amount
of research first, through the dozen C++ and PNG texts I have and a
DuckDuckGo search, BEFORE posting.


ruben safir

Dec 7, 2016, 11:24:43 PM
On 12/07/2016 05:59 PM, David Brown wrote:
> clear enough explanation about what endianness means,


no, it doesn't. Maybe you can re-edit it.

Robert Wessel

Dec 8, 2016, 1:01:46 AM
On Wed, 7 Dec 2016 23:24:35 -0500, ruben safir <ru...@mrbrklyn.com>
wrote:

>On 12/07/2016 05:59 PM, David Brown wrote:
>> clear enough explanation about what endianness means,
>
>
>no, it doesn't. Maybe you can reedit it.


As a regular WP editor (although I don't believe I've ever edited that
particular article), I may be biased, but it looks like a pretty good
article to me. Clear, concise, complete and well referenced - you
really can't ask for much more (the article's "B" class rating
suggests that I'm not the only one with that opinion). The basic
concept is explained in the first introductory paragraph, and then
nicely illustrated in two illustrations in the immediately following
"Illustration" section.

OTOH, I do know what endianness is, which may be leading me to make
incorrect assumptions about the context in which someone without that
knowledge would be reading the article, and thus to overlook some
significant omissions. If you have some constructive criticism
regarding how the article failed you, I can certainly take a look at
improving it.

Gareth Owen

Dec 8, 2016, 2:26:14 AM
Maybe the problem is not with the article? Try this one.
https://simple.wikipedia.org/wiki/Endianness

Paavo Helde

Dec 8, 2016, 3:08:48 AM
"For each hard question there is a simple, easily understandable wrong
answer" - except that this article is IMO wrong, but still not easily
understandable. What is "hexadecimal data"? Is AB12 really encoded in 4
bits?

Why do they want to couple endianness with some binary data representation?
In a "simple pedia" like that I would stick to our common decimal
representation and just say that the number ten is written with two
digits, 1 and 0, which can be ordered either as 10 or 01. I would also
say something about Gulliver and the egg-eater wars and who won.





Alf P. Steinbach

Dec 8, 2016, 3:35:33 AM
Not everybody knows that Jonathan Swift wrote the original
recursion-poem, that Augustus De Morgan based his more well-known
variant on. So, first of all, Jonathan Swift → Augustus De Morgan.

Then there's the connection George Boole ← (friendly
article/book-publishing competitor with) Augustus DeMorgan → (private
math tutor to) Lady Augusta Ada → (coder and sort of secretary for)
Charles Babbage → first general programmable computer.

I remember getting almost angry when some American professor wrote an
article about the history of computers in Scientific American, and
managed to omit all that crucial English history. He started with
something about Napoleon, skipped the German/English part entirely, and
ended with the US developments after WWII. I think he even managed to
omit that John von Neumann was Hungarian, like, he was an American.

Not sure how much of this to include in an article about endianness, though.

But it would be nice with a discussion of the endianness of Babbage's
analytical engine.


Cheers!,

- Alf

Jorgen Grahn

Dec 8, 2016, 6:41:33 AM
Here he also has to know he's on a little-endian machine, and with
certain compilers. IMO, a high price to pay to avoid byte-level
reads.

There is ntohl() if you're on Unix.
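
For instance, a minimal sketch (assuming a POSIX system; ntohl() from <arpa/inet.h> converts a 32-bit value from network byte order, which is big-endian like PNG's fields, to host order):

#include <arpa/inet.h>
#include <cstdint>
#include <cstring>

// Illustrative only: turn the 4 length bytes, as stored in the file, into a host-order value.
static uint32_t length_from_file_bytes(const unsigned char *p)
{
    uint32_t raw;
    std::memcpy(&raw, p, sizeof raw);   // the bytes exactly as they appear in the file
    return ntohl(raw);                  // big-endian on disk -> host byte order
}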

>> static inline uint32
>> swap32(uint32 value)
>> {
>> __asm__ __volatile__ ("bswap %0": "=a"(value): "0"(value));
>> return value;
>> }
>>
>> or
>>
>> static inline uint32
>> swap32(uint32 value)
>> {
>> return __builtin_bswap32(value);
>> }
>>
>
> That's okay when you have something working and are looking for greater
> optimisation - or if you are familiar enough with the endianness issues
> that you are happy to jump straight to an endianness swap routine. But
> one step at a time is best until the OP is confident in what he is doing
> here.

Yes. Part of the point with my get_bigendian32() above is that it
shows[0] that there /are/ no 32-bit integers in binary files, not in
the C++ sense. There are just bytes, and your code is responsible for
the conversion (according to the rules set by whoever created the file
format).

/Jorgen

[0] Slightly exaggerated, perhaps.

Richard

Dec 8, 2016, 1:02:29 PM
[Please do not mail me a copy of your followup]

Paavo Helde <myfir...@osa.pri.ee> spake the secret code
<zvCdnRIGdMmViNTF...@giganews.com> thusly:

>"For each hard question there is a simple, easily understandable wrong
>answer" - except that this article is IMO wrong, but still not easily
>understandable. What is "hexadecimal data"? Is AB12 really encoded in 4
>bits?

What is hexadecimal data? Seriously? Are you unable to google or
use WP's search box?

Never mind that "hexadecimal" is linked right in the article.

Nowhere in that article could I find the data 0xAB12.

>Why they want to couple endianness with some binary data representation?

Because the discussion of endianness is meaningless without
discussing how things are stored as binary values in memory or on a
communication stream.

If you don't care how things are stored or transmitted in some binary
form, then you don't care about endianness.

BGB

Dec 8, 2016, 1:24:49 PM
it also exists on Windows if using Winsock.
though, it is not as good, as it is a function call into a DLL, so
faster options are possible.


>>> static inline uint32
>>> swap32(uint32 value)
>>> {
>>> __asm__ __volatile__ ("bswap %0": "=a"(value): "0"(value));
>>> return value;
>>> }
>>>
>>> or
>>>
>>> static inline uint32
>>> swap32(uint32 value)
>>> {
>>> return __builtin_bswap32(value);
>>> }
>>>
>>
>> That's okay when you have something working and are looking for greater
>> optimisation - or if you are familiar enough with the endianness issues
>> that you are happy to jump straight to an endianness swap routine. But
>> one step at a time is best until the OP is confident in what he is doing
>> here.
>

those are not exactly portable options though.

also possible could be:
uint32 bswap32(uint32 v0)
{
    uint32 v1, v2;
    v1=((v0&0xFF00FF00U)>> 8)|((v0&0x00FF00FFU)<< 8);   /* swap the bytes within each 16-bit half */
    v2=((v1&0xFFFF0000U)>>16)|((v1&0x0000FFFFU)<<16);   /* swap the two 16-bit halves */
    return(v2);
}

with more specialized options based on combination of arch and compiler.


> Yes. Part of the point with my get_bigendian32() above is that it
> shows[0] that there /are/ no 32-bit integers in binary files, not in
> the C++ sense. There are just bytes, and your code is responsible for
> the conversion (according to the rules set by whoever created the file
> format).
>

yep.

file formats get fun, endianness is variable, non-power-of-2 integer
sizes are common, bitstreams are also common (with multiple variations
thereof), ...


wrote various stuff about having multiple variations of an LZ compressed
bitstream format intended for large numbers of small buffers (where
minimizing constant factors becomes a bigger issue), but decided to
leave this out as it drifts a bit far from the topic at hand.

but, yes, entropy coded bitstreams are also fun...

Paavo Helde

Dec 8, 2016, 2:30:44 PM
On 8.12.2016 20:02, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> Paavo Helde <myfir...@osa.pri.ee> spake the secret code
> <zvCdnRIGdMmViNTF...@giganews.com> thusly:
>
>> "For each hard question there is a simple, easily understandable wrong
>> answer" - except that this article is IMO wrong, but still not easily
>> understandable. What is "hexadecimal data"? Is AB12 really encoded in 4
>> bits?
>
> What is hexadecimal data? Seriously? Are you unable to google or
> use WP's search box?

Tell me, what is "hexadecimal data"? I can understand what is
"hexadecimal representation of data", but "hexadecimal data"? If a
weather station reports temperature 25°C, is this "hexadecimal data" or not?

>
> Nowhere in that article could I find the data 0xAB12.

Are you sure you looked at the same article?
https://simple.wikipedia.org/wiki/Endianness

It appears user Dearingj has changed 'bits' to 'pieces' today, so maybe
all my nitpicking is not in vain ;-)

Cheers
Paavo

Gareth Owen

Dec 8, 2016, 3:00:53 PM
Paavo Helde <myfir...@osa.pri.ee> writes:

> On 8.12.2016 20:02, Richard wrote:
>> [Please do not mail me a copy of your followup]
>>
>> Paavo Helde <myfir...@osa.pri.ee> spake the secret code
>> <zvCdnRIGdMmViNTF...@giganews.com> thusly:
>>
>>> "For each hard question there is a simple, easily understandable wrong
>>> answer" - except that this article is IMO wrong, but still not easily
>>> understandable. What is "hexadecimal data"? Is AB12 really encoded in 4
>>> bits?
>>
>> What is hexadecimal data? Seriously? Are you unable to google or
>> use WP's search box?
>
> Tell me, what is "hexadecimal data"? I can understand what is
> "hexadecimal representation of data", but "hexadecimal data"? If a
> weather station reports temperature 25°C, is this "hexadecimal data"
> or not?

You are technically correct, the best sort of correct.

However, you *don't* really not understand what "hexadecimal data"
means, you're just being an annoying pedant.

Pedantry and simplicity are often at odds. Can you guess which way
Simple Wikipedia tends to lean?

Christian Gollwitzer

Dec 8, 2016, 3:37:11 PM
Am 08.12.2016 um 21:00 schrieb Gareth Owen:
> Pedantry and simplicity are often at odds. Can you guess which way
> Simple Wikipedia tends to lean?

Maybe, but that particular article is both not simple and incorrect.
A) Not simple: The second sentence has nested parataxes (indicated by
brackets)

"In computer coding, certain numbers, [usually two bytes long (1 byte
= 8 bits) [ that are called "words",] ] can be written or input in two ways"

B) Incorrect: It says that the number 0x12AB in big endian is 12|AB,
because "the bigger number comes at the end", explaining the hexadecimal
digits 0..9A..F. So it is big endian because AB > 12? I can't find a
reading of that sentence that makes it correct; it is definitely incorrect.


Christian

Paavo Helde

Dec 8, 2016, 3:40:40 PM
That's what I said, "simple and wrong". The notion of 'endianness' and
the notion of 'data' have nothing whatsoever to do with 'hexadecimal',
so why does the Wikipedia article start with "Endianness refers to how
hexadecimal data is ordered ..."?

I can understand that people like simple and wrong explanations. I just
don't approve of it. But in this case, I feel that 'hexadecimal' is
actually complicating things instead of making them simpler. This
article should not contain 'hexadecimal' at all.

Cheers
Paavo


Juha Nieminen

Dec 12, 2016, 8:37:50 AM
The original answer given to you, i.e. that you are interpreting the
bytes with the wrong endianness, was correct. Rather than understand
this, you decided to act like an ass instead.

Popping mad

Dec 12, 2016, 10:17:42 AM
On Mon, 12 Dec 2016 13:37:39 +0000, Juha Nieminen wrote:

> The original answer given to you, ie. that you are interpreting the
> bytes with the wrong endianess, was correct. Rather than understand
> this, you decided to act like an ass instead.

I understand this is correct, but I'm struggling with the nature of that
answer.

I knew that was the answer prior to posting the question. I got an
explanation elsewhere.