
memcpy


ghada glissa

Feb 4, 2015, 10:49:40 AM
Dear all,

I'm trying to use memcpy to get some kind of information copied into a buffer. My problem is that the order of the copied bytes seems to be inverted from the original order.

Example:

long unsigned int address=0xacde480000000001;
uint8_t *nonce;
memcpy(nonce,&address,8);

The returned nonce value is: 1000048deac

So is there any solution to keep the same order?

Regards.

Christopher Pisz

Feb 4, 2015, 11:13:30 AM
Where in the code are you getting "the value of nonce", and how do you
know it is reversed?

What is a uint8_t and what's wrong with unsigned int?

I get an access violation when running your listing, and rightfully so:
nothing has been allocated anywhere, much less memory at 0xacde480000000001.

Scott Lurndal

Feb 4, 2015, 11:15:15 AM
use a MIPS (big endian) processor.

Christopher Pisz

Feb 4, 2015, 11:17:33 AM
On 2/4/2015 10:13 AM, Christopher Pisz wrote:
> What is a uint8_t and what's wrong with unsigned int?

Correction: What is wrong with unsigned long long?
See how confusing your silly typedefs are? Use the primitives C++ gave you.

Victor Bazarov

Feb 4, 2015, 11:30:51 AM
On 2/4/2015 10:49 AM, ghada glissa wrote:
> I'm trying to use memcpy to get some kind of information copied into
> a buffer. My problem is that the order of the copied bytes seems to be
> inverted from the original order.
>
> Example:
>
> long unsigned int address=0xacde480000000001;
> uint8_t *nonce;
> memcpy(nonce,&address,8);

Uh... Did you actually allocate the memory before copying to it? I
recommend using

uint8_t nonce[sizeof(address)];

and instead of copying exactly 8, also use 'sizeof(address)'.
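
A minimal, complete sketch of that suggestion (assuming an 8-byte
unsigned long; the bytes still come out in whatever order the host
stores them):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    unsigned long address = 0xacde480000000001;
    uint8_t nonce[sizeof address];      // properly sized buffer, no heap needed

    std::memcpy(nonce, &address, sizeof address);

    for (std::size_t i = 0; i < sizeof address; ++i)
        printf("%02x", nonce[i]);       // host byte order: 010000000048deac here
    putchar('\n');
    return 0;
}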

>
> The returned nonce value is: 1000048deac
>
> So is there any solution to keep the same order?

It does. You probably don't understand how the 'address' is organized.

V
--
I do not respond to top-posted replies, please don't ask

Mr Flibble

Feb 4, 2015, 1:09:53 PM
The only typedef I can see is uint8_t which *is* part of C++ and is
*not* "silly".

I use typedefs *a lot* in my code and as long as the names are carefully
chosen the resultant code is actually *less* confusing.

/Flibble

ghada glissa

Feb 4, 2015, 2:28:16 PM

I did:

long unsigned int address=0xacde480000000001;
uint8_t *nonce;
nonce= new uint8_t;
memcpy(nonce,&address,8);

for(int i=0;i<16;i++){

printf("%x",nonce[i]);
}

and I got as a result: 1000048deac

Vlad from Moscow

Feb 4, 2015, 2:35:29 PM
This statement

nonce= new uint8_t;

allocates only one byte, so nonce cannot accommodate an object of type long unsigned int.

Vlad from Moscow

Feb 4, 2015, 2:39:04 PM
Though an access violation is not inevitable, because the minimum number of bytes allocated by default is usually 16. :)

Also, it seems that the LSB is stored at the start address. You could reverse the array if you want to see an equivalent representation of the number when the array is output.
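
A sketch of that reversal using the standard library (note it only
yields the most-significant-byte-first order the OP expects when the
host is little-endian):

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    unsigned long address = 0xacde480000000001;
    uint8_t nonce[sizeof address];

    std::memcpy(nonce, &address, sizeof address);
    std::reverse(nonce, nonce + sizeof address);  // flip LSB-first to MSB-first

    for (std::size_t i = 0; i < sizeof address; ++i)
        printf("%02x", nonce[i]);                 // prints acde480000000001
    putchar('\n');
    return 0;
}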

Christopher Pisz

Feb 4, 2015, 2:50:58 PM
friend functions are also part of C++, but that doesn't mean you need to
make every function you ever create a friend. It is a blatant misuse in
modern C++ programming carried over from ancient times.

Calling everything a uint8_t rather than an unsigned long long
accomplishes what in 2015?!

Are you going to change it to be a float later? Then it wouldn't make
much sense anymore to have a uint8_t m_foo that is actually a float
would it?

Is it because you want to remind yourself that an unsigned long long is
8 bytes? Is it guaranteed to be 8 bytes by the standard anyway?

So, OK, you want to be able to compile your source somewhere where an
unsigned long long is no longer 8 bytes. You're gonna change your
typedef and hope and pray someone never assigned a uint8_t to something
else that counted on it being an unsigned long long?! Thanks for wasting
6 months' worth of man-hours!

What in the nine hells does calling an int int4_t accomplish?

Having had to debug projects where silly authors made 5000-line header
files to rename every fricken type in the language, seeing a typedef
like that makes me vomit.

Christopher Pisz

Feb 4, 2015, 3:03:47 PM
Again, please post complete, compilable, minimal, _and consistent_ examples.

Why are you using long unsigned int, uint8_t * from C99 stdint.h, and int?

Use C++ primitive types, go read about each type, read about big endian
and little endian, and try again.

Barry Schwarz

Feb 4, 2015, 3:14:54 PM
You are laboring under a false assumption. Just because your source
statement has the bytes listed in a certain order does not mean that
the data is stored in memory in that order.

In your source code, reading from left to right, you start with the
most significant byte and proceed to the least significant. This is
the way PEOPLE read hexadecimal values.

On a little endian computer (as most desktop systems are), the MACHINE
stores bytes with the least significant one in the lower address and
proceeds to the most significant byte in the higher address. If you
had examined the contents of the variable address, you would have seen
that the bytes are in the same order as the memory pointed to by
nonce.

And the data pointed to by nonce should be 8 bytes long, not the 5-1/2
you show. You are missing five nybbles of 0.

--
Remove del for email

Jens Thoms Toerring

Feb 4, 2015, 3:15:13 PM
ghada glissa <ghada...@gmail.com> wrote:
>
> I did:
>
> long unsigned int address=0xacde480000000001;
> uint8_t *nonce;
> nonce= new uint8_t;

As has already been pointed out, this isn't enough memory
for holding a long unsigned int; you're writing past the
end of the memory you allocated.

Use instead

uint8_t * nonce = new uint8_t[ sizeof address ];

> memcpy(nonce,&address,8);

> for(int i=0;i<16;i++){
>
> printf("%x",nonce[i]);
> }

> and I got as a result: 1000048deac

Obviously, you're using a "little-endian" system. On such
systems the least-significant byte is stored in the lowest
memory address and the most significant byte at the highest
address. Thus the number you store in the 'address' variable
will appear in memory like this

01 00 00 00 00 48 de ac

with memory addresses increasing from left to right. And
that's exactly what you get printed out since you print
out the value stored in the lowest memory address first
and the one in the highest last. You'd get the same effect
if you did e.g.

uint8_t * nonce = reinterpret_cast< uint8_t * >( &address );

i.e. if you made 'nonce' point to the location of 'address'
without any copying. Thus memcpy() plays no role in this.

There's also a problem with the code for printing out the
bytes in 'nonce': by using '%x' you will only get a single
'0' for a byte with value 0, thus making the result look
wrong (you get only 4 '0's in a row where there should be 8).
Use instead

printf( "%02x", nonce[ i ] );

The "%02x" tells the printf() function to always print
out two (hexadecimal) digits and "fill up" those two
characters with '0' if the value is less than 0x10.

There are also systems which store values the other way
round, "big-endian" systems. That's why you shouldn't
write out binary data into a file when you want to be
able to read that file also on other systems (well, unless
you also store somewhere what endianness was used for
writing the data - other possible issues like different
sizes for int etc. also need to be addressed) or why there
is a "network order" (big-endian) for transferring binary
(integer) data over the net.
Regards, Jens
--
\ Jens Thoms Toerring ___ j...@toerring.de
\__________________________ http://toerring.de

Robert Wessel

Feb 4, 2015, 3:32:54 PM
Eh? uint8_t is an exact 8-bit unsigned type - an unsigned
char in most implementations. stdint.h has been defined for
quite a while now, although I think its presence in the C++ standard
is rather newer (although support has been pretty universal).

Whether it's an appropriate type to use in this instance is a
different question, although the (limited) context does not make it
unreasonable (it appears to be something that's headed out to the
network, and an array of exact 8-bit bytes seems a reasonable
requirement). Although it's probably unlikely that the OP actually
understands the issue well enough to have picked uint8_t for a good
reason.

ghada glissa

Feb 4, 2015, 4:28:47 PM

>
> printf( "%02x", nonce[ i ] );
>
> The "%02x" tells the printf() function to always print
> out two (hexadecimal) digits and "fill up" those two
> characters with '0' if the value is less than 0x10.
>
> There are also systems which store values the other way
> round, "big-endian" systems. That's why you shouldn't
> write out binary data into a file when you want to be
> able to read that file also on other systems (well, unless
> you also store somewhere what endianness was used for
> writing the data - other possible issues like different
> sizes for int etc. also need to be addressed) or why there
> is a "network order" (big-endian) for transferring binary
> (integer) data over the net.
>

Thank you Mr Jens for your help. I used printf( "%02x", nonce[ i ] ) and it improves the display of values less than 0x10; I get this result: 010000000048deac. But I can't resolve the problem of order with memcpy.

Regards.

Christopher Pisz

Feb 4, 2015, 4:32:07 PM
It is a relic from C99. It is also not guaranteed to exist. It even says so.

Support is not universal at all.
Try to compile that listing on Visual Studio 2003-2008 for example.
Being part of C and not C++, Microsoft also deemed it a relic and dropped
it. Later they brought it back. Granted, MS loves to make their own rules.

What traditionally happens is an old C programmer comes along on a C++
project somewhere between 2003 and 2008 and adds 5000 lines of his own
typedefs recreating this header, because he is just used to it being
there. Then some other programmer comes along and directly or indirectly
assigns some typedefed type to another type that is not typedefed, 8
years later someone decides to change a type being used somewhere,
because they were typedefed anyway, and bam! Welcome to 6 months of
monotonous type domino fixing.

If you want to use an unsigned int then use an unsigned int. There is no
purpose at all to use a typedefed renaming when you intend on it being
a specific type anyway. If you want to know the size, then use sizeof.
If the size doesn't match, because Joe Blow created
MyNonExistantOSThatWillNeverExist, then throw an exception and deal with
it when you make a project-wide decision to support Joe's new OS.
Otherwise, be a realist and know that you are targeting Windows, Linux,
OSX, Android, or just one of the above.

stdint.h was a feeble attempt at cross-platform guarantees that aren't
guarantees at all, and the idea is a source of bugs time and again.

Worse, I see a project that has in _black and white_ in its requirements
that its sole target is Windows and will always be Windows, yet some C
guy comes along and decides it is a great idea to make a typedef for
every single type in the language. Which is ridiculous. Then a Windows
CLR/.NET/whatever-else-trained guy comes along and uses all the Windows
types, then a real C++ guy comes along and has a hissy fit much like I
am having.

I don't want to see or maintain uint8_t a = std::wstring(((wchar
*)__bstr_t_("text")))[0];

Its presence in C++ is only because C++ supports worthless error-prone
relics from C that have no business being put into _new_ code, but are
only there to keep legacy code working... and even then, I bet you have
a related debugging session in its future.

When you start counting on types being of a certain size without
checking sizeof, and start shuffling bits or bytes around, you are
asking for trouble.


> Whether it's an appropriate type to use in this instance is a
> different question, although the (limited) context does not make it
> unreasonable (it appears to be something that's headed out to the
> network, and an array of exact 8-bit bytes seems a reasonable
> requirement).

If your target platform doesn't specifically guarantee it for you, which
it usually does. Then I'd still rather see:

if( sizeof(unsigned char) != 1 )
{
// Throw your favorite exception here
throw std::exception("Expected one byte unsigned char. Unsupported
OS or compiler options.");
}

// carry on using unsigned char for your project
// without modern programmers having to sort through your 5000 typedefs.

and you only need to do that on types where you are doing some kind of
manipulation of bits or bytes, which hopefully, you rarely need do
anyway in the modern day.

People who grab a byte of some other type or bitmask or count on bitwise
operators just 'cause, pee me off. There had better be a specific and
good reason for it.

> Although it's probably unlikely that the OP actually
> understands the issue well enough to have picked uint8_t for a good
> reason.
>

I'd bet a dozen muffins that is the case, and why I am making my point
strongly. If he can't get memcpy to work, I very much doubt the target
OS, C++ implementation, or maintainability of the code in the future has
even been a consideration, much less what the type is in relation to
what he wants it to be and how it relates to the other types he is using.

Luca Risolia

Feb 4, 2015, 4:42:37 PM
On 04/02/2015 22:31, Christopher Pisz wrote:

> If your target platform doesn't specifically guarantee it for you, which
> it usually does. Then I'd still rather see:
>
> if( sizeof(unsigned char) != 1 )
> {
> // Throw your favorite exception here
> throw std::exception("Expected one byte unsigned char. Unsupported
> OS or compiler options.");
> }

?! you want to use a static assertion for that..


Ben Bacarisse

Feb 4, 2015, 4:44:34 PM
ghada glissa <ghada...@gmail.com> writes:
<snip>
> Thank you Mr Jens for your help. I used printf( "%02x", nonce[ i ] )
> and it improves the display of values less than 0x10; I get this result:
> 010000000048deac. But I can't resolve the problem of order with memcpy.

There isn't really a "problem of order with memcpy". The problem is
most likely to do with what you expect. Try this and see if you are
happy with (and understand) the result:

union rep {
unsigned long as_ulong;
uint8_t as_bytes[sizeof (unsigned long)];
} address = { 0xacde480000000001 };

int main()
{
for (int i = 0; i < sizeof address; i++)
printf("%02x", address.as_bytes[i]);
putchar('\n');
return 0;
}

No memcpy, just a print of the bytes in an object in sequence, from
lowest to highest address.

--
Ben.

Christopher Pisz

Feb 4, 2015, 4:53:23 PM
I wasn't aware of static assertion's existence yet. It's a C++11
feature? Compile time?

I am still catching up to the latest C++ standard. It doesn't appear to be
in my particular compiler at work. Will try when I get home. I like
to learn new stuff too!

Luca Risolia

Feb 4, 2015, 4:58:02 PM
On 04/02/2015 22:53, Christopher Pisz wrote:
> On 2/4/2015 3:42 PM, Luca Risolia wrote:
>> On 04/02/2015 22:31, Christopher Pisz wrote:
>>
>>> If your target platform doesn't specifically guarantee it for you, which
>>> it usually does. Then I'd still rather see:
>>>
>>> if( sizeof(unsigned char) != 1 )
>>> {
>>> // Throw your favorite exception here
>>> throw std::exception("Expected one byte unsigned char. Unsupported
>>> OS or compiler options.");
>>> }
>>
>> ?! you want to use a static assertion for that..
>>
>>
>
> I wasn't aware of static assertion's existence yet. It's a C++11
> feature? Compile time?

It's more a concept than a feature. There are many pre-C++11
implementations of static assertions.

Victor Bazarov

Feb 4, 2015, 4:58:28 PM
On 2/4/2015 4:31 PM, Christopher Pisz wrote:
> [..]
> If your target platform doesn't specifically guarantee it for you, which
> it usually does. Then I'd still rather see:
>
> if( sizeof(unsigned char) != 1 )

You basically wrote

if (false)

because by definition sizeof(char) == 1. You probably meant something like

if (std::numeric_limits<unsigned char>::digits != 8)

> {
> // Throw your favorite exception here
> throw std::exception("Expected one byte unsigned char. Unsupported
> OS or compiler options.");
> }
>[..]

Christopher Pisz

Feb 4, 2015, 5:08:41 PM
Yea, I see boost has it. I'll read up on it there. Maybe I can use that
at work.

ghada glissa

Feb 4, 2015, 5:41:04 PM

> There isn't really a "problem of order with memcpy". The problem is
> most likely to do with what you expect. Try this and see if you are
> happy with (and understand) the result:
>
> union rep {
> unsigned long as_ulong;
> uint8_t as_bytes[sizeof (unsigned long)];
> } address = { 0xacde480000000001 };
>
> int main()
> {
> for (int i = 0; i < sizeof address; i++)
> printf("%02x", address.as_bytes[i]);
> putchar('\n');
> return 0;
> }
>
> No memcpy, just a print of the bytes in an object in sequence, from
> lowest to highest address.
>
> --
> Ben.

I'm obliged to use memcpy to set the nonce value, not only to print the bytes in order, as specified in this function:

void set_nonce(uint8_t *nonce,
               long unsigned int extended_source_address,
               uint32_t counter, uint8_t seclevel)
{
    /* 8 bytes                 || 4 bytes       || 1 byte  */
    /* extended_source_address || frame_counter || sec_lvl */

    memcpy(nonce, &extended_source_address, 8);
    memcpy(nonce + 8, &counter, 4);
    nonce[12] = seclevel;
}
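
If the nonce layout must not depend on the host's byte order (as wire
formats usually require), one endianness-independent sketch serialises
with shifts instead of memcpy. The helper names below are made up for
illustration and are not part of the OP's API:

#include <cstdint>

// Hypothetical helpers: store values most-significant byte first
// (network order), regardless of how the host stores integers.
static void store_u64_be(uint8_t *p, uint64_t v)
{
    for (int i = 0; i < 8; ++i)
        p[i] = static_cast<uint8_t>(v >> (56 - 8 * i));
}

static void store_u32_be(uint8_t *p, uint32_t v)
{
    for (int i = 0; i < 4; ++i)
        p[i] = static_cast<uint8_t>(v >> (24 - 8 * i));
}

void set_nonce_be(uint8_t *nonce, uint64_t extended_source_address,
                  uint32_t counter, uint8_t seclevel)
{
    store_u64_be(nonce, extended_source_address);  // ac de 48 00 00 00 00 01
    store_u32_be(nonce + 8, counter);
    nonce[12] = seclevel;
}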

Jens Thoms Toerring

Feb 4, 2015, 5:43:16 PM
Christopher Pisz <nos...@notanaddress.com> wrote:
> If you want to use a unsigned int then use an unsigned int. There is no
> purpose at all to use a typedefed renaming when you intend on it being
> an specific type anyway. If you want to know the size, then use sizeof.

Sounds like you haven't had to deal a lot with programming close
to the hardware, where you often need to know exactly how many
bytes a type has (e.g. you need to write to a 16-bit wide
register of some memory-mapped device), and using an unsigned
short int, hoping that it will have 2 bytes everywhere. And
that the relevant types in <stdint.h> aren't guaranteed to
exist everywhere is really a problem with some rather unusual
systems - there you have a number of extra problems. And if
there's no stdint.h file at all, using a replacement header
file, created before compilation with some helper program,
is much cleaner than sprinkling the whole code with checks
for sizeof.

And it's not only a problem with hardware-near programming
- I just had to write something for reading and writing
binary MATLAB (a popular programming system for numerical
computations in science) files. And all the types in there
are defined as being either 1, 2, 4 or 8 bytes wide, so to
properly read and write them you need to know the exact sizes
of your types. Again, doing it all with endlessly repeated
checks of sizeof would have been a mess. So, obviously, I
don't agree with your statement that "there's no purpose at
all" for these types.

Of course, any idiot can mis-use them - but what feature of
C++ can't be mis-used? I've seen a lot of code where people
used operator overloading to produce an incomprehensible mess.
And there are a number of people that use this as an argument
that operator overloading is "evil" per se. But used reasonably
it can make code a lot simpler to write and understand and thus
to maintain.

I agree that in normal run-of-the-mill applications these
types are unnecessary - and their use could be a warning
sign that the author didn't really understand what he or she
was doing. But there are lots of legitimate uses for them
in other fields.

> stdint.h was a feeble attempt at cross-platform guarantees that aren't
> guarantees at all, and the idea is a source of bugs time and again.

I couldn't disagree more - I had hoped for something like
that quite some time before C99. <stdint.h> in itself, of
course, can't make a program magically cross-platform (or
more precisely, cross-architecture) safe. But judicious use
of it can help a lot in achieving this goal.

> If your target platform doesn't specifically guarantee it for you, which
> it usually does. Then I'd still rather see:

> if( sizeof(unsigned char) != 1 )
> {
> // Throw your favorite exception here
> throw std::exception("Expected one byte unsigned char. Unsupported
> OS or compiler options.");
> }

> // carry on using unsigned char for your project
> // without modern programmers having to sort through your 5000 typedefs.

Well, that's hardly an option when you have to support such
"strange" architectures...

> I'd bet a dozen muffins that is the case, and why I am making my point
> strongly. If he can't get memcpy to work, I very much doubt the target
> OS, C++ implementation, or maintainability of the code in the future has
> even been a consideration, much less what the type is in relation to
> what he wants it to be and how it relates to the other types he is using.

Perhaps you should consider that the OP is rather new to
programming (at least in this field) and is learning and experimenting
- I'd rather doubt that this is for some serious project you might
have to maintain some day;-) I've been there myself, made lots of
mistakes, but because of this I think that I nowadays know how to
use memcpy() (or std::copy()) properly. And I'd rather prefer
people who are making the effort of trying to understand what
that is all about over those that just blindly copy some (bad)
habits from some blogs, written by someone who doesn't understand
what he's writing about, and then do cargo-cult programming
for the rest of their lives.

But don't let that get in the way of a good rant, I enjoyed it;-)
Best regards, Jens

Christian Gollwitzer

Feb 4, 2015, 5:56:18 PM
On 04.02.15 at 22:31, Christopher Pisz wrote:
> If you want to use a unsigned int then use an unsigned int. There is no
> purpose at all to use a typedefed renaming when you intend on it being
> an specific type anyway. If you want to know the size, then use sizeof.

Fixed-width integers are very useful types in many fields of computing.
In fact, I claim that they are more useful than the "polymorphic" int,
long, short nonsense. For example, to implement the Mersenne twister you
need to use unsigned 64-bit arithmetic. Or to read image files from
disk into memory, you need unsigned 8 bit or unsigned 16 bit integer
types. Extremely easy if you have either cstdint or inttypes, but
extremely annoying when you need to remember that long on Windows is
always 32bit regardless of pointer size, whereas it is 64 bit on a 64bit
Linux and 32 bit on a 32bit Linux etc. The names of these typedefs are
also widespread and there is cstdint, added in C++11 for that. I can't
understand why you think these are not useful, and I don't see why you
think it is a C problem rather than a C++ one.

OTOH, using the generic "int" makes the program behave differently on
different platforms. For example, a program computing the factorial using
integer arithmetic overflows for different inputs on different
platforms, if just int or unsigned is used. "int" should REALLY mean an
integer, i.e. something that never overflows. Too late to get that into
the core C++, though.

Christian


Christopher Pisz

Feb 4, 2015, 6:07:37 PM
On 2/4/2015 4:43 PM, Jens Thoms Toerring wrote:
> Christopher Pisz <nos...@notanaddress.com> wrote:
>> If you want to use an unsigned int then use an unsigned int. There is no
>> purpose at all to use a typedefed renaming when you intend on it being
>> a specific type anyway. If you want to know the size, then use sizeof.
>
> Sounds like you haven't had to deal a lot with programming close
> to the hardware, where you often need to know exactly how many
> bytes a type has (e.g. you need to write to a 16-bit wide
> register of some memory-mapped device), and using an unsigned
> short int, hoping that it will have 2 bytes everywhere. And
> that the relevant types in <stdint.h> aren't guaranteed to
> exist everywhere is really a problem with some rather unusual
> systems - there you have a number of extra problems. And if
> there's no stdint.h file at all, using a replacement header
> file, created before compilation with some helper program,
> is much cleaner than sprinkling the whole code with checks
> for sizeof.

No, thank goodness. That's where C programmers lurk and I don't get
along with them at all. Imagine that!

Even so, don't you have an entry point where one check can be made for
the entire project, whether it be an executable, a dll, or a library?

You know beforehand what architecture you are targeting, I'm sure.



> And it's not only a problem with hardware-near programming
> - I just had to write something for reading and writing
> binary MATLAB (a popular programming system for numerical
> computations in science) files. And all the types in there
> are defined as being either 1,2, 4 or 8 bytes wide, so to
> properly read and write them you need to know the exact sizes
> of your types. Again, doing it all with endlessly repeated
> checks of sizeof would have been a mess. So, obviously, I
> don't agree with your statement that "there's no purpose at
> all" for these types.
>

Well, again, I am not seeing why you need to use sizeof more than once.

Regardless, do you really prefer to maintain and decipher lines like:

int4_t void(uint8_t x)
{
BYTE byte[2] = (wchar_t *)_bstr_t(T_"A");
unsigned char y = (byte[1] & x);
return (int)y;
}

?

because that is inevitably what I've ended up having to sort out when
more than one programmer gets his hands on it over the years. Granted,
it isn't as directly obvious as the example. It's usually quite indirect.

> Of course, any idiot can mis-use them - but what feature of
> C++ can't be mis-used? I've seen a lot of code where people
> used operator overloading to produce an incomprehensible mess.
> And there are a number of people that use this as an argument
> that operator overloading is "evil" per se. But used reasonably
> it can make code a lot simpler to write and understand and thus
> to maintain.

Given the option to rename any type 3 or more times vs the option for it
to always have the same name, I think the latter is without question
simpler. If there is a specific reason to do otherwise, then that's
another matter. Like we said, the OP has no specific reason.
He isn't learning, because he has over and over posted C-style code
chock-full of bad habits and errors as a result, with no justification
for their use. If the OP has any reason at all to use uint8_t then I'd
love for him to say what it is.

Mr Flibble

Feb 4, 2015, 6:15:02 PM
You have just made the age old Usenet mistake of rather than admitting
your mistake you instead rant writing a load of absolute bollocks
digging an even bigger hole for yourself. Sorry mate but you are a n00b.

/Flibble

Jens Thoms Toerring

Feb 4, 2015, 6:19:12 PM
ghada glissa <ghada...@gmail.com> wrote:
> Thank you Mr Jens for your help. I used printf( "%02x", nonce[ i ] ) and it
> improves the display of values less than 0x10; I get this result:
> 010000000048deac. But I can't resolve the problem of order with memcpy.

There is no problem with memcpy(), as others and I have pointed
out, it's behaving exactly as it should. The problem is with what
you expect. Let's look at it again. Consider a system where
an int has two bytes (as was not uncommon a few years ago).
Now you write

unsigned int x = 256 + 5; // or 0x0105

There are obviously two bytes in that value, a more-significant
byte, 256 represented by 0x01, and a less-significant byte, 5 =
0x05 (where "more-significant" means it has a larger impact on
the resulting value - if you add or subtract 1 from the less-
significant byte, the value of 'x' changes by just 1, but if
you add or subtract 1 from the more-significant byte, 'x'
changes by +/-256).

Now: how is this stored in memory? There are two possibilities.
You can have a machine where the more-significant byte (0x01) is
stored at a lower address than the less-significant byte (called
"big-endian"). And you can have a system where it's done just the
other way round.

If you now do

unsigned char *z = reinterpret_cast< unsigned char * >( &x );

then 'z' points to the address where 'x' starts. On a big-
endian system doing

printf( "%02x", z[ 0 ] );

prints out "01". On a little-endian system the exact same code
will print out "05". And, of course,

printf( "%02x", z[ 1 ] );

prints either "05" (big-endian) or "01" (little-endian).

So what you see here is not a defect of memcpy() or anything
else going wrong but purely the effect of how integers with
more than a single byte get stored on your machine. Obviously,
you're using a little-endian machine, and thus the byte with
the least significance (the "lowest digits" of the value)
gets stored in memory at the lowest address. Thus that's the
one you get first when you look at the individual bytes via
your 'nonce' pointer. And when you then look at the higher
addresses via 'nonce' you get the more and more significant
bytes. But if you had a computer with a different CPU the
result would be the exact reverse if it were a big-endian
CPU.
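
His two-byte thought experiment as a runnable program, using uint16_t
to get exactly two bytes in place of the hypothetical 2-byte int:

#include <cstdint>
#include <cstdio>

int main()
{
    uint16_t x = 256 + 5;   // 0x0105, exactly two bytes
    unsigned char *z = reinterpret_cast<unsigned char *>(&x);

    // A little-endian machine prints "0501", a big-endian one "0105".
    printf("%02x%02x\n", z[0], z[1]);
    return 0;
}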

Christopher Pisz

Feb 4, 2015, 6:22:46 PM
On 2/4/2015 4:56 PM, Christian Gollwitzer wrote:
> Fixed-width integers are very useful types in many fields of computing.
> In fact, I claim that they are more useful than the "polymorphic" int,
> long, short nonsense. For example, to implement the Mersenne twister you
> need to use unsigned 64-bit arithmetic. Or to read image files from
> disk into memory, you need unsigned 8 bit or unsigned 16 bit integer
> types. Extremely easy if you have either cstdint or inttypes, but
> extremely annoying when you need to remember that long on Windows is
> always 32bit regardless of pointer size, whereas it is 64 bit on a 64bit
> Linux and 32 bit on a 32bit Linux etc.

How is remembering an int is 32 bits on Windows and 64 bits on 64-bit Linux
harder than remembering what the hell a uint8_t is and that you better
be damn sure that _no one ever_ assigns anything but another uint8_t to
it directly or indirectly?

> The names of these typedefs are
> also widespread and there is cstdint, added in C++11 for that. I can't
> understand why you think these are not useful, and I don't see why you
> think it is a C problem rather than a C++ one

I've already explained why more than once.
Hours and hours of debugging.

It's a C problem because those who program C-style are the ones who
use it. It's on my list of craptastic habits I run across on the job
from bug-generating programmers... that, and the header says so.

Mr Flibble

Feb 4, 2015, 6:24:52 PM
I couldn't agree with you more. The MISRA safety critical C++ coding
standard even bans the use of 'int', 'char' etc and enforces the use of
the sized integer typedefs to help ensure that there are no nasty
surprises causing planes to fall out of the sky etc.

Personally when I am not using iterators to loop I use std::size_t to
index (not 'int') and the sized integer typedefs nearly everywhere else.
IMO it is important to know what the valid range of values for a
scalar type is in any algorithm and the sized integer typedefs allow
this. IMO 'int' should really only be used as the return type for main()!

/Flibble

Christopher Pisz

Feb 4, 2015, 6:41:19 PM
On 2/4/2015 5:25 PM, Mr Flibble wrote:
> On 04/02/2015 22:56, Christian Gollwitzer wrote:
>> On 04.02.15 at 22:31, Christopher Pisz wrote:
>>> If you want to use an unsigned int then use an unsigned int. There is no
>>> purpose at all to use a typedefed renaming when you intend on it being
>>> a specific type anyway. If you want to know the size, then use sizeof.
>>
>> Fixed-width integers are very useful types in many fields of computing.
>> In fact, I claim that they are more useful than the "polymorphic" int,
>> long, short nonsense. For example, to implement the Mersenne twister you
>> need to use unsigned 64-bit arithmetic. Or to read image files from
>> disk into memory, you need unsigned 8 bit or unsigned 16 bit integer
>> types. Extremely easy if you have either cstdint or inttypes, but
>> extremely annoying when you need to remember that long on Windows is
>> always 32bit regardless of pointer size, whereas it is 64 bit on a 64bit
>> Linux and 32 bit on a 32bit Linux etc. The names of these typedefs are
>> also widespread and there is cstdint, added in C++11 for that. I can't
>> understand why you think these are not useful, and I don't see why you
>> think it is a C problem rather than a C++ one.
>>
>> OTOH, using the generic "int" makes the program behave differently on
>> different platforms. For example, a program computing the factorial using
>> integer arithmetic overflows for different inputs on different
>> platforms, if just int or unsigned is used.

Why are you computing factorials when there are well tested, documented,
and widely used libraries already to do this?

Why don't you know what platform you are on?


>> "int" should REALLY mean an
>> integer, i.e. something that never overflows. Too late to get that into
>> the core C++, though.
>
> I couldn't agree with you more. The MISRA safety critical C++ coding
> standard even bans the use of 'int', 'char' etc and enforces the use of
> the sized integer typedefs to help ensure that there are no nasty
> surprises causing planes to fall out of the sky etc.

I can find you a standard that says use GOTO as well, if you like.

> Personally when I am not using iterators to loop I use std::size_t to
> index (not 'int') and the sized integer typedefs nearly everywhere else.
> IMO tt is important to know what the valid range of values for a
> scalar type is in any algorithm and the sized integer typedefs allow
> this.

No, they don't.
As stated before, the types in stdint.h _are not guaranteed_

And no, it isn't.
Who gives a flying poop how many bits 'unsigned count' is in:

for(unsigned count = 0; count < 100; ++count)
{
}

as long as 100 fits, and 100 _will always_ fit.

and as stated before, _you already know your architecture_ for your project.

People who do silly things to be "platform independent" when platform
independence is not even necessary are silly.

> IMO 'int' should really only be used as the return type for main()!
>
> /Flibble

I hope I never ever have to work on your code.

Melzzzzz

Feb 4, 2015, 6:42:46 PM
On Wed, 4 Feb 2015 07:49:17 -0800 (PST)
ghada glissa <ghada...@gmail.com> wrote:

> Dear all,
>
> I'm trying to use memcpy to get some kind of information copied into
> a buffer. My problem is that the order of the copied bytes seems to
> be inverted from the original order.
>
> Example:
>
> long unsigned int address=0xacde480000000001;
> uint8_t *nonce;
> memcpy(nonce,&address,8);
>
> The returned nonce value is: 1000048deac
>
> So is there any solution to keep the same order?

void* p = (void*)0xacde480000000001;

>
> Regards.


Ben Bacarisse

Feb 4, 2015, 6:51:27 PM
ghada glissa <ghada...@gmail.com> writes:

>> There isn't really a "problem of order with memcpy". The problem is
>> most likely to do with what you expect. Try this and see if you are
>> happy with (and understand) the result:
>>
>> union rep {
>> unsigned long as_ulong;
>> uint8_t as_bytes[sizeof (unsigned long)];
>> } address = { 0xacde480000000001 };
>>
>> int main()
>> {
>> for (int i = 0; i < sizeof address; i++)
>> printf("%02x", address.as_bytes[i]);
>> putchar('\n');
>> return 0;
>> }
>>
>> No memcpy, just a print of the bytes in an object in sequence, from
>> lowest to highest address.
>
> I'm obliged to use memcpy to set the nonce value, not only to print
> the bytes in order,

My example was only supposed to shed some light on what you might be
puzzled by. If it does help you understand, you can then just ignore it.

<snip>
--
Ben.

Mr Flibble

Feb 4, 2015, 6:58:29 PM
On 04/02/2015 23:41, Christopher Pisz wrote:
>
> I hope I never ever have to work on your code.

The feeling is most definitely mutual mate; I wouldn't want you anywhere
near my code.

/Flibble

Jens Thoms Toerring

Feb 4, 2015, 7:11:18 PM
Christopher Pisz <nos...@notanaddress.com> wrote:
> You know beforehand what architecture you are targeting, I'm sure.

For the few Linux device drivers I've written I definitely
did not have the advantage of knowing what architecture they might
end up being used on;-) That's one of the fields where you
definitely have to aim at being architecture-agnostic.

> > And it's not only a problem with hardware-near programming
> > - I just had to write something for reading and writing
> > binary MATLAB (a popular programming system for numerical
> > computations in science) files. And all the types in there
> > are defined as being either 1,2, 4 or 8 bytes wide, so to
> > properly read and write them you need to know the exact sizes
> > of your types. Again, doing it all with endlessly repeated
> > checks of sizeof would have been a mess. So, obviously, I
> > don't agree with your statement that "there's no purpose at
> > all" for these types.

> Well, again, I am not seeing, why you need to use sizeof more than once.

Consider the following: in that MATLAB file there's a uint16_t
field that tells me the type of the next field (and thus the
number of bytes it occupies). Now I have to decide on the type
that I'm going to store it in (as a kind of 'variant') and how
to read it from the file. Here I've got to deal with a slew of
different types (besides 1, 2, 4, 8 byte signed and unsigned
integers also float, double, complex). Ugly enough. Now, if I'd
also have another, orthogonal matching for the proper type of
integer that fits in (lots of "if (sizeof(int) == 1)",
"if (sizeof(int) == 2)" etc.) this would become such an unholy
mess that I wouldn't understand it anymore 10 minutes later.
But, I guess, that description isn't clear enough - I'd have
to show you - in the end I had the feeling that proper use of
the types I knew were in the file saved me from a lot of
grief;-) But, perhaps, if I look at it again in a few weeks
or months I'll see the error of my ways.

> Regardless, do you really prefer to maintain and decipher lines like:

> int4_t void(uint8_t x)
> {
> BYTE byte[2] = (wchar_t *)_bstr_t(T_"A");
> unsigned char y = (byte[1] & x);
> return (int)y;
> }

> ?

As I wrote, stupid use of all you can do isn't something I
consider a good idea. In your example there are several things
I haven't seen before: 'int4_t', 'BYTE' and 'T_' (in front
of the string "A"). So I wouldn't like too much having to
maintain that code;-)

Just out of curiosity: is this something for Windows? My
private conviction (call it prejudice if you like) is that
this kind of use of made-up types and (often completely
unnecessary) casts etc. is some Windows thing where people
actually haven't been exposed much to the existence of
different CPU architectures, and types and casts have become
some kind of cargo-cult thingy, which they believe makes
them look smarter. I consider every cast in a program as
a warning flag that there's something going on that
needs close attention, because the author basically says
"I know better than the compiler" - which, unfortunately,
often isn't the case. So, I agree that this is ugly code where
someone obviously didn't know what he (or, maybe, she) was
doing. But I still am convinced that the types from stdint.h
exist for a good reason - it's just that you shouldn't use a
knife as a screwdriver unless there's no alternative.

> He isn't learning, because he has over and over posted C-style code
> chock-full of bad habits and errors as a result, with no justification
> for their use. If the OP has any reason at all to use uint8_t then I'd
> love for him to say what it is.

I haven't noticed him before so I gave him the benefit of
the doubt. Wouldn't be the first time I've got suckered;-)

David Brown

Feb 4, 2015, 7:20:11 PM
It looks like you have a lot of catching up to do. Static assertions
have been a useful tool for C and C++ programming since the languages
were created - the only difference with C++11 is that static_assert() is
now part of the C++ standards, rather than having to be defined by a macro.

And sizeof(unsigned char) is always 1 - it is guaranteed by the
standards, even if a char is 16-bit or anything else. "sizeof" returns
the size of something in terms of char, not in terms of 8-bit bytes.


David Brown

Feb 4, 2015, 7:26:38 PM
On 05/02/15 00:07, Christopher Pisz wrote:

> Regardless, do you really prefer to maintain and decipher lines like:
>
> int4_t void(uint8_t x)
> {
> BYTE byte[2] = (wchar_t *)_bstr_t(T_"A");
> unsigned char y = (byte[1] & x);
> return (int)y;
> }
>
> ?
>

Is it really so difficult to comprehend that the standard size-specific
types defined in <cstdint> (or <stdint.h>) are called "standard" because
they are standardised by the language standards? Types like uint8_t are
almost universally understood by C and C++ programmers alike. People
who write things like "BYTE" are either dealing with ancient code (from
the pre-C99 days, before the types were standardised), or using Windows'
illogical and inconsistent set of typedefs for the Windows APIs.


David Brown

Feb 4, 2015, 7:35:41 PM
On 05/02/15 00:22, Christopher Pisz wrote:
> On 2/4/2015 4:56 PM, Christian Gollwitzer wrote:
>> Fixed-width integers are very useful types in many fields of computing.
>> In fact, I claim that they are more useful than the "polymorphic" int,
>> long, short nonsense. For example, to implement the Mersenne twister you
>> need to use unsigned 64-bit arithmetic. Or to read image files from
>> disk into memory, you need unsigned 8 bit or unsigned 16 bit integer
>> types. Extremely easy if you have either cstdint or inttypes, but
>> extremely annoying when you need to remember that long on Windows is
>> always 32bit regardless of pointer size, whereas it is 64 bit on a 64bit
>> Linux and 32 bit on a 32bit Linux etc.
>
> How is remembering an int is 32 bits on Windows and 64 bits on 64-bit Linux
> harder than remembering what the hell a uint8_t is and that you better
> be damn sure that _no one ever_ assigns anything but another uint8_t to
> it directly or indirectly?
>

The fact that with every post you are inventing a new way to get things
wrong demonstrates /exactly/ why the fixed-size types were standardised.

Just to save you from further embarrassment, the fixed-width types are
defined as:

int8_t, int16_t, int32_t, int64_t, uint8_t, uint16_t, uint32_t, uint64_t

(There are additional types defined too, but let's keep it simple for now.)

These are extremely simple to understand - each is a signed (two's
complement) or unsigned integer of the given bit size. If the hardware
does not support such an integer type, the typedef does not exist on that
platform.

So any time you need to use an integer that is exactly 32 bits in size,
you use an "int32_t" from <cstdint> (or <stdint.h>). The code will work
on any platform from the smallest 8-bit device to the largest 64-bit
system - or if the hardware is not able to support such integers, you
get a nice clear compile-time error. And since they are so clear and
simple, they also clearly document your intentions.

With modern C++, there is little use for "int" any more - if you are
working with some sort of structure or file format, and therefore need
to know the /correct/ size rather than guessing, you should use the
/standard/ fixed-size types. Apart from that, "auto" can replace a good
many uses of "int".
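
A small sketch of that use case - a hypothetical record layout whose
field widths must be exact on every platform that compiles it:

#include <cstdint>

// Hypothetical on-wire record: each field has an exact width, so the
// field sizes cannot silently change between platforms.
struct Record {
    uint32_t id;       // always 4 bytes
    uint16_t flags;    // always 2 bytes
    uint8_t  version;  // always 1 byte
};

Padding between fields is still implementation-defined, which is why
serialisation code typically writes the fields out one at a time rather
than memcpy'ing the whole struct.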

Christopher Pisz

Feb 4, 2015, 7:41:16 PM
On 2/4/2015 6:19 PM, David Brown wrote:
> On 04/02/15 22:53, Christopher Pisz wrote:
>> On 2/4/2015 3:42 PM, Luca Risolia wrote:
>>> On 04/02/2015 22:31, Christopher Pisz wrote:
>>>
>>>> If your target platform doesn't specifically guarantee it for you,
>>>> which
>>>> it usually does. Then I'd still rather see:
>>>>
>>>> if( sizeof(unsigned char) != 1 )
>>>> {
>>>> // Throw your favorite exception here
>>>> throw std::exception("Expected one byte unsigned char. Unsupported
>>>> OS or compiler options.");
>>>> }
>>>
>>> ?! you want to use a static assertion for that..
>>>
>>>
>>
>> I wasn't aware of static assertion's existence yet. It's a C++11
>> feature? Compile time?
>>
>> I am still catching up to the latest C++ standard. It doesn't appear to be
>> in my particular compiler at work. Will try when I get home. I like
>> to learn new stuff too!
>>
>
> It looks like you have a lot of catching up to do. Static assertions
> have been a useful tool for C and C++ programming since the languages
> were created - the only difference with C++11 is that static_assert() is
> now part of the C++ standards, rather than having to be defined by a macro.

I wouldn't think so. C++ is a big huge changing beast. No-one has
everything memorized. That's why we get the big bucks, eh?

Never had to use it, or went looking for such a mechanism, because,
again, never needed it. But sure, I come across things all the time,
especially with C++11, that sound neat.

> And sizeof(unsigned char) is always 1 - it is guaranteed by the
> standards, even if a char is 16-bit or anything else. "sizeof" returns
> the size of something in terms of char, not in terms of 8-bit bytes.

I fully realize this. I was demonstrating how silly the concern was
about whether or not a char is one byte. I was saying, if you really
feel the need to check if it is one byte, then do sizeof. I may have
misunderstood earlier that the OP wants to check for bits, but I still
don't see anywhere in his code where the number of bits matters. It sure
looks like he is concerned with bytes or nibbles, but who knows...his
code is nowhere near a working example.

This entire topic has degenerated into edge cases about mutant
platforms where types can be different lengths and contain different
ranges. The OP stated no such requirement and is, has, and I bet will
continue to, write code in this style without any logical reason.

I still say, and always will say that you know your architecture up
front. In most cases, you have to in order to even compile with the
correct options.

To truly be "platform independent" you have to jump through all manner
of disgusting hoops, and in 20 years I have yet to come across any real-
life code, no matter how trivial, that was truly platform independent. I
have however come across several ugly projects littered with #if define
of the year, do windows #if define of the year, do linux, or #if define
of the year, do windows version X, but again you know your architecture.

If you are creating something so entirely trivial and amazingly small
that you never need to call another library or call the OS, then I still
hold fast to my claim that you can check sizeof at your entry point, and
as Victor pointed out earlier get the number of bits using numeric limits.

You will, I guarantee, break your project at some point using typedefs
for primitive types. I've seen it and wasted hours upon hours on it,
when the target platform was known, singular, and Windows no less.

David Brown

Feb 4, 2015, 7:42:25 PM
Your problem is that you don't understand about endianness:

<http://en.wikipedia.org/wiki/Endianness>

Once you have read that, you should understand why you are getting the
results you get.

The next step is to figure out what results you actually /want/, and if
that is different from what you get, you need to figure out how to get
it. People will be happy to help with that too, but you have to tell us
what you are really trying to do rather than posting small
non-compilable lines of code.

Christopher Pisz

Feb 4, 2015, 8:01:24 PM
Yea, it isn't exactly what I came across, but I think it gets the point
across. I dunno if that would even compile.

Windows made a define for BYTE. Some use it some don't. I'd rather never
see it and use unsigned char, because unsigned char is a byte and is
part of the language itself already. Similar to the stdint.h argument.

Windows also defines all manner of other types sometimes in more than 5
different ways. I suppose there was historical need to do so.

_bstr_t comes from COM (now with 'safety' pfft) and often I run across
programmers who mistakenly think that _bstr_t is some Windows type that
should always be used to represent a string, "because we're going to
need to turn it into one anyway when we go over COM!", which is poopy
doopy. Separate the Windows specific crap, separate out the COM crap,
into tidy interfaces, and convert before you go over the wire and when
you get things back, to keep things maintainable and testable. There are
probably ten other type defines for strings. MFC programmers love to
use all of them and make my life hell where 90% of processing is dealing
with blasted strings.

T_ is a Windows macro that resolves to LPCSTR, I think? Or LPCWSTR,
depending on whether or not you set your compiler for Unicode or multibyte.
I have a hard time arguing this, but I hate that macro. I want to deal
with std::string and std::wstring respectively. I'll be more than happy
to explicitly call the A or W version of a Windows API method in the
implementation of my interface that does the windows specific stuff.
It's there for convenience so you can switch your compiler options.
Problem is, just like the others, that someone somewhere is going to
count on it being what the define resolves to at the time and it isn't
going to work anyway when you switch from multibyte to unicode, should
that rare occasion occur, and you're probably going to have more
problems if it does.


> I consider every cast in a program as
> a warning flag that there's something going on that
> needs close attention, because the author basically says
> "I know better than the compiler" - which, unfortunately,
> often isn't the case. So, I agree that this is ugly code where
> someone obviously didn't know what he (or, maybe, she) was
> doing. But I still am convinced that the types from stdint.h
> exist for a good reason - it's just that you shouldn't use a
> knife as a screwdriver unless there's no alternative.

Now agreed.

Robert Wessel

Feb 4, 2015, 8:13:15 PM
On Wed, 04 Feb 2015 15:31:55 -0600, Christopher Pisz
The header is guaranteed* to exist, but the *type* is not, which is
the point (in this case, since the output is apparently being sent
over the network): it's a way of enforcing that that's actually the
format being prepared - and failure to compile on a platform not
supporting the type is certainly an improvement over producing
incorrect results.


*FSVO "guaranteed"


>Support is not universal at all.
>Try to compile that listing on Visual Studio 2003-2008 for example.
>Being part of C and not C++, Microsoft also deemed it a relic and dropped
>it. Later they brought it back. Granted, MS loves to make their own rules.


It's been back since at least VS2010, and it was made part of C++11
(and in almost all C++ compilers since before that).


>What traditionally happens is an old C programmer comes along on a C++
>project somewhere between 2003 and 2008 and adds 5000 lines of his own
>typedefs recreating this header, because he is just used to it being
>there. Then some other programmer comes along and directly or indirectly
>assigns some typedefed type to another type that is not typedefed, 8
>years later someone decides to change a type being used somewhere,
>because they were typedefed anyway, and bam! Welcome to 6 months of
>monotonous type domino fixing.
>
>If you want to use an unsigned int then use an unsigned int. There is no
>purpose at all to use a typedefed renaming when you intend on it being
>a specific type anyway. If you want to know the size, then use sizeof.
>If the size doesn't match, because Joe Blow created
>MyNonExistantOSThatWillNeverExist, then throw an exception and deal with
>it when you make a project-wide decision to support Joe's new OS.
>Otherwise, be a realist and know that you are targeting Windows, Linux,
>OSX, Android, or just one of the above.


Again, it's not an unsigned int, it's an unsigned char.

FWIW, we don't make much use of stdint.h, although we do usually have
a project wide header file that defines various types for use in
places where layout and sizes are important, including exact layout of
structures headed for other platforms, sizes of elementary items in
the bignum library, the line ending pattern for the platform, some
code assumes ASCII, etc... We do try to limit the use of those to
places where they're strictly needed, and we have some other platform
constraints that we require (8 bit chars, two's-complement, no padding
in a structure between arrays of chars, conventional short/int/long
sizes etc.), and we have code that checks those (as well as providing
validation for the types header). We'd probably have been able to
base some of that on stdint.h, but much of that code predates that.


>stdint.h was a feeble attempt at cross-platform guarantees that aren't
>guarantees at all, and the idea is a source of bugs time and again.


In this case it guarantees that there is an exact 8-bit unsigned
type. It's not like the uint_fast or uint_least types.


>Worse, I see a project that has in _black and white_ in its requirements
>that its sole target is Windows and will always be Windows, yet some C
>guy comes along and decides it is a great idea to make a typedef for
>every single type in the language. Which is ridiculous. Then a Windows
>CLR/.NET/whatever-else-trained guy comes along and uses all the Windows
>types, then a real C++ guy comes along and has a hissy fit much like I
>am having.
>
>I don't want to see or maintain uint8_t a = std::wstring(((wchar
>*)__bstr_t_("text")))[0];
>
>Its presence in C++ is only because C++ supports worthless error-prone
>relics from C that have no business being put into _new_ code, but are
>only there to keep legacy code working... and even then, I bet you have
>a related debugging session in its future.
>
>When you start counting on types being of a certain size without
>checking sizeof, and start shuffling bits or bytes around, you are
>asking for trouble.


The whole point of the exact-sized type is that it does that check
implicitly. If the platform does not have an exact 8-bit unsigned
type, uint8_t will not be defined, and a program using it
will not compile. And while I mostly agree with you, this question is
clearly not about a Windows platform (where longs are not 64 bits).


>> Whether it's an appropriate type to use in this instance is a
>> different question, although the (limited) context does not make it
>> unreasonable (it appears to be something that's headed out to the
>> network, and an array of exact 8-bit bytes seems a reasonable
>> requirement).
>
>If your target platform doesn't specifically guarantee it for you, which
>it usually does. Then I'd still rather see:
>
>if( sizeof(unsigned char) != 1 )
>{
> // Throw your favorite exception here
> throw std::exception("Expected one byte unsigned char. Unsupported
>OS or compiler options.");
>}
>
>// carry on using unsigned char for your project
>// without modern programmers having to sort through your 5000 typedefs.


sizeof(unsigned char) is *always* 1. You'd probably want to check
that CHAR_BIT and UCHAR_MAX are exactly 8 and 255 instead. Although
I'm not sure both are actually required.


>and you only need to do that on types where you are doing some kind of
>manipulation of bits or bytes, which hopefully, you rarely need do
>anyway in the modern day.
>
>People who grab a byte of some other type or bitmask or count on bitwise
>operators just 'cause, pee me off. There had better be a specific and
>good reason for it.


By definition, something going out on a TCP/IP network involves the
manipulation of bytes, although that may be hidden in a library you're
using. And my interpretation of the presented code is that is what's
intended. I may, of course, be wrong, but it looks like the setup for
an encryption* header, at which point the exact byte format will be an
issue. But one would hope that the OP's knowledge of encryption is
better than his knowledge of C++!


*I can't think of a single case where I've encountered the term
"nonce" that was not in a security/encryption context. Although a 64
bit nonce is on the short side in some cases.



>> Although it's probably unlikely that the OP actually
>> understands the issue well enough to have picked uint8_t for a good
>> reason.
>>
>
>I'd bet a dozen muffins that is the case and why I am making my point
>strongly. If he can't get memcpy to work, I very much doubt the target
>OS, C++ implementation, or maintainability of the code in the future has
>even been a consideration, much less what the type is in relation to
>what he wants it to be and how it relates to the other types he is using.


Obviously I agree, I just think your rant against uint8_t is a bit
stronger than justified.

Christopher Pisz

unread,
Feb 4, 2015, 8:13:29 PM2/4/15
to
On 2/4/2015 6:26 PM, David Brown wrote:
> On 05/02/15 00:07, Christopher Pisz wrote:
>
>> Regardless. Do you really prefer to maintain and decipher lines like:
>>
>> int4_t void(uint8_t x)
>> {
>> BYTE byte[2] = (wchar_t *)_bstr_t(T_"A");
>> unsigned char y = (byte[1] & x);
>> return (int)y;
>> }
>>
>> ?
>>
>
> Is it really so difficult to comprehend that the standard size-specific
> types defined in <cstdint> (or <stdint.h>) are called "standard" because
> they are standardised by the language standards?

Nope.
Is it difficult to understand that they aren't guaranteed by that very
standard?

Is it also difficult to understand that they aren't needed without a
_very_ specific scenario?

I mean, by your and other posters' logic, we might as well take unsigned
char, char, int, unsigned int, and all the other primitives right out of
the language, no?

> Types like uint8_t are
> almost universally understood by C and C++ programmers alike.

Nope. I've worked with an awful lot of C++ programmers. Somewhere
around 5k, maybe? Never seen one use any of these types. Not once.

I've also worked with a smaller number of C programmers on C++ projects,
who inevitably created 80% of the bugs. They didn't use these either,
but they sure like to create 5000 line headers to make their own.

Perhaps, if I wrote drivers for a living, I'd run across it more, but
guess what? I don't, nor do hundreds of thousands of other C++ programmers.


> People
> who write things like "BYTE" are either dealing with ancient code (from
> the pre-C99 days, before the types were standardised), or using Windows
> illogical and inconsistent set of typedefs for the Windows APIs.

Oh, you mean the typedefs they invented to alias already existing types?
You are correct sir, quite illogical!




Robert Wessel

unread,
Feb 4, 2015, 8:30:15 PM2/4/15
to
How does that help?

Christopher Pisz

unread,
Feb 4, 2015, 8:30:31 PM2/4/15
to
On 2/4/2015 6:35 PM, David Brown wrote:
> On 05/02/15 00:22, Christopher Pisz wrote:
>> On 2/4/2015 4:56 PM, Christian Gollwitzer wrote:
>>> Fixed-width integers are very useful types in many fields of computing.
>>> In fact, I claim that they are more useful than the "polymorphic" int,
>>> long, short nonsense. For example, to implement the Mersenne twister you
>>> need to use unsigned 64 bit arithmetic. Or to read image files from
>>> disk into memory, you need unsigned 8 bit or unsigned 16 bit integer
>>> types. Extremely easy if you have either cstdint or inttypes, but
>>> extremely annoying when you need to remember that long on Windows is
>>> always 32bit regardless of pointer size, whereas it is 64 bit on a 64bit
>>> Linux and 32 bit on a 32bit Linux etc.
>>
>> How is remembering an int is 32bits on windows and 64bit on 64bit linux
>> harder than remembering what the hell a uint8_t is and that you better
>> be damn sure that _no one ever_ assigns anything but another uint8_t to
>> it directly or indirectly?
>>
>
> The fact that with every post you are inventing a new way to get things
> wrong demonstrates /exactly/ why the fixed-size types were standardised.

It does?

> Just to save you from further embarrassment, the fixed-width types are
> defined as:
>
> int8_t, int16_t, int32_t, int64_t, uint8_t, uint16_t, uint32_t, uint64_t
>
> (There are additional types defined too, but let's keep it simple for now.)

I'm not embarrassed. Never used them, never will, and if I see them they
are going to be deleted. I work on a Windows platform and I write
standard native C++.

> These are extremely simple to understand - each is a signed (two's
> complement) or unsigned integer of the given bit size. If the hardware
> does not support such a integer type, the typedef does not exist on that
> platform.
>
> So any time you need to use an integer that is exactly 32 bits in size,
> you use an "int32_t" from <cstdint> (or <stdint.h>). The code will work
> on any platform from the smallest 8-bit device to the largest 64-bit
> system

Wrong.

The code will cease to behave as expected as soon as someone assigns a
type that does not match what the define resolves to at the time. Which
may or may not result in a compiler warning, which may or may not be
completely ignored, which may or may not cause countless hours of
debugging, which may or may not result in finding out the C-style
ex-driver writing guy put this crap in your code without any need for it.

> - or if the hardware is not able to support such integers, you
> get a nice clear compile-time error. And since they are so clear and
> simple, they also clearly document your intentions.
>
> With modern C++, there is little use for "int" any more - if you are
> working with some sort of structure or file format, and therefore need
> to know the /correct/ size rather than guessing, you should use the
> /standard/ fixed-size types. Apart from that, "auto" can replace a good
> many uses of "int".

When they remove int, unsigned int, float, and double from the language,
I'll be happy to use your stdint.h types.

If I am working with a file format, I let the standard file IO take care
of it for me. If I need to do something fancy, I'll still use the
standard file IO and write my own classes à la "Standard C++ IOStreams
and Locales: Advanced."

If I need to work with a structure that needs a fixed number of bytes,
then I am going to question what I am doing that for in 2015. Given that
I do not work on hardware drivers and there are all manner of
standardized communication mechanisms out there.

If I am doing anything that requires me to know the exact size in bits,
I am going to stop and ask myself, what the hell I am doing and if it is
indeed necessary in 2015. Most likely I am trying to reinvent something
that is already done, well tested, and widely used, and am going to
create more unmaintainable buggy messes. Again, given that I do not work
on hardware drivers.

Christopher Pisz

unread,
Feb 4, 2015, 8:36:25 PM2/4/15
to
I think he is mocking :P

Ian Collins

unread,
Feb 4, 2015, 8:57:48 PM2/4/15
to
Christopher Pisz wrote:
>
> I'm not embarassed. Never used them, never will, and if I see them they
> are going to be deleted. I work on a Windows platform and I write
> standard native C++.

So how do you interface with networking functions in your perfect, pure
windows world?

> The code will cease to behave as expected as soon as someone assigns a
> type that does not match what the define resolves to at the time. Which
> may or may not result in a compiler warning, which may or may not be
> completely ignored, which may or may not cause countless hours of
> debugging, which may or may not result in finding out the C-style
> ex-driver writing guy put this crap in your code without any need for it.

How then is this different from someone assigning a type that does not
match a naked type?

--
Ian Collins

Öö Tiib

unread,
Feb 4, 2015, 10:08:03 PM2/4/15
to
On Thursday, 5 February 2015 03:13:29 UTC+2, Christopher Pisz wrote:
> On 2/4/2015 6:26 PM, David Brown wrote:
> > On 05/02/15 00:07, Christopher Pisz wrote:
> >
> >> Regardless. Do you really prefer to maintain and decipher lines like:
> >>
> >> int4_t void(uint8_t x)
> >> {
> >> BYTE byte[2] = (wchar_t *)_bstr_t(T_"A");
> >> unsigned char y = (byte[1] & x);
> >> return (int)y;
> >> }
> >>
> >> ?
> >>
> >
> > Is it really so difficult to comprehend that the standard size-specific
> > types defined in <cstdint> (or <stdint.h>) are called "standard" because
> > they are standardised by the language standards?
>
> Nope.
> Is it difficult to understand that they aren't guaranteed by that very
> standard?

That "not guaranteed" means that when you have <cstdint> included but
'uint8_t' does not compile then the system does not have 8 bit unsigned
bytes. You may achieve same effect with:

#include <limits>
static_assert( std::numeric_limits<unsigned char>::digits == 8, "should have 8 bit unsigned bytes");

I prefer to use 'uint8_t' where I need 8 bit bytes; it achieves the same,
documents it, and is a lot shorter to type.

> Is it also difficult to understand that they aren't needed without a
> _very_ specific scenario?

In practice you always need them when reading or storing data in some
binary format or communicating over some binary protocol.
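
For example, a minimal sketch of parsing a fixed wire format (the
header layout here is made up):

#include <cstdint>
#include <cstring>

struct Header {               // hypothetical 8-byte wire header
    std::uint32_t magic;      // exactly 4 bytes on the wire
    std::uint16_t version;    // exactly 2 bytes
    std::uint16_t length;     // exactly 2 bytes
};

void parse(const std::uint8_t* wire, Header& h)
{
    std::memcpy(&h.magic,   wire,     sizeof h.magic);
    std::memcpy(&h.version, wire + 4, sizeof h.version);
    std::memcpy(&h.length,  wire + 6, sizeof h.length);
    // byte order on the wire still needs handling separately
}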

> I mean, by your and other poster's logic, we might as well take unsigned
> char, char, int, unsigned int, and all the other primitives right out of
> the language, no?

Silver bullets are always wrong.

Let me look. I use 'int' when the value can't logically exceed 30 000, 'long'
I use when the value can't logically exceed 2 000 000 000, and 'long long'
I don't use because 'int64_t' is slightly less to type.
I never use 'signed char' or 'short' because those make sense only when
I care about bits.
I never use any 'unsigned' variants. Lots to type, and also if I don't need
bits but it is unsigned then it is either 'size_t' or 'wchar_t' anyway.
In the rest of the cases I care about bits, and so 'int8_t' or 'uint32_t'
document it a lot better.

The types that I never use do not bother me, maybe someone else needs
them and so let them be in language, they may do what they want.

> > Types like uint8_t are
> > almost universally understood by C and C++ programmers alike.
>
> Nope. I've worked with an aweful lot of C++ programmers. Somewhere
> around 5k, maybe? Never seen one use any of these types. Not once.
>
> I've also worked with a lesser amount of C programmer on C++ projects,
> whom inevitably created 80% of the bugs. They didn't use these either,
> but they sure like to create 5000 line headers to make their own.
>
> Perhaps, if I wrote drivers for a living, I'd run across it more, but
> guess what? I don't, nor do hundreds of thousands of other C++ programmers.

Argumentum ad populum still remains logically fallacious by definition.
I do not care how many millions do it. If it does not suit my needs
or style then I don't do it.

Christian Gollwitzer

unread,
Feb 5, 2015, 1:52:18 AM2/5/15
to
Am 05.02.15 um 00:22 schrieb Christopher Pisz:
> On 2/4/2015 4:56 PM, Christian Gollwitzer wrote:
>> Fixed-width integers are very useful types in many fields of computing.
>> In fact, I claim that they are more useful than the "polymorphic" int,
>> long, short nonsense. For example, to implement the Mersenne twister you
>> need to use unsigned 64 bit arithmetic. Or to read image files from
>> disk into memory, you need unsigned 8 bit or unsigned 16 bit integer
>> types. Extremely easy if you have either cstdint or inttypes, but
>> extremely annoying when you need to remember that long on Windows is
>> always 32bit regardless of pointer size, whereas it is 64 bit on a 64bit
>> Linux and 32 bit on a 32bit Linux etc.
>
> How is remembering an int is 32bits on windows and 64bit on 64bit linux
> harder than remembering what the hell a uint8_t is and that you better
> be damn sure that _no one ever_ assigns anything but another uint8_t to
> it directly or indirectly?

Do you have the same problem remembering what the hell the sin()
function from cmath means and how you assure that nobody ever reassigns
it to the cosine function? uint8_t etc. are defined in standard C++;
they are part of the language. And they are defined with a precise meaning.


> It's a C problem because those who program C-style are the ones who
> use it. It's on my list of craptastic habits I run across on the job
> from bug generating programmers...that and the header says so.

Those who need fixed-width integers are the ones who use it. If you
don't have fixed-width arithmetic but need it, it's very ugly to
express with lots of &0xFF etc. littered in the code. Why the hell, if
every current platform provides hardware support for these types anyway?
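
To illustrate (a tiny sketch of my own): 8-bit wraparound emulated by
hand versus falling out of the exact-width type:

#include <cstdint>

unsigned add_masked(unsigned a, unsigned b)
{
    return (a + b) & 0xFFu;                   // wraparound emulated with a mask
}

std::uint8_t add_fixed(std::uint8_t a, std::uint8_t b)
{
    return static_cast<std::uint8_t>(a + b);  // the conversion does the wrapping
}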

Christian

Ian Collins

unread,
Feb 5, 2015, 2:25:09 AM2/5/15
to
Christopher Pisz wrote:
>
> It's a C problem because those who program C-style are the ones who
> use it. It's on my list of craptastic habits I run across on the job
> from bug generating programmers...that and the header says so.

To paraphrase Flibble: utter bollocks mate.

A good percentage of the C++ code I write relies on fixed width types and
the code is about as far from "C-style" as you can get. Maybe you are
lucky enough to code in a bubble that doesn't interface with the real
world. Many of us don't and that most certainly does not make us
"C-style" programmers.

--
Ian Collins

David Brown

unread,
Feb 5, 2015, 4:02:27 AM2/5/15
to
No arguments there - C++ is big, and getting bigger.

But static assertions are such an important part of good development
practice (as I see it) that many people have used them heavily long
before they became part of the standard. There are lots of pre-C++11
implementations around (google is your friend) - a simple one like this
works fine for most uses:

#define STATIC_ASSERT_NAME_(line) STATIC_ASSERT_NAME2_(line)
#define STATIC_ASSERT_NAME2_(line) assertion_failed_at_line_##line
#define static_assert(claim, warning) \
    typedef struct { \
        /* negative array size => compile-time error when the claim fails */ \
        char STATIC_ASSERT_NAME_(__COUNTER__) [(claim) ? 2 : -2]; \
    } STATIC_ASSERT_NAME_(__COUNTER__)


The key difference with the language support for static_assert is that
error messages are clearer.
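
Usage then looks just like the later builtin, e.g.:

static_assert(sizeof(void*) == 8, "expected a 64-bit target");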

> Never had to use it, or went looking for such a mechanism, because,
> again, never needed it. But sure, I come across things all the time,
> especially with C++11, that sound neat.

Static assertions let you make your assumptions explicit, and let the
compiler check those assumptions at zero cost.

>
>> And sizeof(unsigned char) is always 1 - it is guaranteed by the
>> standards, even if a char is 16-bit or anything else. "sizeof" returns
>> the size of something in terms of char, not in terms of 8-bit bytes.
>
> I fully realize this. I was demonstrating how silly the concern was
> about whether or not a char is one byte. I was saying, if you really
> feel the need to check if it is one byte, then do sizeof.

Unfortunately for you, with all the other mistakes you have been making
with these posts, the assumption was that you were making another one.

> I may have
> misunderstood earlier that the OP wants to check for bits, but I still
> don't see anywhere in his code where the number of bits matters. It sure
> looks like he is concerned with bytes or nibbles, but who knows...his
> code is nowhere near a working example.

No, it does not look like anything of the sort. It looks like he wants
to examine the individual 8-bit bytes that make up the long integer he
has. And when you want to look at 8-bit bytes, uint8_t is the /correct/
type to use - anything else is wrong.

It is true that there were several errors in his code. One of them is
the use of "long unsigned int" instead of "uint64_t", since the size of
"long unsigned int" varies between platforms, and of the common PC
systems it is only 64-bit Linux (and other *nix) that have 64-bit longs.

>
> This entire topic has degenerated into edge cases about mutant
> platforms where types can be different lengths and contain different
> ranges. The OP stated no such requirement and is, has, and I bet will
> continue to, write code in this style without any logical reason.

No, it is about being explicit and using the types you want, rather than
following bad Windows programmers' practice of picking a vaguely defined
type that looks like it works today, and ignoring any silent corruption
you will get in the future.

>
> I still say, and always will say that you know your architecture up
> front. In most cases, you have to in order to even compile with the
> correct options.

An increasing proportion of code is used on different platforms. Even
if you stick to the PC world, there are four major targets - Win32,
Win64, Linux32 and Linux64, which can have subtle differences. I agree
that one should not go overboard about portability - for example, code
that accesses the Windows API will only ever run on Windows. But it is
always better to be clear and explicit about what you are doing - if you
want a type that is 8 bits, or 64 bits, then say so.
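
Concretely, the well-known LLP64/LP64 difference across those four
targets (type widths in bits):

             Win32   Win64   Linux32   Linux64
int           32      32       32        32
long          32      32       32        64
pointer       32      64       32        64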

The alternative is to say "I want a type to hold this information, but I
don't care about the details" - in C++11, that is written "auto". It is
not written "int".

When you write "int", you are saying "I want a type that can store at
least 16-bit signed integers, and is fast". Arguably you might know
your code will run on at least a 32-bit system, and then it means "at
least 32-bit signed integer" - but it does /not/ mean /exactly/ 32 bits.
"Long int" is worse - on some current and popular systems it is 32
bits, on others it is 64 bits.

>
> To truly be "platform independent" you have to jump through all manner
> of disgusting hoops and I have never in 20 years come across any real
> life code, no matter how trivial, that was truly platform independent.

Perhaps you live in a little old-fashioned Windows world - in the *nix
world there is a vast array of code that is portable across a wide range
of systems. Many modern programs are portable across Linux, Windows and
Mac systems. And in the embedded world, there is a great deal of code
that can work fine on cpus from 8 bits to 64 bits, with big and little
endian byte ordering.

Of course, portable code like this depends (to a greater or lesser
extent) on non-portable abstraction layers to interact with hardware or
the OS.

And there are usually limits in portability - it is common to have some
basic requirements such as 8-bit chars (though there are real-world
systems with 16-bit and 32-bit chars) or two's complement arithmetic.
Writing code that is portable across systems that fall outside of those
assumptions is often challenging, interesting, and pointless.

> I
> have however come across several ugly projects littered with #if define
> of the year, do windows #if define of the year, do linux, or #if define
> of the year, do windows version X, but again you know your architecture.
>
> If you are creating something so entirely trivial and amazingly small
> that you never need to call another library or call the OS, then I still
> hold fast to my claim that you can check sizeof at your entry point, and
> as Victor pointed out earlier get the number of bits using numeric limits.
>
> You will, I guarantee, break your project at some point using typedefs
> for primitive types. I've seen it and wasted hours upon hours on it,
> when the target platform was known, singular, and Windows no less.
>

You can guarantee that you will break your project when you use the
/wrong/ types. And this attitude is particularly prevalent amongst
Windows programmers - it's just "64K should be enough for anyone" all
over again. It means that programs are pointlessly broken when changes
are made or the target /does/ change - such as from Win32 to Win64 -
because some amateur decided that they were always going to work on
Windows, and you can always store a pointer in a "long int".
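
That last mistake, sketched out (my own LLP64 illustration):

#include <cstdint>

void* round_trip(void* p)
{
    // long v = (long)p;  // wrong: silently truncates on Win64 (LLP64)
    std::intptr_t v = reinterpret_cast<std::intptr_t>(p);  // always fits
    return reinterpret_cast<void*>(v);  // guaranteed to compare equal to p
}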


David Brown

unread,
Feb 5, 2015, 4:41:49 AM2/5/15
to
On 05/02/15 02:13, Christopher Pisz wrote:
> On 2/4/2015 6:26 PM, David Brown wrote:
>> On 05/02/15 00:07, Christopher Pisz wrote:
>>
>>> Regardless. Do you really prefer to maintain and decipher lines like:
>>>
>>> int4_t void(uint8_t x)
>>> {
>>> BYTE byte[2] = (wchar_t *)_bstr_t(T_"A");
>>> unsigned char y = (byte[1] & x);
>>> return (int)y;
>>> }
>>>
>>> ?
>>>
>>
>> Is it really so difficult to comprehend that the standard size-specific
>> types defined in <cstdint> (or <stdint.h>) are called "standard" because
>> they are standardised by the language standards?
>
> Nope.
> Is it difficult to understand that they aren't guaranteed by that very
> standard?

They are guaranteed by the standards to exist and function exactly the
way they say, as long as the hardware supports it. So by using them,
you get what you want - or you get a clear compile-time failure. And
unless you are working with portability across rather specialised
architectures, the types /will/ exist. There are very few modern cpus
that don't use two's complement integers, and the only modern cpus that
don't have 8-bit chars are specialised DSP's. So for almost all
practical purposes, the types /do/ exist.

>
> Is it also difficult to understand that they aren't needed without a
> _very_ specific scenario?

The OP needed a type big enough to hold 0xacde480000000001. Using "long
unsigned int" is wrong, because it is not big enough on many platforms.
Using "uint64_t" is correct, because it will always do the job.

The fixed-size integers are not specialised - they are standard, and
form part of good programming practice when you need a particular size.

<http://www.cplusplus.com/reference/cstdint/>
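
A minimal sketch of the point, reusing the value from the original post:

#include <cstdint>
#include <cinttypes>
#include <cstdio>

int main()
{
    std::uint64_t address = 0xacde480000000001u;  // always fits
    std::printf("%" PRIx64 "\n", address);        // prints acde480000000001
}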

>
> I mean, by your and other poster's logic, we might as well take unsigned
> char, char, int, unsigned int, and all the other primitives right out of
> the language, no?

Yes.

If C and C++ (this is common to both languages) were being re-created
today, types such as "unsigned char" and "long int" /would/ be taken out
of the language. The fundamental types would be defined as fixed size,
two's complement - support for other types of signed integer would not
exist in a language designed today. It is likely that they would be
defined with some general mechanism (looking a bit like templates), but
that's just a detail.

Types named "int", "long_int", etc., might be defined with typedefs for
convenience. The definitions would be:

typedef int_fast16_t int;
typedef int_fast32_t long_int;

etc.

Personally, I would not bother with those in a new language - especially
when "auto" exists.

I would also include ranged types as a key feature - so that you would
use something like "range<0, 999_999>" for an integer of a specific
range, which would be implemented (in this case) as a uint32_t with
compile-time checking of values when possible.


(There are complications involving type compatibility that I am not
considering here - such as the difference between an int* and a long
int* even on platforms with 32-bit int and long int.)


>
>> Types like uint8_t are
>> almost universally understood by C and C++ programmers alike.
>
> Nope. I've worked with an awful lot of C++ programmers. Somewhere
> around 5k, maybe? Never seen one use any of these types. Not once.

I think you work with Windows C++ programmers, who are used to the mess
that MS has made of types. Outside that, the types will be more common.

I also said that the types would be understood - that does not mean they
are commonly used (in the Windows world).

>
> I've also worked with a lesser amount of C programmer on C++ projects,
> whom inevitably created 80% of the bugs. They didn't use these either,
> but they sure like to create 5000 line headers to make their own.
>

Bad programming practice is not restricted to C programmers - C++
programmers do it too. And the inability of some C programmers to write
good C++ code does not mean there is a problem with an important part of
the C++ language standards just because it was first used in C.

> Perhaps, if I wrote drivers for a living, I'd run across it more, but
> guess what? I don't, nor do hundreds of thousands of other C++ programmers.
>

Certainly you would come across such types more in low-level programming
- drivers, interfaces, embedded systems, etc.

But if you have ever used a binary file format, or any other situation
where you transferred data into or out of your program in a binary
format, then you should have used these types.

>
>> People
>> who write things like "BYTE" are either dealing with ancient code (from
>> the pre-C99 days, before the types were standardised), or using Windows
>> illogical and inconsistent set of typedefs for the Windows APIs.
>
> Oh, you mean the typedefs they invented to alias already existing types?
> You are correct sir, quite illogical!
>

I mean code that uses home-made, non-standard typedefs that are often
vague (such as "WORD") - instead of using the well-known, well-defined,
well-supported standard fixed-size types that are part of the C and C++
languages.


David Brown

unread,
Feb 5, 2015, 4:49:35 AM2/5/15
to
On 05/02/15 02:30, Christopher Pisz wrote:
> On 2/4/2015 6:35 PM, David Brown wrote:
>> On 05/02/15 00:22, Christopher Pisz wrote:
>>> On 2/4/2015 4:56 PM, Christian Gollwitzer wrote:
>>>> Fixed-width integers are very useful types in many fields of computing.
>>>> In fact, I claim that they are more useful than the "polymorphic" int,
>>>> long, short nonsense. For example, to implement the Mersenne twister
>>>> you
>>>> need to use unsigned 64 bit arithmetic. Or to read image files from
>>>> disk into memory, you need unsigned 8 bit or unsigned 16 bit integer
>>>> types. Extremely easy if you have either cstdint or inttypes, but
>>>> extremely annoying when you need to remember that long on Windows is
>>>> always 32bit regardless of pointer size, whereas it is 64 bit on a
>>>> 64bit
>>>> Linux and 32 bit on a 32bit Linux etc.
>>>
>>> How is remembering an int is 32bits on windows and 64bit on 64bit linux
>>> harder than remembering what the hell a uint8_t is and that you better
>>> be damn sure that _no one ever_ assigns anything but another uint8_t to
>>> it directly or indirectly?
>>>
>>
>> The fact that with every post you are inventing a new way to get things
>> wrong demonstrates /exactly/ why the fixed-size types were standardised.
>
> It does?
>

Yes.

Rather than making random and incorrect guesses as to the size of an
"int" on different platforms as you did here, people interested in
writing clear, correct and portable code will write "int32_t" when they
want a 32-bit signed integer, and "int64_t" when they want a 64-bit
signed integer. Then they won't have trouble with real differences
(such as the size of "long int" on different platforms) or imagined
differences (such as your mistake about "int" on 64-bit Linux).


Juha Nieminen

unread,
Feb 5, 2015, 6:28:37 AM2/5/15
to
ghada glissa <ghada...@gmail.com> wrote:
> but i can't resolve the problem of order with memcpy.

You seem to think that memcpy is reversing your bytes. It doesn't do
anything like that. It just copies bytes verbatim.

Your "problem" of little-endianess is in the original value as well.
Just try printing its bytes directly and you'll see. memcpy isn't
changing anything.

The "problem" is in your hardware, not in memcpy or the compiler.
The hardware internally represents integral values with least-significant
bytes first, and that's what you are seeing.

If you want to print the bytes in the order you want, you have to
print them in reverse. (Although if you want your program to be
portable, you'll have to check the endianness first. This is an
easy check with a simple trick.)
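
The usual trick looks something like this (a sketch; a runtime check
only, but that's usually all you need):

#include <cstdint>
#include <cstring>

bool is_little_endian()
{
    std::uint32_t value = 1;
    unsigned char first_byte;
    std::memcpy(&first_byte, &value, 1);  // look at the lowest-addressed byte
    return first_byte == 1;               // LSB first means little-endian
}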

--- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---

Chris Vine

unread,
Feb 5, 2015, 7:53:06 AM2/5/15
to
On Thu, 05 Feb 2015 10:02:17 +0100
David Brown <david...@hesbynett.no> wrote:
> No, it does not look like anything of the sort. It looks like he
> wants to examine the individual 8-bit bytes that make up the long
> integer he has. And when you want to look at 8-bit bytes, uint8_t is
> the /correct/ type to use - anything else is wrong.

Well, sort of. He appeared to be using uint8_t* to alias individual
bytes within an unsigned long. Such aliasing is only standard
conforming if in fact uint8_t is a typedef for unsigned char, which
gives you something of a chicken and egg situation.

That does not mean that I do not agree with the general point that,
subject to aliasing issues, if you mean "my code only works with 8-bit
2's complement char types without padding", then int8_t is a perfectly
reasonable way of documenting the fact and obtaining useful compiler
errors (namely, a missing type) if the requirements are not met.

Chris

David Brown

unread,
Feb 5, 2015, 8:07:05 AM2/5/15
to
On 05/02/15 13:52, Chris Vine wrote:
> On Thu, 05 Feb 2015 10:02:17 +0100
> David Brown <david...@hesbynett.no> wrote:
>> No, it does not look like anything of the sort. It looks like he
>> wants to examine the individual 8-bit bytes that make up the long
>> integer he has. And when you want to look at 8-bit bytes, uint8_t is
>> the /correct/ type to use - anything else is wrong.
>
> Well, sort of. He appeared to be using uint8_t* to alias individual
> bytes within an unsigned long. Such aliasing is only standard
> conforming if in fact uint8_t is a typedef for unsigned char, which
> gives you something of a chicken and egg situation.
>

The only possible way to implement uint8_t (on an architecture that
supports it, of course) is as a "#define" or typedef to either "char"
(if the target plain char is unsigned) or "unsigned char". There are no
other primitive types that it could be - and the standards require it to
be an alias for a primitive type.

Anyway, using memcpy works around aliasing - it always works as though
it copies the source to the destination using char* pointers.

In general, of course, aliasing can be an issue - and you have to
consider that an int32_t and an int may be subject to different aliasing
even if they are both 32 bits (since the int32_t could be a typedef for
"long int" rather than "int").

> That does not mean that I do not agree with the general point that,
> subject to aliasing issues, if you mean "my code only works with 8-bit
> 2's complement char types without padding", then int8_t is a perfectly
> reasonable way of documenting the fact and obtaining useful compiler
> errors (namely, a missing type) if the requirements are not met.
>

It is also suitable for the weaker use "I am only considering simple
8-bit two's complement bytes without padding" - maybe it could work with
other sized bytes, but it's not something you care about.


Chris Vine

unread,
Feb 5, 2015, 9:03:26 AM2/5/15
to
On Thu, 05 Feb 2015 14:06:53 +0100
David Brown <david...@hesbynett.no> wrote:
> On 05/02/15 13:52, Chris Vine wrote:
> > On Thu, 05 Feb 2015 10:02:17 +0100
> > David Brown <david...@hesbynett.no> wrote:
> >> No, it does not look like anything of the sort. It looks like he
> >> wants to examine the individual 8-bit bytes that make up the long
> >> integer he has. And when you want to look at 8-bit bytes, uint8_t
> >> is the /correct/ type to use - anything else is wrong.
> >
> > Well, sort of. He appeared to be using uint8_t* to alias individual
> > bytes within an unsigned long. Such aliasing is only standard
> > conforming if in fact uint8_t is a typedef for unsigned char, which
> > gives you something of a chicken and egg situation.
> >
>
> The only possible way to implement uint8_t (on an architecture that
> supports it, of course) is as a "#define" or typedef to either "char"
> (if the target plain char is unsigned) or "unsigned char". There are
> no other primitive types that it could be - and the standards require
> it to be an alias for a primitive type.

I guess that must follow from the fact that C requires char to be at
least 8 bits in size.

Given that identity, I guess it depends on what you are documenting with
your use of either char or int8_t in any particular piece of code - your
conformity with size expectations or with aliasing requirements.

> Anyway, using memcpy works around aliasing - it always works as though
> it copies the source to the destination using char* pointers.

Good point.

Chris

Chris Vine

unread,
Feb 5, 2015, 9:15:51 AM2/5/15
to
Actually it occurs to me that I am not correct with respect to int8_t,
because that could be typedef'ed to signed char (even on an
implementation where char is signed), whereas the aliasing exception
only applies to the char or unsigned char types. So if you want to
alias, stick with char, unsigned char and uint8_t.

Chris


David Brown

unread,
Feb 5, 2015, 10:09:56 AM2/5/15
to
On 05/02/15 15:15, Chris Vine wrote:
> On Thu, 5 Feb 2015 14:03:05 +0000
> Chris Vine <chris@cvine--nospam--.freeserve.co.uk> wrote:
>> On Thu, 05 Feb 2015 14:06:53 +0100
>> David Brown <david...@hesbynett.no> wrote:
>>> On 05/02/15 13:52, Chris Vine wrote:
>>>> On Thu, 05 Feb 2015 10:02:17 +0100
>>>> David Brown <david...@hesbynett.no> wrote:
>>>>> No, it does not look like anything of the sort. It looks like he
>>>>> wants to examine the individual 8-bit bytes that make up the long
>>>>> integer he has. And when you want to look at 8-bit bytes,
>>>>> uint8_t is the /correct/ type to use - anything else is wrong.
>>>>
>>>> Well, sort of. He appeared to be using uint8_t* to alias
>>>> individual bytes within an unsigned long. Such aliasing is only
>>>> standard conforming if in fact uint8_t is a typedef for unsigned
>>>> char, which gives you something of a chicken and egg situation.
>>>>
>>>
>>> The only possible way to implement uint8_t (on an architecture that
>>> supports it, of course) is as a "#define" or typedef to either
>>> "char" (if the target plain char is unsigned) or "unsigned char".
>>> There are no other primitive types that it could be - and the
>>> standards require it to be an alias for a primitive type.
>>
>> I guess that must follow from the fact that C requires char to be at
>> least 8 bits in size.

For standard integer types, char must be at least 8 bits, while short
must be at least 16 bits - thus if there is a standard integer type that
uses 8 bits, it must be char (plus signed char and unsigned char).

I think in theory it is possible for an implementation to define an
additional unsigned 8-bit extended integer type and use that for uint8_t
rather than unsigned char - but I cannot imagine a compiler doing so.

>>
>> Given that identity, I guess it depends on what you are documenting
>> with your use of either char or int8_t in any particular piece of
>> code - your conformity with size expectations or with aliasing
>> requirements.

Documenting your intentions clearly in the code is one of the
important reasons for using fixed-size types. If you use "char",
you are saying "something like a simple 7-bit ASCII character or letter,
or a little piece of memory". If you use "uint8_t", you are saying "a
standard 8-bit byte of memory following sensible rules for memory access
- or alternatively an unsigned integer between 0 and 255". "char" is a
vague, sort-of type whose details can vary between targets - its
signedness can even vary according to command-line options for some
compilers. "uint8_t" is clear and tightly defined.

I would not be averse to a standard type named something like "mem8_t"
as a type specifically aimed at treating data as a raw memory contents,
thus leaving "uint8_t" for numbers (and with mem8_t, mem16_t, and
mem32_t, etc., having the "alias everything" feature, which would then
not apply to uint8_t). But until the standards folk add something like
that to the standard headers, "uint8_t" is the best type for the purpose.

>
> Actually it occurs to me that I am not correct with respect to int8_t,
> because that could be typedef'ed to signed char (even on an
> implementation where char is signed), whereas the aliasing exception
> only applies to the char or unsigned char types. So if you want to
> alias, stick with char, unsigned char and uint8_t.
>

That is correct. "int8_t" is an integer number between -128 and 127,
taking exactly one standard byte of storage using two's complement
arithmetic. Oddly, this is slightly different from C (C11 standard),
where the aliasing exception also applies to signed char.



Christopher Pisz

unread,
Feb 5, 2015, 11:19:15 AM2/5/15
to
Because it is hidden. It matched at one time and not at another.

Scott Lurndal

unread,
Feb 5, 2015, 12:19:31 PM2/5/15
to
Christopher Pisz <nos...@notanaddress.com> writes:

>It is a relic from C99. It is also not guaranteed to exist. It even says so.
>
[mindless rude rant elided]

>
>If you want to use an unsigned int then use an unsigned int. There is no
>purpose at all to use a typedefed renaming when you intend on it being

You have no clue about what requirements exist in programming,
do you?

There are many reasons to expect a specific size. Matching hardware
registers is a common reason in simulations, in operating systems,
in hypervisors, in embedded systems. That's why the explicitly sized
typedefs were added to the standards in the first place.
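
A typical example from that world (the device layout and address are
made up here, purely for illustration):

#include <cstdint>

struct UartRegs {                      // hypothetical memory-mapped device
    volatile std::uint32_t data;       // each register is exactly 32 bits
    volatile std::uint32_t status;
    volatile std::uint32_t control;
};

UartRegs* const uart = reinterpret_cast<UartRegs*>(0x40000000);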

[more mindless elitist ranting elided]

Scott Lurndal

unread,
Feb 5, 2015, 12:22:30 PM2/5/15
to
Christopher Pisz <nos...@notanaddress.com> writes:
>On 2/4/2015 4:43 PM, Jens Thoms Toerring wrote:

>> Sounds like you haven't had to deal a lot with programming near
>> to the hardware where you often need to know exactly how many
>> bytes a type has (e.g. you need to write to a 16-bit wide
>> register of some memory-mapped device) and using an unsigned
>> short int, hoping that it will have 2 bytes everywhere).
>
>No, thank goodness. That's where C programmers lurk and I don't get
>along with them at all. Imagine that!

There are millions of lines of such code written in C++. I've personally
written two hypervisors, two full-system simulations and one
distributed operating system (for a massively parallel machine) in C++.

Scott Lurndal

unread,
Feb 5, 2015, 12:27:15 PM2/5/15
to
Mr Flibble <flibbleREM...@i42.co.uk> writes:
>On 04/02/2015 23:41, Christopher Pisz wrote:
>>
>> I hope I never ever have to work on your code.
>
>The feeling is most definitely mutual mate; I wouldn't want you anywhere
>near my code.

I'll second this. Anyone who changes working existing code to match their
preferences (i.e. the comment about deleting usage of stdint.h types)
should be fired in any respectable development environment.

Christopher Pisz

unread,
Feb 5, 2015, 12:40:04 PM2/5/15
to
That's the thing. IT DOESN'T WORK!

Anyone who feels the need to use typedefed primitives on a project where
the target platform is known, and has been specified in the
requirements, and will never change, alongside programmers who will not
use them, should get hit by a bus.

Flibble clearly stated he will _always_ use typedefed primitives _everywhere_

Mr Flibble

unread,
Feb 5, 2015, 1:15:31 PM2/5/15
to
Not everywhere: I also use the C++11 auto keyword.

/Flibble

Ian Collins

unread,
Feb 5, 2015, 2:13:45 PM2/5/15
to
Ah, so you do programme inside a bubble.

--
Ian Collins

Ian Collins

unread,
Feb 5, 2015, 2:18:26 PM2/5/15
to
Christopher Pisz wrote:
> On 2/4/2015 7:57 PM, Ian Collins wrote:
>> Christopher Pisz wrote:
>>>
>>> I'm not embarassed. Never used them, never will, and if I see them they
>>> are going to be deleted. I work on a Windows platform and I write
>>> standard native C++.
>>
>> So how do you interface with networking functions in your perfect, pure
>> windows world?

Care to address this question?

>>> The code will cease to behave as expected as soon as someone assigns a
>>> type that does not match what the define resolves to at the time. Which
>>> may or may not result in a compiler warning, which may or may not be
>>> completely ignored, which may or may not cause countless hours of
>>> debugging, which may or may not result in finding out the C-style
>>> ex-driver writing guy put this crap in your code without any need for it.
>>
>> How then is this different form someone assigning a type that does not
>> match a naked type?
>
> Because it is hidden. It matched at one time and not at another.

Exactly.

Many days of programmer time have been wasted porting code to different
bit sized versions of the same platform because the original programmers
made assumptions about the sizes of primitive types.

--
Ian Collins

Öö Tiib

unread,
Feb 5, 2015, 2:22:07 PM2/5/15
to
What is the point of writing an app that compiles only for Windows 7?
OK, 4 years later it will perhaps still run, with odd quirks,
in some sort of oldie crap compatibility pane of Windows 10.

> Flibble clearly stated he will _always_ use typedefed primitives _everywhere_

So what? It may be even *required* by coding standard:
http://www.codingstandard.com/rule/7-1-6-use-class-types-or-typedefs-to-abstract-scalar-quantities-and-standard-integer-types/

If the owners of the code base have decided on compliance with such then
there is *nothing* to do. Your code will not pass review and will be reverted.
If you rant about how much wiser you are then you will be fired as incompetent.

Christopher Pisz

unread,
Feb 5, 2015, 2:29:44 PM2/5/15
to
Well you aren't most programmers. You are Ian Collins, guru #2 on the
newsgroup, and one could have some faith that you know what you are
doing. That isn't the case with the OP and code like what the OP wrote
is what I get to fix all day. Where you look at it and say to yourself,
"What in the hell?!"

All rants aside, what are you going to do when you've used these fixed
types and have to call code not in your control that returns types that
aren't fixed? Or when perhaps third party code calls your code?
The whole point is now negated, no? There is a boundary somewhere, what
do you do at that boundary?

You'd have to ensure the entire execution path, every developer,
everywhere is using these same types, 100% contained, no?

Ian Collins

unread,
Feb 5, 2015, 2:38:50 PM2/5/15
to
Christopher Pisz wrote:
> On 2/5/2015 1:24 AM, Ian Collins wrote:
>> Christopher Pisz wrote:
>>>
>>> It's a C problem because those who program C-style are the ones who
>>> use it. It's on my list of craptastic habits I run across on the job
>>> from bug generating programmers...that and the header says so.
>>
>> To paraphrase Flibble: utter bollocks mate.
>>
>> A good percentage of the C++ code I write relies on fixed with types and
>> the code is about as far from "C-style" as you can get. Maybe you are
>> lucky enough to code in a bubble that doesn't interface with the real
>> world. Many of us don't and that most certainly does not make us
>> "C-style" programmers.
>>
>
> Well you aren't most programmers.

I'd consider myself a typical non-windows programmer. Actually I'd go
further: I'm a typical programmer who has to deal with the real world,
both physical and networked.

> You are Ian Collins, guru #2 on the
> newsgroup, and one could have some faith that you know what you are
> doing. That isn't the case with the OP and code like what the OP wrote
> is what I get to fix all day. Where you look at it and say to yourself,
> "What in the hell?!"
>
> All rants aside, what are you going to do when you've used these fixed
> types and have to call code not in your control that returns types that
> aren't fixed? Or when perhaps third party code calls your code?
> The whole point is now negated, no? There is a boundry somewhere, what
> do you do at that boundry?

The boundaries are the hardware and operating system libraries which use
fixed width types extensively.

> You'd have to ensure the entire execution path, every developer,
> everywhere is using these same types, 100% contained, no?

That's a question for the OS developers not their users such as me.

--
Ian Collins

Christopher Pisz

unread,
Feb 5, 2015, 2:41:55 PM2/5/15
to
On 2/5/2015 1:21 PM, Öö Tiib wrote:
> On Thursday, 5 February 2015 19:40:04 UTC+2, Christopher Pisz wrote:
>> On 2/5/2015 11:27 AM, Scott Lurndal wrote:
>>> Mr Flibble <flibbleREM...@i42.co.uk> writes:
>>>> On 04/02/2015 23:41, Christopher Pisz wrote:
>>>>>
>>>>> I hope I never ever have to work on your code.
>>>>
>>>> The feeling is most definitely mutual mate; I wouldn't want you anywhere
>>>> near my code.
>>>
>>> I'll second this. Anyone who changes working existing code to match their
>>> preferences (i.e. the comment about deleting usage of stdint.h types)
>>> should be fired in any respectable development environment.
>>>
>>
>> That's the thing. IT DOESN'T WORK!
>>
>> Anyone who feels the need to use typedefed primitives on a project where
>> the target platform is known, and has been specified in the
>> requirements, and will never change, alongside programmers who will not
>> use them, should get hit by a bus.
>
> What is the point to write app that compiles only for windows 7?
> Ok, 4 years later it will perhaps still run with odd quirks
> in some sort of oldie crap compatibility pane of Windows 10.

You don't write an app for Windows 7. You write it for Windows version X
going forward for a target of y number of years. The size of primitive
types does not change on every single version of Windows. In fact, it
only changes when the _architecture_ changes, in which case you now have
a new target architecture and far far more to consider than the size of
primitive types. We're still sorting out the move from x86 to x64 how
many years later?

In reality, it is usually the case where App X supports customers using
XP through Windows 7. Software has a lifetime. It is supposed to be 10
years. Sadly, people are cheap, and they try to stretch it over 50,
which causes these problems. You can't guarantee your code will work in
50 years no matter how many mantras you follow.


>> Flibble clearly stated he will _always_ use typedefed primitives _everywhere_
>
> So what? It may be even *required* by coding standard:
> http://www.codingstandard.com/rule/7-1-6-use-class-types-or-typedefs-to-abstract-scalar-quantities-and-standard-integer-types/
>
> If the owners of code base have decided compliance with such then
> *nothing* to do. Your code will not pass review and will be reverted.
> If you rant how lot wiser you are then you will be fired as incompetent.


People are regurgitating what I've already said.
If every single developer working on your project uses the same types,
no problem! If you have a coding standard, then that is the case. No
problem!

My problem is, as always, mixing one thing with another and breaking
software as a result.







Martijn Lievaart

unread,
Feb 5, 2015, 4:20:38 PM2/5/15
to
On Wed, 04 Feb 2015 15:31:55 -0600, Christopher Pisz wrote:

> On 2/4/2015 2:33 PM, Robert Wessel wrote:

>> Whether it's an appropriate type to use in this instance is a different
>> question, although the (limited) context does not make it unreasonable
>> (it appears to be something that's headed out to the network, and an
>> array of exact 8-bit bytes seems a reasonable requirement).
>
> If your target platform doesn't specifically guarantee it for you, which
> it usually does. Then I'd still rather see:
>
> if( sizeof(unsigned char) != 1 )
> {
>     // Throw your favorite exception here
>     throw std::exception("Expected one byte unsigned char. Unsupported
>     OS or compiler options.");
> }

You are aware that this will never throw, aren't you? A char (unsigned or
not) has by definition a sizeof 1, and is by (very confusing) definition
one byte. This has nothing to do with the number of bits in a (C) byte.

If you want to check whether a (C) byte has 8 bits, check if CHAR_BIT==8.

M4

Martijn Lievaart

unread,
Feb 5, 2015, 5:55:24 PM2/5/15
to
On Thu, 05 Feb 2015 13:41:42 -0600, Christopher Pisz wrote:

> We're still sorting out the move from x86 to x64 how
> many years later?

Bingo!

M4

Martijn Lievaart

unread,
Feb 5, 2015, 6:00:51 PM2/5/15
to
On Wed, 04 Feb 2015 19:01:11 -0600, Christopher Pisz wrote:

> Windows made a define for BYTE. Some use it, some don't. I'd rather never
> see it and use unsigned char, because unsigned char is a byte and is
> part of the language itself already. Similar to the stdint.h argument.

Well, a BYTE is 8 bits on the Windows implementations I'm familiar with.
An unsigned char can be anything, as long as it is 8 bits or more.

I personally prefer uint8_t if I need 8 bits.

M4

Martijn Lievaart

unread,
Feb 5, 2015, 6:00:52 PM2/5/15
to
On Thu, 05 Feb 2015 00:11:07 +0000, Jens Thoms Toerring wrote:

>
>> Regardless. Do you really prefer to maintain and decipher lines like:
>
>> int4_t void(uint8_t x)
>> {
>> BYTE byte[2] = (wchar_t *)_bstr_t(T_"A");
>> unsigned char y = (byte[1] & x);
>> return (int)y;
>> }
>
>> ?
>
> As I wrote, stupid use of all you can do isn't something I consider a
> good idea. In your example there are several things I haven't seen
> before 'int4_t", "BYTE" and "T_" (in front of the string "A"). So I
> wouldn't like too much having to maintain that code;-)
>
> Just out of curiosity: is this something for Windows? My private
> conviction (call it prejudice if you like) is that this kind of use of
> made-up types and (often completely unnecessary) casts etc. is some
> Windows-thing where people actually haven't been exposed much to the
> existence of different CPU-architectures and types and casts have
> become some kind of cargo-cult thingy, which they believe makes them
> look smarter. I consider every cast in a program as a warning flag that
> there's something going on that needs close attention because the
> author basically says "I know better than the compiler" - which,
> unfortunately, often isn't the case. So, I agree that this is ugly code
> where someone obviously didn't know what he (or, maybe, she) was doing.
> But I still am convinced that the types from stdint.h exist for a good
> reason - it's just that you shouldn't use a knife as a screwdriver unless
> there's no alternative.

Exactly. It's a POS not because of the standard types, but because it's a
POS. Could have done that with the Windows.h types too. Oh, wait, there
are Windows.h types in there.

M4

Ian Collins

unread,
Feb 5, 2015, 6:05:33 PM2/5/15
to
Christopher Pisz wrote:

> We're still sorting out the move from x86 to x64 how
> many years later?

In some parts of the windows world, maybe. In the land of POSIX these
problems were solved over a decade ago. Solved in large part due to the
adoption of fixed width types to remove ambiguities from interfaces.

--
Ian Collins

Jens Thoms Toerring

unread,
Feb 5, 2015, 6:14:56 PM2/5/15
to
Christopher Pisz <nos...@notanaddress.com> wrote:
> > What is the point to write app that compiles only for windows 7?
> > Ok, 4 years later it will perhaps still run with odd quirks
> > in some sort of oldie crap compatibility pane of Windows 10.

> You don't write an app for Windows 7. You write it for Windows version X
> going forward for a target of y number of years. The size of primitive
> types does not change on every single version of Windows. In fact, it
> only changes when the _architecture_ changes, in which case you now have
> a new target architecture and far far more to consider than the size of
> primitive types.

You're really making a mountain out of a molehill. In all my
replies to you I've often used the word "architecture-agnostic"
and similar and I meant it. It's really not too complicated to
write code that works on a lot of architectures out of the box
without any fiddling. One of my pet projects is a Perl module,
which needs some C code (could also be done in C++) for, you
may have guessed it, determining the exact sizes of some variables
that need to be passed to the OS and back. It has come to be
used in the Debian installer, so it needs to work on, at the last
count, 14 different types of systems. And I haven't received a
single complaint about that (there were some other issues but
not related to the C part). Also, besides one idiotic mistake
I made, all my Linux device drivers did work without a hitch
when trying to compile and run them for x64 when they had been
written and tested only on x86. And if I can do it so can you.
But to achieve that I definitely needed to use types for which
I knew exactly what sizes they had, not some "could be that or
alsosomething else" types.

> We're still sorting out the move from x86 to x64 how
> many years later?

I always had the suspicion that much of the trouble Microsoft
had in porting to 64-bit systems (and a lot of related problems)
actually was a result of an attitude similar to yours, i.e.,
labouring under a "will never change" assumption. The only
port of Windows (of the back then rather fresh NT) before the
recent ARM port (which also seems to be a dead end) I am aware
of was for the Alpha processor around 1998 or so, and that ended
rather abruptly. So, everything done on Windows for a long time
was restricted to a single architecture. Contrast that to other
systems like Linux or the different flavours of BSD, which every
other month make it known that there's another supported
architecture. If that "bunch of hobbyists" can do it, can it really
be that hard?

My take on that is that people never having been exposed to
anything but Windows have a severe disadvantage because they
never have seen anything of the world except their own town and
believe that that's all there is. For them it's hard to understand
how one could do anything that's architecture-independent
and thus probably assume that it's rocket science. But it really
isn't; it mostly revolves around a few things to keep in mind.

> In reality, it is usually the case where App X supports customers using
> XP through Windows 7. Software has a lifetime. It is supposed to be 10
> years. Sadly, people are cheap, and they try to stretch it over 50,
> which causes these problems. You can't guarantee your code will work in
> 50 years no matter how many mantras you follow.

Well, of course, I can't guarantee anything like that (and I for
sure will be dead in 50 years;-) but a lot of the stuff I wrote
20 years ago still compiles and works quite fine (without more
than a few minutes of maintenance - the worst problem is when a
library you used back then has gone the way of the dodo and
needs replacing).

> If every single developer working on your project uses the same types,
> no problem! If you have a coding standard, then that is the case. No
> problem!

I understand your point - having to deal with code that is
littered with unnecessary types and casts and whatever is a
PITA, but with your "avoid everything but the most basic
types" approach you're throwing out the baby with the bath.

How many types are ok? In normal life I tend to use the basic
types, i.e. bool, char, the three or four types of ints plus
size_t and float and double. Then, I sometimes need exact sized
types and then a few additional types from POSIX. And, of course,
those forced on me by libraries I need. But that's it. But when
I just look at the "Windows API Data Types" page

https://msdn.microsoft.com/en-us/library/windows/desktop/aa383751%28v=vs.85%29.aspx

I see a lot more types (including a lot of exact sized types) than
I know from <cstdint> - but, of course, with different names;-).
I normally tend to see fewer different types in weeks of work. I
guess when you're working on Windows you'll have to have all of
them memorized and so shouldn't have much trouble with e.g. 'int32_t'
instead of 'INT32'... Perhaps you should make a point of avoiding
those ugly Windows API types and instead get people to use just the
fewer relevant and easier to grok ones from <cstdint>;-)

Best regards, Jens
--
\ Jens Thoms Toerring ___ j...@toerring.de
\__________________________ http://toerring.de

Christopher Pisz

unread,
Feb 5, 2015, 6:46:10 PM2/5/15
to
I'm making the issue because I had the displeasure of debugging this
very problem for months. People on this newsgroup are not understanding
the problem.

The problem is not that you use them or that I don't. It isn't that you
program drivers and I program business apps on Windows. The problem is
what happens when two people work together where one uses it and one
does not, or similarly when you write your interface with it and my code
does not use it, but needs to call your interface or vice versa.

There has to be a boundary somewhere. I already asked Ian, but I'll ask
again. What are you going to do at that boundary? What is someone else
going to do?

>> We're still sorting out the move from x86 to x64 how
>> many years later?
>
> I always had the suspicion that much of the trouble Microsoft
> had in porting to 64-bit systems (and a lot of related problems)
> actually were a result of an attitude similar to yours, i.e.,
> labouring under a "will never change" assumption. The only
> port of Windows (of the back then rather fresh NT) before the
> recent ARM port (which also seems to be a dead end) I am aware
> of was for the Alpha processor around 1998 or so, and that ended
> rather abruptly. So, everything done on Windows for a long time
> was restricted to a single architecture. Contrast that to other
> systems like Linux or the different flavour of BSD, which every
> other month make it known that there's another supported archi-
> tecture. If that "bunch of hobbyists" can do it can it really
> be that hard?

It isn't hard for the developer, it's hard for his dependencies.
I mean really, all we have to do is change 32 bit pointers to 64 bit
pointers, check places where we are doing bitwise operations, and carry
on. That's been my experience.
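
To be concrete, a sketch of the two kinds of spots I mean (the
function is made up for the example):

#include <cstdint>

void port_checklist(int *p)
{
    // Pointers stored in integers: a 32 bit type truncates on x64,
    // so the portable spelling is uintptr_t.
    std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(p);
    (void)bits;

    // Shifts: the literal 1 is an int, so "1 << 40" is undefined no
    // matter how wide the destination is. Widen the operand first.
    std::uint64_t mask = std::uint64_t(1) << 40;
    (void)mask;
}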

It hasn't as much been a problem for my development as it has been
running software in general.

I mean, for example, Visual Studio itself is 32 bit, launches 32 bit
tools to debug your 64 bit projects, then it gives ridiculous error
messages about "bad image format" leading a person to think something is
wrong with his code, when it's a silly msvc tool linking to your 64 bit
dll....

Or the fact that I can't find 64 bit versions of my favorite VSTs when I
want to write some Drum and Bass. :)


> My take on that is that people never having been exposed to
> anything but Windows have a severe disadvantage because they
> never have seen anything of the world except their own town and
> believe that that's all there is. For them it's hard to under-
> stand how one could do anything that's architecture-independent
> and thus probably assume that it's rocket science. But it really
> isn't; it mostly revolves around a few things to keep in mind.

You're right. I've been in my Windows bubble. The employers and their
customers are the factor there. It certainly is hard for me to
understand anything *nix. I can't even wrap my head around why people
want to use text editors on the command line and have to memorize 125
different keyboard shortcuts to get the same thing done I can with a
mouse click. But I can accept that is their strange world and it must be
there for a good reason.

I did play with Red Hat for 2 years and got a small taste, but it
certainly wasn't fun.

>> In reality, it is usually the case where App X supports customers using
>> XP through Windows 7. Software has a lifetime. It is supposed to be 10
>> years. Sadly, people are cheap, and they try to stretch it over 50,
>> which causes these problems. You can't guarantee your code will work in
>> 50 years no matter how many mantras you follow.
>
> Well, of course, I can't guarantee anything like that (and I for
> sure will be dead in 50 years;-) but a lot of the stuff I wrote
> 20 years ago still compiles and works quite fine (without more
> than a few minutes of maintenance - the worst problem is when a
> library you used back then has gone the way of the dodo and
> needs replacing).
>
>> If every single developer working on your project uses the same types.
>> No problem! If you have a coding standard, then that is the case. No
>> problem!
>
> I understand your point - having to deal with code that is
> littered with unnecessary types and casts and whatever is a
> PITA, but with your "avoid everything but the most basic
> types" approach you're throwing out the baby with the bath.

Then we can agree. Don't use typedefed types without reason, and I won't
delete them until I've heard your reason.

But the question about what to do at the boundary still remains. I am
sincere in seeking the answer to this. None of the posts in this thread
are addressing that issue.

Do you want to strictly rely on compiler warnings about truncation?
What about the reverse case, where programmer X assigned your smaller
type to his larger, but someone wanted to check bit #x for some special
meaning.

You can blame developer A or B, but the code still won't work as expected.
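
Here is a sketch of the failure mode I mean (all the names are
invented for the example):

#include <cstdint>

// Developer A's interface uses a plain long; that is 64 bits on
// x86-64 Linux but 32 bits on 64 bit Windows.
long get_status() { return 0L; }   // stand-in definition

void check()
{
    // Developer B squeezes it into a fixed 32 bit type...
    std::int32_t status = static_cast<std::int32_t>(get_status());

    // ...and then tests a bit for some special meaning. Any meaning
    // carried above bit 31 was silently discarded before this line.
    bool special = (status & (std::int32_t(1) << 30)) != 0;
    (void)special;
}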

> How many types are ok? In normal life I tend to use the basic
> types, i.e. bool, char, the three or four types of ints plus
> size_t and float and double. Then, I sometimes need exact sized
> types and then a few additional types from POSIX. And, of course,
> those forced on me by libraries I need. But that's it. But when
> I just look at the "Windows API Data Types" page
>
> https://msdn.microsoft.com/en-us/library/windows/desktop/aa383751%28v=vs.85%29.aspx
>
> I see a lot more types (including a lot of exact sized types) than
> I know from <cstdint> - but, of course, with different names;-).
> I normally tend to see fewer different types in weeks of work. I
> guess when you're working on Windows you'll have to have all of
> them memorized and so shouldn't have much trouble with e.g. 'int32_t'
> instead of 'INT32'... Perhaps you should make a point of avoiding
> those ugly Windows API types and instead get people to use just the
> fewer relevant and easier to grok ones from <cstdint>;-)
>
> Best regards, Jens
>

The windows types poop is largely why I have issue with it. This idea of
using these typedef types seems like it is the same idea they had when
they screwed everything up. It has made life hell for years and pretty
much everyone thinks it was a horrible thing to do. I try, like I
believe any good programmer should, to tuck anything with Windows
specific types away and hide that from the main source.

I can, however, rely on the WORD or BYTE or whatever Windows type poo poo
the Windows API wants me to use being a certain size - MSDN will tell
me what it is - and safely assign it to my corresponding C++ primitive. But
sure, I am working on Windows only, and expect my software to run on x86
only, for now.




Robert Wessel

unread,
Feb 5, 2015, 8:32:23 PM2/5/15
to
I will be astonished if unsigned char is ever not an 8-bit type on
Windows.

Ian Collins

unread,
Feb 5, 2015, 8:32:27 PM2/5/15
to
Christopher Pisz wrote:
>
> There has to be a boundary somewhere. I already asked Ian, but I'll ask
> again. What are you going to do at that boundary? What is someone else
> going to do?

I already answered - the platform library interfaces use fixed width and
other typedef types. System data structures use fixed width types. The
standard library uses typedef types.
--
Ian Collins

Robert Wessel

unread,
Feb 5, 2015, 8:43:15 PM2/5/15
to
On 5 Feb 2015 23:14:44 GMT, j...@toerring.de (Jens Thoms Toerring)
wrote:

>I always had the suspicion that much of the trouble Microsoft
>had in porting to 64-bit systems (and a lot of related problems)
>actually were a result of an attitude similar to yours, i.e.,
>labouring under a "will never change" assumption. The only
>port of Windows (of the back then rather fresh NT) before the
>recent ARM port (which also seems to be a dead end) I am aware
>of was for the Alpha processor around 1998 or so, and that ended
>rather abruptly.


NT shipped on day one on MIPS and x86. Alpha and PPC were added,
although PPC was pretty short-lived, and MIPS died soon thereafter.
Windows was also ported to IPF fairly quickly, and that died only
recently. Something called "Windows" has been on ARM for quite a
while in the Windows CE branch of things, the "real" Windows on ARM is
more recent.


>So, everything done on Windows for a long time
>was restricted to a single architecture.


Most Windows software developers (and users) have ignored everything
but x86-32, x86-64 and ARM. So in practice, yes, Windows was a
roughly one platform OS for a long time.

And certainly anything in the Win16 or Win9x/Me branches of the world
was x86 only. Of the Win16 versions, only Win3.11 actually required a
386, Win3.1 would run on a 286 (and 386+), and Win3.0 in real mode
(and 286 and 386+ modes). Earlier versions were approximately real
mode, although some came with a DOS extender that required a 386.

Martijn Lievaart

unread,
Feb 6, 2015, 2:40:43 AM2/6/15
to
Why depend on that when it's completely unnecessary to do so?

M4

-- Haven't you heard about Win4DSP yet :-) --

David Brown

unread,
Feb 6, 2015, 2:56:55 AM2/6/15
to
You would not be alone in your astonishment. However, code - and to a
greater extent, habits - move around. It is better to use clear and
accurate coding rather than sloppy style, even if you know the sloppy
style works for the program you are working on at the time. Maybe your
code will be re-used later, or you will work on a different system at a
later time. "unsigned char" is usually the same as uint8_t, but the
principle applies more widely to other assumptions that are more often
wrong (such as assuming the signedness of plain char, or the size of
long ints).

Note that if you are using a Windows API that is defined using type BYTE
for a specifically 8-bit unsigned integer, then you have a choice of
good practices - you can use BYTE to be consistent with the OS and API,
or uint8_t to be consistent with the language standards and to be
explicit about your type requirements.

David Brown

unread,
Feb 6, 2015, 3:14:21 AM2/6/15
to
On 05/02/15 20:29, Christopher Pisz wrote:
> On 2/5/2015 1:24 AM, Ian Collins wrote:
>> Christopher Pisz wrote:
>>>
>>> It's a C problem because those who program C-style are the ones whom
>>> use it. It's on my list of craptastic habits I run across on the job
>>> from bug generating programmers...that and the header says so.
>>
>> To paraphrase Flibble: utter bollocks mate.
>>
>> A good percentage of the C++ code I write relies on fixed width types and
>> the code is about as far from "C-style" as you can get. Maybe you are
>> lucky enough to code in a bubble that doesn't interface with the real
>> world. Many of us don't and that most certainly does not make us
>> "C-style" programmers.
>>
>
> Well you aren't most programmers. You are Ian Collins, guru #2 on the
> newsgroup, and one could have some faith that you know what you are
> doing.

Then you would do well to listen to him and his advice, rather than
arguing that because the average windows programmers codes badly, you
should do so yourself.

> That isn't the case with the OP and code like what the OP wrote
> is what I get to fix all day. Where you look at it and say to yourself,
> "What in the hell?!"

The OP is a newbie who was confused and asking for help (I mean no
disrespect to the OP - we all start at the beginning). No one - except
teachers and helpful newsgroup posters - should be fixing code written
like that, because code written like that never makes it far. You
picked on a single, irrelevant aspect of his code - the use of "uint8_t"
- which you totally misunderstood, blamed on "C" programmers, and
declared it to be a great evil and the root of all coding problems.

The OP's code snippet had many errors and misunderstandings - the use of
uint8_t was not one of them.

>
> All rants aside, what are you going to do when you've used these fixed
> types and have to call code not in your control that returns types that
> aren't fixed? Or when perhaps third party code calls your code?
> The whole point is now negated, no? There is a boundary somewhere, what
> do you do at that boundary?

Boundaries should /always/ use fixed size and clearly defined types. If
they don't, they are not boundaries.

If a customer pays me to interface with code that doesn't have a clear
interface, then they must first pay me to create that clear interface.
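
By "clear" I mean specified down to the bit. A sketch, with the
fields invented for the example:

#include <cstdint>

// A record crossing a boundary (file, socket, shared memory) is
// defined independently of any particular compiler or platform.
struct WireRecord
{
    std::uint32_t id;         // little-endian on the wire
    std::uint16_t flags;
    std::uint8_t  version;
    std::uint8_t  reserved;   // explicit padding, always zero
};

static_assert(sizeof(WireRecord) == 8,
              "the record layout must not contain implicit padding");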

>
> You'd have to ensure the entire execution path, every developer,
> everywhere is using these same types, 100% contained, no?
>

In the embedded programming world, yes, that is the case. But then,
embedded programmers write code that has to work, and keep working - we
don't get to pop up error boxes or install updates and service packs.

David Brown

unread,
Feb 6, 2015, 3:25:58 AM2/6/15
to
On 05/02/15 20:41, Christopher Pisz wrote:
> On 2/5/2015 1:21 PM, Öö Tiib wrote:

>> What is the point to write app that compiles only for windows 7?
>> Ok, 4 years later it will perhaps still run with odd quirks
>> in some sort of oldie crap compatibility pane of Windows 10.
>
> You don't write an app for Windows 7. You write it for Windows version X
> going forward for a target of y number of years. The size of primitive
> types does not change on every single version of Windows. In fact, it
> only changes when the _architecture_ changes, in which case you now have
> a new target architecture and far far more to consider then the size of
> primitive types. We're still sorting out the move from x86 to x64 how
> many years later?
>

No, typical windows programmers usually write apps for the particular
version of windows plus service packs plus fonts, dlls, etc., that they
happen to have on their machines. Then they test on a few related
systems, hope that it works for a wider audience, pray that it works on
the next windows release, are overjoyed if it works on the next
architecture, and are stunned if it can be recompiled natively on other
windows architectures.

In the *nix world, code regularly gets recompiled on different
architectures with different cpus, different primitive type sizes,
different endiannesses, different OS's - all with little extra effort
because the programmer knows the difference between pointless
assumptions, such as the sizes of primitive types, and useful
assumptions, such as posix compliance.


Christian Gollwitzer

unread,
Feb 6, 2015, 3:40:46 AM2/6/15
to
Am 05.02.15 um 20:29 schrieb Christopher Pisz:
> All rants aside, what are you going to do when you've used these fixed
> types and have to call code not in your control that returns types that
> aren't fixed? Or when perhaps third party code calls your code?
> The whole point is now negated, no? There is a boundary somewhere, what
> do you do at that boundary?

There are several different types of boundaries that you may hit. Nobody
here has given the advice to store a HWND in a uint64_t, even if that
would work on all current Windows platforms. A HWND is an opaque type to
the programmer and could just as well be a void* or a struct. But since
there is no point in adding two HWNDs together, storing them in a file or
outputting them to the screen (debuggers aside), no one would ever feel
the need to convert them into an integer.

OTOH, there are boundaries across different computers and media. For
instance, reading binary file formats (image files, MATLAB as explained
above, HDF, SQLite), communicating over the network, calling CORBA,
loading a plugin to your program, all that requires you to specify the
interface in an unambigous way and often to the level of bits. Well yes,
there are libraries to do most of this stuff, but still they could
return a type to you at runtime. For instance, in TIFF you can store
1,2,4,8 bit signed and unsigned integers or 32bit and 64 bit floats (aka
double), maybe even other types. You can ask the libtiff library to
convert this to 8bit RGB for you, but if you want to do something else
then just previewing the image, you need to cope with that variety. And
having int16_t & friends comes in very handy here.
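
For illustration, a sketch of the kind of dispatch I mean - the enum
and the helper are invented, not libtiff's actual API:

#include <cstddef>
#include <cstdint>

enum class SampleType { U8, I16, F32 };   // discovered at runtime

double sum_samples(const void *raw, std::size_t count, SampleType t)
{
    double sum = 0.0;
    switch (t) {
    case SampleType::U8:
        for (std::size_t i = 0; i < count; ++i)
            sum += static_cast<const std::uint8_t *>(raw)[i];
        break;
    case SampleType::I16:
        for (std::size_t i = 0; i < count; ++i)
            sum += static_cast<const std::int16_t *>(raw)[i];
        break;
    case SampleType::F32:
        for (std::size_t i = 0; i < count; ++i)
            sum += static_cast<const float *>(raw)[i];
        break;
    }
    return sum;
}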

The real problem, IMHO, is type casts. If you don't have a very good
reason for a cast, you're doing something wrong.

Christian

David Brown

unread,
Feb 6, 2015, 3:43:47 AM2/6/15
to
On 06/02/15 02:43, Robert Wessel wrote:
> On 5 Feb 2015 23:14:44 GMT, j...@toerring.de (Jens Thoms Toerring)
> wrote:
>
>> I always had the suspicion that much of the trouble Microsoft
>> had in porting to 64-bit systems (and a lot of related problems)
>> actually were a result of an attitude similar to yours, i.e.,
>> labouring under a "will never change" assumption. The only
>> port of Windows (of the back then rather fresh NT) before the
>> recent ARM port (which also seems to be a dead end) I am aware
>> of was for the Alpha processor around 1998 or so, and that ended
>> rather abruptly.
>
>
> NT shipped on day one on MIPS and x86. Alpha and PPC were added,
> although PPC was pretty short-lived, and MIPS died soon thereafter.

I believe that on MIPS, Alpha and PPC, the windows ports were all 32-bit
- despite all three cpus having 64-bit versions as standard at the time.
The ports also used the cpus in little-endian mode, although the PPC at
least is more efficient in big-endian mode.

And while the Windows OS itself worked on these systems, applications
and drivers were almost non-existent, because windows programmers would
not or could not think about portability. Most were struggling with the
move to 32-bit x86 at the time, nearly a decade after 32-bit x86 had
become standard on PC's.

> Windows was also ported to IPF fairly quickly, and that died only
> recently.

The problems with Windows on the "Itanic" are also partly due to the
outstandingly bad processor - MS and Windows programmers are not to
blame for the chip being orders of magnitude hotter, costlier and slower
than other cpus.

> Something called "Windows" has been on ARM for quite a
> while in the Windows CE branch of things, the "real" Windows on ARM is
> more recent.

And Windows on ARM ("Windows RT") has just been killed off too. Though
if ARM servers take off, maybe we will see a Windows server version for ARM.

WinCE supported a number of other architectures too, not just ARM - but
it has never been successful or popular, and has been a continuous financial
drain for MS.

Then there is Windows Phone, which runs on ARM - and has ambitions to
hopefully become a meaningful minor player in the mobile market.

Average windows developers do not make code for any non-x86 systems,
however.

David Brown

unread,
Feb 6, 2015, 4:01:48 AM2/6/15
to
On 06/02/15 00:45, Christopher Pisz wrote:
> On 2/5/2015 5:14 PM, Jens Thoms Toerring wrote:
>> Christopher Pisz <nos...@notanaddress.com> wrote:

>
> I'm making the issue because I had the displeasure of debugging this
> very problem for months. People on this newsgroup are not understanding
> the problem.

I expect that most people in this newsgroup don't have to deal with
crappy Windows programmers writing crappy code on a daily basis - and
you genuinely have my sympathies for having to work with this sort of
thing. I have seldom had to deal with Windows C++ code, but I have
sometimes had to work with other people's badly-written code (including
the problems caused by their use of primitive types and assumptions
about their sizes), and know how unpleasant it can be.

If you had been coding properly, using types like uint32_t, uintptr_t,
applying sizeof() to types, etc., then there would be /nothing/ to change.
That is far better than going through the code and hoping you spot any
undocumented, unwarranted assumptions about sizes.
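
A small sketch of the difference - and it echoes the memcpy question
that started this thread:

#include <cstdint>
#include <cstring>

// buf must hold at least sizeof(std::uintptr_t) bytes.
void store_pointer(unsigned char *buf, const void *p)
{
    std::uintptr_t v = reinterpret_cast<std::uintptr_t>(p);
    std::memcpy(buf, &v, sizeof v);   // 4 bytes on x86, 8 on x64 -
                                      // nothing to edit when porting
}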

>
> It hasn't as much been a problem for my development as it has been
> running software in general.
>
> I mean, for example, Visual Studio itself is 32 bit, launches 32 bit
> tools to debug your 64 bit projects, then it gives ridiculous error
> messages about "bad image format" leading a person to think something is
> wrong with his code, when it's a silly msvc tool linking to your 64 bit
> dll....
>

Again, you have my sympathies for having to work with such crap tools.
In the *nix world, it is common for tools to work the same whether they
are compiled as 32-bit or 64-bit versions, and for development tools to
happily work with code for either architecture size (or
cross-development for completely different targets).

> Or the fact that I can't find 64 bit versions of my favorite VSTs when I
> want to write some Drum and Bass. :)
>
>
>> My take on that is that people never having been exposed to
>> anything but Windows have a severe disadvantage because they
>> never have seen anything of the world except their own town and
>> believe that that's all there is. For them it's hard to under-
>> stand how one could do anything that's architecture-independent
>> and thus probably assume that it's rocket science. But it really
>> isn't; it mostly revolves around a few things to keep in mind.
>
> You're right. I've been in my Windows bubble. The employers and their
> customers are the factor there. It certainly is hard for me to
> understand anything *nix. I can't even wrap my head around why people
> want to use text editors on the command line and have to memorize 125
> different keyboard shortcuts to get the same thing done I can with a
> mouse click. But I can accept that is their strange world and it must be
> there for a good reason.

Um, you do know that *nix users have been using GUIs for three decades
at least? Experienced *nix users often do things from the command line
- when it is easier and faster to do so. (Most Windows users don't,
because the command line "shell" in windows is so limited and there are
no command-line utilities to speak of. More serious Windows users make
use of msys or cygwin ports of *nix command line utilities.)

I thought it was really funny that Windows 10 is making a big thing
about their fantastic new "invention" of multiple virtual workspaces -
something *nix users have been using heavily for 25 years.

But this is perhaps straying from the topic of the thread...

>
> Do you want to strictly rely on compiler warnings about truncation?
> What about the reverse case, where programmer X assigned your smaller
> type to his larger, but someone wanted to check bit #x for some special
> meaning.
>
> You can blame developer A or B, but the code still won't work as expected.

Blame the developer who failed to use properly sized types - and make sure
the price tag on your bill reflects that.

>
>> How many types are ok? In normal life I tend to use the basic
>> types, i.e. bool, char, the three or four types of ints plus
>> size_t and float and double. Then, I sometimes need exact sized
>> types and then a few additional types from POSIX. And, of course,
>> those forced on me by libraries I need. But that's it. But when
>> I just look at the "Windows API Data Types" page
>>
>> https://msdn.microsoft.com/en-us/library/windows/desktop/aa383751%28v=vs.85%29.aspx
>>
>>
>> I see a lot more types (including a lot of exact sized types) than
>> I know from <cstdint> - but, of course, with different names;-).
>> I normally tend to see fewer different types in weeks of work. I
>> guess when you're working on Windows you'll have to have all of
>> them memorized and so shouldn't have much trouble with e.g. 'int32_t'
>> instead of 'INT32'... Perhaps you should make a point of avoiding
>> those ugly Windows API types and instead get people to use just the
>> fewer relevant and easier to grok ones from <cstdint>;-)
>>
>> Best regards, Jens
>>
>
> The windows types poop is largely why I have issue with it. This idea of
> using these typedef types seems like it is the same idea they had when
> they screwed everything up. It has made life hell for years and pretty
> much everyone thinks it was a horrible thing to do. I try, like I
> believe any good programmer should, to tuck anything with Windows
> specific types away and hide that from the main source.

So because MS made a dog's breakfast out of their types, you think that
/all/ types are bad?

Robert Wessel

unread,
Feb 6, 2015, 4:27:55 AM2/6/15
to
On Fri, 06 Feb 2015 09:43:37 +0100, David Brown
<david...@hesbynett.no> wrote:

>On 06/02/15 02:43, Robert Wessel wrote:
>> On 5 Feb 2015 23:14:44 GMT, j...@toerring.de (Jens Thoms Toerring)
>> wrote:
>>
>>> I always had the suspicion that much of the trouble Microsoft
>>> had in porting to 64-bit systems (and a lot of related problems)
>>> actually were a result of an attitude similar to yours, i.e.,
>>> labouring under a "will never change" assumption. The only
>>> port of Windows (of the back then rather fresh NT) before the
>>> recent ARM port (which also seems to be a dead end) I am aware
>>> of was for the Alpha processor around 1998 or so, and that ended
>>> rather abruptly.
>>
>>
>> NT shipped on day one on MIPS and x86. Alpha and PPC were added,
>> although PPC was pretty short-lived, and MIPS died soon thereafter.
>
>I believe that on MIPS, Alpha and PPC, the windows ports were all 32-bit
>- despite all three cpus having 64-bit versions as standard at the time.
> The ports also used the cpus in little-endian mode, although the PPC at
>least is more efficient in big-endian mode.


All of the released MIPS, PPC and Alpha versions were 32 bit. The
R4000, with the MIPS III 64-bit ISA, didn't arrive until 1992, which
would likely have been far too late to be an issue for the initial WinNT
release in 1993.

Almost all of the 64-bit port was actually done on Alpha, although the
platform died* before that was ever released, and IPF became the first
64-bit release of Windows.


*For Windows - Alpha lived on for quite a while with other OSs after
DEC and MS parted ways.


>And while the Windows OS itself worked on these systems, applications
>and drivers were almost non-existent, because windows programmers would
>not or could not think about portability. Most were struggling with the
>move to 32-bit x86 at the time, nearly a decade after 32-bit x86 had
>become standard on PC's.


Yep. And the problem was circular. The platforms were not that
compelling - a bit maybe in raw performance at first, but not really in
price/performance - and they lacked software, so very few customers
existed. So nobody bothered to port, even though porting was really not
too bad (we made it halfway through a couple of ports ourselves). Then
x86 pretty much closed the performance gap, while maintaining a solid
cost advantage, and there simply was no point.

In some ways MS doesn't get enough credit for its effort to be on
multiple platforms; it's not really MS's fault if all the platforms
mostly sucked compared to x86 for one reason or another. And MS did
make an effort. For the most part the MIPS, PPC, Alpha and IPF
releases were full featured, although the later IPF releases did pare
that back a fair bit. They did fall short on drivers and some
applications (I don't know if a full version of Office was ever
available as a non-x86 build, but partial ones were, and the x86
versions ran on IPF).


>> Windows was also ported to IPF fairly quickly, and that died only
>> recently.
>
>The problems with Windows on the "Itanic" are also partly due to the
>outstandingly bad processor - MS and Windows programmers are not to
>blame for the chip being orders of magnitude hotter, costlier and slower
>than other cpus.


Well, IPF (at least ignoring the execrable Merced) was never great,
but it was never "orders of magnitude" hotter or slower, and usually
not nearly that much more expensive either, at least for machines of
comparable size. IOW, an 8S IPF box was not nearly that much more
expensive than an 8S x86 box, but you basically had no workstations or
small servers to choose from.

There was a while when SQL Server on IPF was a nice little niche
market for MS. All the extra memory (remember that for years
IPF was the only 64-bit version of Windows), plus the general
availability* of fairly large, high-RAS boxes (even if the individual
CPUs were not really competitive with x86s), made it a pretty decent
high end platform for SQL Server. In fact SQL Server was more-or-less
the last supported application for Windows on IPF.


*Even if mostly from a single vendor.

Robert Wessel

unread,
Feb 6, 2015, 4:29:34 AM2/6/15
to
On Fri, 06 Feb 2015 08:56:44 +0100, David Brown
<david...@hesbynett.no> wrote:

>On 06/02/15 02:32, Robert Wessel wrote:
>> On Thu, 5 Feb 2015 23:59:26 +0100, Martijn Lievaart
>> <m...@rtij.nl.invlalid> wrote:
>>
>>> On Wed, 04 Feb 2015 19:01:11 -0600, Christopher Pisz wrote:
>>>
>>>> Windows made a define for BYTE. Some use it some don't. I'd rather never
>>>> see it and use unsigned char, because unsigned char is a byte and is
>>>> part of the language itself already. Similar to the stdint.h argument.
>>>
>>> Well, a BYTE is 8 bits on the Windows implementations I'm familiar with.
>>> An unsigned char can be anything, as long as it is 8 bits or more.
>>>
>>> I personally prefer uint8_t if I need 8 bits.
>>
>>
>> I will be astonished if unsigned char is ever not an 8-bit type on
>> Windows.
>>
>
>You would not be alone in your astonishment. However, code - and to a
>greater extent, habits - move around. It is better to use clear and
>accurate coding rather than sloppy style, even if you know the sloppy
>style works for the program you are working on at the time. Maybe your
>code will be re-used later, or you will work on a different system at a
>later time. "unsigned char" is usually the same as uint8_t, but the
>principle applies more widely to other assumptions that are more often
>wrong (such as assuming the signedness of plain char, or the size of
>long ints).


I certainly don't disagree, but I thought the question was in the
context of an application you know will always run on Windows.

David Brown

unread,
Feb 6, 2015, 5:33:46 AM2/6/15
to
That was the context - Christopher seems convinced that everything he
ever does will only ever be in the context of 32-bit Windows on x86.
However, there are a few points here:

1. Even in the context of Windows, things can change with different
architectures. Some type sizes changed with 64-bit windows - and it is
not impossible that Windows on ARM will become a realistic choice one
day (though type sizes are likely to match Windows on x86, there may be
other subtle changes).

2. Even as a Windows programmer, you don't know that your code will only
ever run on that platform. Some parts of the code will clearly be
Windows-specific, but other parts could be reused.

3. Even as a Windows programmer, you don't know that you will always be
a Windows programmer. You may change to working on different systems,
and it is better to learn good habits early rather than having to
unlearn the bad habits common in the Windows world.

4. Even as a Windows programmer, you may have to interact with
well-written code from other places, and deal with properly typed code.
It's good to be familiar with the standards of the language you claim
to use professionally.

Scott Lurndal

unread,
Feb 6, 2015, 10:40:55 AM2/6/15
to
That may be true for consumer grade stuff in the Windows space,
but Linux has been happily x86_64 for a decade now.

Christopher Pisz

unread,
Feb 6, 2015, 11:01:22 AM2/6/15
to
Ok, if I understand you correctly, then you are using these fixed width
types in code that the operating system calls. You are leaving it to the
operating system to deal with it. On the other side of the operating
system, I am calling libraries that, in turn, call the operating system,
and I am using int rather than int32_t justifiably. Is that correct?

If I do use int32_t in a public interface on my side of the operating
system, then haven't I created another boundary between types that
someone somewhere has to deal with if their code uses the regular
primitive types like int? So, is it your opinion that I shouldn't or
that I should?

Let's say in fictional theory Windows has some API with a function

void DoSomeSecurityThing(int ID);

and I need to call it from my function

void MySecurityThing(int32_t userid)
{
    // Call OS here
}

What is it that the posters in this thread are proposing be done?

Since we are talking fiction here, let's say the same situation occurred
in *nix. Are we saying that it wouldn't and that *nix would give you
APIs using int32_t instead?

I've never seen any library give me functions, structures, or constants
using those types. Are we saying it is because I live in a Windows
bubble, or because I am working on the opposite side of the operating
system from people who do?


Christopher Pisz

unread,
Feb 6, 2015, 11:04:25 AM2/6/15
to
I dunno how we define "happily has been".

I seem to recall less than 5 years ago, I tried to install 64 bit
drivers on a 64 bit install of Red Hat Linux and read all manner of
"this doesn't work yet" articles, much like the problems other 64 bit
operating systems had to go through.

I guess we could say Windows XP 64 bit worked too, if all you wanted to do
was install it.








Scott Lurndal

unread,
Feb 6, 2015, 11:55:08 AM2/6/15
to
Christopher Pisz <nos...@notanaddress.com> writes:
>On 2/6/2015 9:40 AM, Scott Lurndal wrote:
>> Martijn Lievaart <m...@rtij.nl.invlalid> writes:
>>> On Thu, 05 Feb 2015 13:41:42 -0600, Christopher Pisz wrote:
>>>
>>>> We're still sorting out the move from x86 to x64 how
>>>> many years later?
>>>
>>> Bingo!
>>
>> That may be true for consumer grade stuff in the Windows space,
>> but Linux has been happily x86_64 for a decade now.
>
>I dunno how we define "happily has been"
>
>I seem to recall less than 5 years ago, I tried to install 64 bit
>drivers on a 64 bit install of Red Hat Linux and read all manner of
>"this doesn't work yet" articles, much like the problems other 64 bit
>operating systems had to go through.

"I seem to recall"?

I've been using, and developing, 64-bit linux distributions since
2004. There have been no issues vis-a-vis I/O drivers in
this time other than the standard issues surrounding proprietary
binary kernel drivers [**] (which are architecture independent). Also
did quite a bit of 64-bit Unix OS work on IRIX in the 90's which,
along with the Alpha work, set the stage for 64-bit x86 linux
in the 2000's.

Between the quality of the programmatic API (which I was involved
in the definition of during the decade of the 90's[*]) and the
binary processor-specific ABI (psABI), and the general use of
the correct types in code designed to execute on multiple
architectures (as most unix/linux code has been), running 64-bit
applications (and legacy 32-bit applications) has been pretty much
trouble free.

[*] 88Open, X/Open, Unix International and the IEEE POSIX.
[**] 32-bit PCI device issues notwithstanding (e.g. SWIOTLB),
which were similar to the old ISA issues vis-a-vis DMA.

Scott Lurndal

unread,
Feb 6, 2015, 11:56:51 AM2/6/15
to
Christopher Pisz <nos...@notanaddress.com> writes:
>On 2/5/2015 7:32 PM, Ian Collins wrote:
>> Christopher Pisz wrote:
>>>
>>> There has to be a boundary somewhere. I already asked Ian, but I'll ask
>>> again. What are you going to do at that boundary? What is someone else
>>> going to do?
>>
>> I already answered - the platform library interfaces use fixed width and
>> other typedef types. System data structures use fixed width types. The
>> standard library uses typedef types.
>
>
>Ok, if I understand you correctly, then you are using these fixed width
>types in code that the operating system calls. You are leaving it to the
>operating system to deal with it. On the other side of the operating
>system, I am calling libraries that, in turn, call the operating system,
>and I am using int rather than int32_t justifiably. Is that correct?

It's really simple. Use the defined interface type. If the library
routine header file requires an int, pass an int. If it requires an
unsigned long long, pass an unsigned long long. If it requires a
uint8_t, pass a uint8_t.

That said, int is too often used when the domain doesn't include
negative numbers.

Christopher Pisz

unread,
Feb 6, 2015, 12:14:32 PM2/6/15
to
Ok, so back to where we were from the start. If I never have a
dependency that uses these types, then I should never use them, in your
opinion.

In Flibble's opinion, I should _always_ use them.

In Ian's opinion (I think), I should only use them if the operating
system is calling into my code.

In at least one other opinion, I should use them if I am going to
perform bitwise operations.

In my opinion, I shouldn't be using them at all as long as my target is
Windows x86 and I am not writing hardware drivers, firmware, etc.

I think there were several other opinions as well....










Wouter van Ooijen

unread,
Feb 6, 2015, 12:25:50 PM2/6/15
to
Christopher Pisz schreef op 06-Feb-15 om 6:14 PM:
Of course, take N experts and get N opinions, + at least 1 extra for
free! Now hire an expert to select the best opinion...

Wouter

Mr Flibble

unread,
Feb 6, 2015, 12:45:16 PM2/6/15
to
I would probably translate the value of userid from the type my
algorithm uses (int32_t or whatever) to a value with the type the OS API
expects (int), using some kind of static cast that gives a compiler error
if value truncation could occur with the cast.
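
Something along these lines - a sketch only, similar in spirit to
gsl::narrow, with the helper name my own:

#include <cassert>
#include <type_traits>

// Checked narrowing conversion. This version checks at run time; a
// stricter variant could reject narrowing type pairs outright with a
// static_assert on the sizes involved.
template <typename To, typename From>
To narrow(From v)
{
    static_assert(std::is_integral<To>::value &&
                  std::is_integral<From>::value,
                  "integral types only");
    To r = static_cast<To>(v);
    assert(static_cast<From>(r) == v && "value was truncated");
    return r;
}

// At the boundary:
//     DoSomeSecurityThing(narrow<int>(userid));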

/Flibble

Christopher Pisz

unread,
Feb 6, 2015, 1:32:58 PM2/6/15
to
That's what I thought. I guess it depends on the compiler and settings,
but I've only received warnings. Worse, sometimes the really bad peers
love to disable warnings, or that warning, because they think they know
what they are doing when they don't.

If you can guarantee me a compiler error via the previously mentioned
static assertion, then I suppose I can get on board somewhat, but to me
the issue still remains: when it compiles for one platform but not the
other, you aren't really getting the cross platform-ed-ness you were
after, because the interface to the OS, which is out of your control,
pooped on it.


