No arguments there - C++ is big, and getting bigger.
But static assertions are such an important part of good development
practice (as I see it) that many people have used them heavily long
before they became part of the standard. There are lots of pre-C++11
implementations around (google is your friend) - a simple one like this
works fine for most uses:
#define STATIC_ASSERT_NAME_(line) STATIC_ASSERT_NAME2_(line)
#define STATIC_ASSERT_NAME2_(line) assertion_failed_at_line_##line
#define static_assert(claim, warning) \
    typedef struct { \
        char STATIC_ASSERT_NAME_(__COUNTER__) [(claim) ? 2 : -2]; \
    } STATIC_ASSERT_NAME_(__COUNTER__)
The key difference with the built-in language support for
static_assert is that the error messages are much clearer.
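For example, with the macro a failing check like

static_assert(sizeof(void *) == 4, "this code assumes 32-bit pointers");

is rejected because it declares an array with a negative size, and the
compiler's diagnostic talks about that array rather than quoting the
message string (the check itself is only an illustration).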
> Never had to use it, or went looking for such a mechanism, because,
> again, never needed it. But sure, I come across things all the time,
> especially with C++11, that sound neat.
Static assertions let you make your assumptions explicit, and let the
compiler check those assumptions at zero run-time cost.
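For example (these particular assumptions are only illustrative):

static_assert(sizeof(long) == 8, "this code assumes 64-bit longs");
static_assert(sizeof(int) >= 4, "at least 32-bit ints assumed");

If either assumption stops holding on some new target, the build fails
immediately instead of the code quietly misbehaving at run time.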
>
>> And sizeof(unsigned char) is always 1 - it is guaranteed by the
>> standards, even if a char is 16-bit or anything else. "sizeof" returns
>> the size of something in terms of char, not in terms of 8-bit bytes.
>
> I fully realize this. I was demonstrating how silly the concern was
> about whether or not a char is one byte. I was saying, if you really
> feel the need to check if it is one byte, then do sizeof.
Unfortunately, given all the other mistakes you have been making in
these posts, the natural assumption was that you were making another
one.
> I may have
> misunderstood earlier that the OP wants to check for bits, but I still
> don't see anywhere in his code where the number of bits matters. It sure
> looks like he is concerned with bytes or nibbles, but who knows...his
> code is nowhere near a working example.
No, it does not look like anything of the sort. It looks like he wants
to examine the individual 8-bit bytes that make up the long integer he
has. And when you want to look at 8-bit bytes, uint8_t is the /correct/
type to use - anything else is wrong.
It is true that there were several errors in his code. One of them is
the use of "long unsigned int" instead of "uint64_t" - the size of
"long unsigned int" varies between platforms, and of the common PC
systems only 64-bit Linux (and other 64-bit *nix) has 64-bit longs.
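For example, something along these lines (the names are mine, not his)
pulls out the individual bytes of a 64-bit value, independent of the
platform's "long" size and its endianness:

#include <cstdint>
#include <cstdio>

void print_bytes(uint64_t value)
{
    // Extract the eight 8-bit bytes, least significant first.
    for (int i = 0; i < 8; i++) {
        uint8_t b = static_cast<uint8_t>(value >> (8 * i));
        std::printf("byte %d = 0x%02x\n", i, unsigned(b));
    }
}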
>
> This entire topic has degenerated into edge cases about mutant
> platforms where types can be different lengths and contain different
> ranges. The OP stated no such requirement and is, has, and I bet will
> continue to, write code in this style without any logical reason.
No, it is about being explicit and using the types you want, rather than
following bad Windows programmers' practice of picking a vaguely defined
type that looks like it works today, and ignoring any silent corruption
you will get in the future.
>
> I still say, and always will say that you know your architecture up
> front. In most cases, you have to in order to even compile with the
> correct options.
An increasing proportion of code is used on different platforms. Even
if you stick to the PC world, there are four major targets - Win32,
Win64, Linux32 and Linux64, which can have subtle differences. I agree
that one should not go overboard about portability - for example, code
that accesses the Windows API will only ever run on Windows. But it is
always better to be clear and explicit about what you are doing - if you
want a type that is 8 bits, or 64 bits, then say so.
The alternative is to say "I want a type to hold this information, but I
don't care about the details" - in C++11, that is written "auto". It is
not written "int".
When you write "int", you are saying "I want a type that can store at
least 16-bit signed integers, and is fast". Arguably you might know
your code will run on at least a 32-bit system, and then it means "at
least 32-bit signed integer" - but it does /not/ mean /exactly/ 32 bits.
"Long int" is worse - on some current and popular systems it is 32
bits, on others it is 64 bits.
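If you need an exact width, <cstdint> lets you say so directly (the
names below are just placeholders):

#include <cstdint>

int32_t      exact32;     // exactly 32 bits, signed, where provided
uint64_t     exact64;     // exactly 64 bits, unsigned
int_fast16_t fast_count;  // "at least 16 bits, and fast" - roughly
                          // what "int" actually promises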
>
> To truly be "platform independent" you have to jump through all manner
> of disgusting hoops and I have yet in 20 years ever come across any real
> life code, no matter how trivial, that was truly platform independent.
Perhaps you live in a little old-fashioned Windows world - in the *nix
world there is a vast array of code that is portable across a wide range
of systems. Many modern programs are portable across Linux, Windows and
Mac systems. And in the embedded world, there is a great deal of code
that can work fine on CPUs from 8-bit to 64-bit, with big- and
little-endian byte ordering.
Of course, portable code like this depends (to a greater or lesser
extent) on non-portable abstraction layers to interact with hardware or
the OS.
And there are usually limits in portability - it is common to have some
basic requirements such as 8-bit chars (though there are real-world
systems with 16-bit and 32-bit chars) or two's complement arithmetic.
Writing code that is portable across systems that fall outside of those
assumptions is often challenging, interesting, and pointless.
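Those basic requirements can themselves be written down as static
assertions near the top of the code, so that a port to an unusual
system fails loudly at compile time (a sketch of the idea):

#include <climits>

static_assert(CHAR_BIT == 8, "this code assumes 8-bit chars");
// On a two's complement machine, -1 has all value bits set, so
// masking it with 3 must give 3.
static_assert((-1 & 3) == 3, "this code assumes two's complement");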
> I
> have however come across several ugly projects littered with #if define
> of the year, do windows #if define of the year, do linux, or #if define
> of the year, do windows version X, but again you know your architecture.
>
> If you are creating something so entirely trivial and amazingly small
> that you never need to call another library or call the OS, then I still
> hold fast to my claim that you can check sizeof at your entry point, and
> as Victor pointed out earlier get the number of bits using numeric limits.
>
> You will, I guarantee, break your project at some point using typedefs
> for primitive types. I've seen it and wasted hours upon hours on it,
> when the target platform was known, singular, and Windows no less.
>
You can guarantee that you will break your project when you use the
/wrong/ types. And this attitude is particularly prevalent amongst
Windows programmers - it's "640K ought to be enough for anybody" all
over again. It means that programs are pointlessly broken when changes
are made or the target /does/ change - such as from Win32 to Win64 -
because some amateur decided that the code was only ever going to run
on Windows, and that you can always store a pointer in a "long int".
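To make that last point concrete (a hypothetical snippet, not anyone's
actual code):

#include <cstdint>

void store_handle(void *p)
{
    // On Win32 this was fine, since "long" and pointers were both
    // 32 bits:
    //     long handle = (long)p;
    // On Win64, "long" is still 32 bits but pointers are 64 bits, so
    // the same line truncates the address (or fails to compile,
    // depending on the compiler and its warning settings).

    // If you really need to hold a pointer in an integer, uintptr_t
    // is defined to be wide enough:
    uintptr_t handle = reinterpret_cast<uintptr_t>(p);
    (void)handle;
}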