
#ifdef with enum member


Frederick Gotham
Jul 20, 2020, 5:32:09 PM
Let me start off with this easy example:

#define MONKEY 5

#ifdef MONKEY

int Func(void) { return MONKEY; }

#else

int Func(void) { return 0; }

#endif


Now instead of MONKEY being a preprocessor macro, let's say it's a member of an anonymous enum, like so:


enum { MONKEY = 5 };

#ifdef MONKEY

int Func(void) { return MONKEY; }

#else

int Func(void) { return 0; }

#endif


Of course this code doesn't work because you can't use #ifdef with an enum member.

So how would I do the above with enable_if and some stuff from Boost?

Would it be something like:

template<class T>
using Monkey_t = decltype(MONKEY);

template<class T = void>
enable_if<boost::is_complete_c<Monkey_t>, T> int Func(void)
{
return MONKEY;
}

template<class T = void>
disable_if<boost::is_complete_c<Monkey_t>, T> int Func(void)
{
return 0;
}

I've been playing around with this all day and I'm ready to self-immolate.

Ideally what I am looking for is a very generic way of doing conditional compilation based on whether an expression is or isn't well-formed, something like:

template<class T>
enable_if<is_valid_expression(MONKEY), T> Func(void)
{
return MONKEY;
}

template<class T>
disable_if<is_valid_expression(MONKEY), T> Func(void)
{
return 0;
}


I've looked through the boost documentation for stuff like "has_*", but I don't find it very enlightening.
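For what it's worth, a bare unqualified name like `MONKEY` cannot be probed this way at all: an undeclared identifier inside a template is a hard error, not a substitution failure. What can be probed is membership in a known scope, via the detection idiom. A minimal C++17 sketch (the `Config` struct, `Empty` struct, and `has_monkey` trait are hypothetical names chosen for illustration, not anything from this thread):

```cpp
#include <type_traits>

// Hypothetical scope that may or may not declare MONKEY; a bare
// undeclared identifier cannot be tested with SFINAE, but a member
// of a known scope can be.
struct Config { static constexpr int MONKEY = 5; };
struct Empty {};   // no MONKEY member

// Detection idiom: the partial specialization is chosen only when
// T::MONKEY is a well-formed expression.
template<class T, class = void>
struct has_monkey : std::false_type {};

template<class T>
struct has_monkey<T, std::void_t<decltype(T::MONKEY)>> : std::true_type {};

// Mirrors the two Func definitions above: one body per case, selected
// at compile time.  The discarded branch is never instantiated.
template<class T = Config>
int Func() {
    if constexpr (has_monkey<T>::value)
        return T::MONKEY;
    else
        return 0;
}
```

With these definitions, `Func<Config>()` yields 5 and `Func<Empty>()` yields 0, which is roughly the `enable_if`/`disable_if` pair the post is reaching for, without Boost.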

Alf P. Steinbach
Jul 20, 2020, 7:28:21 PM
On 20.07.2020 23:31, Frederick Gotham wrote:
> Let me start off with this easy example:
>
> #define MONKEY 5
>
> #ifdef MONKEY
>
> int Func(void) { return MONKEY; }
>
> #else
>
> int Func(void) { return 0; }
>
> #endif
>
>
> Now instead of MONKEY being a preprocessor macro, let's say it's a member of an anonymous enum, like so:
>
>
> enum { MONKEY = 5 };
>
> #ifdef MONKEY
>
> int Func(void) { return MONKEY; }
>
> #else
>
> int Func(void) { return 0; }
>
> #endif
>
>
> Of course this code doesn't work because you can't use #ifdef with an enum member.
>
> So how would I do the above with enable_if and some stuff from Boost?

You can't.

You can check whether a header is available, via C++17 `__has_include`.
But that's the extent of C++ introspection for now.

It's a good idea not to write code that depends on whether other code
exists.

The usual way to deal with system dependencies, if that's your actual
problem, is to write two or more system-dependent variants of code that
conform to a common system-independent interface. Then use the interface
in the rest of the code, but link with or include the system-dependent
implementation.

For example, that's how the standard library works.

[snip]

- Alf
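The `__has_include` check mentioned above can be sketched briefly; `<optional>` here is just an arbitrary stand-in for a header that may or may not be present on a given toolchain:

```cpp
// C++17: __has_include reports at preprocessing time whether a header
// can be included.  The nested #if keeps older preprocessors happy.
#if defined(__has_include)
#  if __has_include(<optional>)
#    include <optional>
#    define HAVE_OPTIONAL 1
#  endif
#endif
#ifndef HAVE_OPTIONAL
#  define HAVE_OPTIONAL 0
#endif

// One system-independent interface, two conditionally compiled bodies.
int Value() {
#if HAVE_OPTIONAL
    std::optional<int> v = 5;   // use the feature when available
    return *v;
#else
    return 0;                   // fallback otherwise
#endif
}
```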

Frederick Gotham
Jul 21, 2020, 6:10:16 PM
If the enum is not anonymous, then I think there is a way of determining whether it contains a particular member. I think this would be similar to determining whether a class has a particular data member or member function.

I think this is what the 'has_' functions are for in Boost, but I cannot understand the documentation.

Öö Tiib
Jul 22, 2020, 2:55:08 AM
On Wednesday, 22 July 2020 01:10:16 UTC+3, Frederick Gotham wrote:
> if the enum is not anonymous then I think there is a way of determining if it contains a particular member.

I assume that by "member" or "value" you mean enumerator?
There is no way to enumerate the defined enumerators of an enum. Several
enumerators may have the same value.
As for the values of an enum variable: all values that are valid values
of the underlying integral type of said enum are valid values of the
enum too. Going out of bounds is undefined behavior.

> I think this would be similar to determining whether a class has a particular member data or member function.

Counting and enumerating data members programmatically is perhaps
only possible for aggregate structs. For non-aggregates it
is impossible without an external parser tool.

> I think this is what the 'has_' functions are for in Boost, but I cannot understand the documentation.

A lot of the reflection and meta-programming tricks that are possible
are used in Boost. Still, the result is that under limited
circumstances we get something. We don't have reflection
in C++ unless we program it ourselves into or around our types.

Juha Nieminen
Jul 22, 2020, 3:14:49 AM
Öö Tiib <oot...@hot.ee> wrote:
> A lot of the reflection and meta-programming tricks that are possible
> are used in Boost. Still, the result is that under limited
> circumstances we get something. We don't have reflection
> in C++ unless we program it ourselves into or around our types.

I'm wondering if you are using the correct term there.

According to wikipedia (which as we all know is the supreme authority in
everything) introspection means the ability to examine the type or properties
of an object, while reflection means, in addition, the ability to
manipulate these properties.

It's not immediately clear what it means by that, but I get the impression
that it means, essentially, that with introspection you can check, for
example, if an object has a member function with a given name/signature,
while with reflection you can also *call* that function if it exists
(ie. you won't get a compiler error if it doesn't actually exist, as long
as you don't call it if it doesn't).

Of course all this is from the perspective of runtime
introspection/reflection. The distinction may become a bit fuzzier at
compile time.
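At compile time, the "check, and call only if it exists" half of this is already expressible in C++17. A small sketch (the `to_string` member, `Widget`/`Plain` types, and `Describe` helper are illustrative names, not from the posts above):

```cpp
#include <string>
#include <type_traits>
#include <utility>

struct Widget { std::string to_string() const { return "Widget"; } };
struct Plain {};   // no to_string member

// Detection idiom: true only when obj.to_string() is well-formed.
template<class T, class = void>
struct has_to_string : std::false_type {};

template<class T>
struct has_to_string<T,
    std::void_t<decltype(std::declval<const T&>().to_string())>>
    : std::true_type {};

// Calls to_string() if it exists; no compile error when it doesn't,
// because the discarded if-constexpr branch is never instantiated.
template<class T>
std::string Describe(const T& obj) {
    if constexpr (has_to_string<T>::value)
        return obj.to_string();
    else
        return "<no to_string>";
}
```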

Öö Tiib
Jul 22, 2020, 5:52:35 AM
On Wednesday, 22 July 2020 10:14:49 UTC+3, Juha Nieminen wrote:
> Öö Tiib <oot...@hot.ee> wrote:
> > A lot of the reflection and meta-programming tricks that are possible
> > are used in Boost. Still, the result is that under limited
> > circumstances we get something. We don't have reflection
> > in C++ unless we program it ourselves into or around our types.
>
> I'm wondering if you are using the correct term there.

I meant "reflection" in the sense of a program's capability
to meta-analyse and meta-program itself.

> According to wikipedia (which as we all know is the supreme authority in
> everything) introspection means the ability to examine the type or properties
> of an object, while reflection means, in addition, the ability to
> manipulate these properties.
>
> It's not immediately clear what it means by that, but I get the impression
> that it means, essentially, that with introspection you can check, for
> example, if an object has a member function with a given name/signature,
> while with reflection you can also *call* that function if it exists
> (ie. you won't get a compiler error if it doesn't actually exist, as long
> as you don't call it if it doesn't).

For me, the capability to inspect properties of data types (introspection) is
part of a program's capability to meta-analyse itself. So in that view
introspection is a sub-feature of a sub-feature of reflection.

> Of course all this is from the perspective of runtime
> introspection/reflection. The distinction may become a bit fuzzier at
> compile time.

Oh, indeed it is fuzzy, since different people's definitions of the
terms overlap to a large extent but rarely match perfectly.
For example, for me reflection includes introspection fully, while
for others there is almost a dichotomy between the two. :D

A C++ program can do a great deal of processing at compile time, a big
part of which it is not even allowed to do at run time. For me,
compile-time meta-programming is also a sub-feature of a sub-feature of
reflection.

In my experience other programming languages tend to be far weaker
in that sector of reflection, and Boost uses the majority of the
tricks that C++ provides ... nice or ugly.

Alf P. Steinbach
Jul 22, 2020, 10:15:54 AM
On 22.07.2020 08:54, Öö Tiib wrote:
> On Wednesday, 22 July 2020 01:10:16 UTC+3, Frederick Gotham wrote:
>> if the enum is not anonymous then I think there is a way of determining if it contains a particular member.
>
> I assume that by "member" or "value" you mean enumerator?
> There is no way to enumerate the defined enumerators of an enum. Several
> enumerators may have the same value.
> As for the values of an enum variable: all values that are valid values
> of the underlying integral type of said enum are valid values of the
> enum too. Going out of bounds is undefined behavior.

If by "out of bounds" you mean outside the range of minimum to maximum
enumerator value, then happily no UB any longer. C++17 fixed that.
Because `std::byte`.


[snip]

- Alf

Öö Tiib
Jul 22, 2020, 10:53:52 AM
But how? C++17 itself added a broken std::byte into the language. C++20 is
supposed to fix it. C++20's official publication is still
underway (maybe thanks to the pandemic?).
From C++20 it will be modular arithmetic for unsigned underlying
types and UB for signed underlying types.

Alf P. Steinbach
Jul 22, 2020, 4:19:42 PM
First, it appears I was wrong; I remembered this incorrectly. :-(

And I got it backwards: it's C++17 that is the most unreliable, with UB,
while C++14 and earlier just had unspecified behavior.

Not sure where I got the notion that the introduction of std::byte
encountered the problem so they had to fix it.

The problem with exceeding the range of the defined enumerator values is
for converting to enum type without fixed underlying type. Enum types
with fixed underlying type don't have the problem.

In the standard the two different rules are abstracted up as a single
rule with the differences baked into the notion of “enumeration values”
which for an enum with unspecified underlying type are the values of a
bit field just large enough to contain the enumerator values, but which
for an enum with fixed underlying types are the underlying type values...

Essentially, for this purpose the underlying type of an enum with
unspecified underlying type is assumed to be a minimum bit field.

C++14 §5.2.9/10:
“A value of integral or enumeration type can be explicitly converted to
an enumeration type. The value is unchanged if the original value is
within the range of the enumeration values (7.2). Otherwise, the
resulting value is unspecified (and might not be in that range).”

C++17 §8.2.9/10:
“A value of integral or enumeration type can be explicitly converted to
a complete enumeration type. The value is unchanged if the original
value is within the range of the enumeration values (10.2). Otherwise,
the behavior is undefined.”


- Alf (on his way to getting completely senile, it seems)
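The fixed/unfixed distinction described above can be made concrete in a few lines; under the quoted C++17 rule, only the commented-out conversion is undefined:

```cpp
// Unspecified underlying type: the "enumeration values" are those of a
// bit-field just wide enough for the enumerators (here 2 bits, 0..3).
enum Small { A = 0, B = 3 };

// Fixed underlying type: every value of unsigned char (0..255) is a
// valid value of the enum.
enum Fixed : unsigned char { C = 0 };

Fixed ok = static_cast<Fixed>(200);    // well-defined in C++17
// Small bad = static_cast<Small>(200); // C++17: undefined behavior
```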

Öö Tiib
Jul 22, 2020, 7:34:06 PM
Oh, it happens ... my memory is also far from what it was, say, 25 years
ago. But C++17 must be the worst C++ standard. Getting even byte wrong?
If it weren't the standard for one of my favorite tools I would consider
it comical. Being annoyed somehow helps me remember. :D

> The problem with exceeding the range of the defined enumerator values is
> for converting to enum type without fixed underlying type. Enum types
> with fixed underlying type don't have the problem.
>
> In the standard the two different rules are abstracted up as a single
> rule with the differences baked into the notion of “enumeration values”
> which for an enum with unspecified underlying type are the values of a
> bit field just large enough to contain the enumerator values, but which
> for an enum with fixed underlying types are the underlying type values...
>
> Essentially, for this purpose the underlying type of an enum with
> unspecified underlying type is assumed to be a minimum bit field.

Such a bit-field would make perfect sense if C++ indicated any desire
to support architectures with arbitrary bit widths / single-bit
addressing.

A typical 64 bits can precisely address 2 exabytes at single-bit
granularity, so it is conceivable. Also, LLVM actually contains
developments in the direction of supporting arbitrary bit widths,
perhaps for FPGAs or the like.

But in any case it feels like pointless pessimization, as the
implementation has to select some concrete underlying type in its ABI
even for an enum without a user-specified underlying type. So why not
just say that there is an implementation-specified underlying type?


Daniel P
Jul 23, 2020, 10:07:43 AM
On Wednesday, July 22, 2020 at 10:53:52 AM UTC-4, Öö Tiib wrote:

> C++17 itself added a broken std::byte into the language. C++20 is
> supposed to fix it.

C++ already had two byte types too many - char, signed char,
and unsigned char. For what reason then did we need another
non-integral one? We can't even write

std::vector<std::byte> v = {0x66,0x6f,0x6f};

It has to be

std::vector<std::byte> v = {std::byte{0x66},std::byte{0x6f},
std::byte{0x6f}};

Daniel

Juha Nieminen
Jul 24, 2020, 1:52:00 AM
Daniel P <daniel...@gmail.com> wrote:
> C++ already had two byte types too many - char, signed char,
> and unsigned char.

I don't know what the motivation was originally to have three distinct char
types in C (from which C++ "inherited" them), but in a way it actually
makes sense, perhaps serendipitously.

The type 'char' is supposed to be the most efficient byte type for the
target platform, either a signed one or an unsigned one. Most usually
it should be preferred especially when dealing with strings, as long
as one is aware that it can be either signed or unsigned, and doesn't
make any assumptions either way.

Curiously, this is not just theoretical. There is a *modern* very
concrete example where this has bitten many a developer in the posterior,
with code compiling but working incorrectly, because it wrongly assumes
that 'char' is signed.

Namely in ARM processors (at least the 32-bit ones) it so happens that
an unsigned char is more efficient than a signed one, and thus most
compilers (such as gcc) will use the 'char' type as an unsigned char.
Most notoriously this happens when compiling for the Raspberry Pi (and
probably other ARM-based systems).

Many a C and C++ program out there doesn't work correctly for the Raspi
because it wrongly assumes that 'char' is signed. I have encountered
actual examples.

(Most often this happens because of an `if (c < 0)`, which obviously
always evaluates to false if char is unsigned.)
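The pitfall is easy to reproduce in two lines; a sketch, with a portable alternative (function names are mine, for illustration):

```cpp
// Non-portable: whether this can ever return true depends on the
// target ABI's choice for plain char.  On most ARM ABIs plain char is
// unsigned, so the comparison is always false and this branch of logic
// silently disappears.
bool is_negative_char(char c) {
    return c < 0;
}

// Portable: spell out the signedness when it matters.
bool is_negative_byte(signed char c) {
    return c < 0;
}
```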

Öö Tiib
Jul 24, 2020, 2:29:11 AM
Yes, and there is uint8_t, which is in a certain esoteric way better
than all of those others.

Also, writing your own custom constexpr literal operator for std::byte
will take considerable effort. Plus, if you have large fields of such
binary data, it will also make compilation slow. Plus, if someone looks
at its code, they will likely go ewwwww. Especially if the literal
operator supports all the forms, like:

constexpr auto h = 0x66_b;
constexpr auto d = 102_b;
constexpr auto o = 0146_b;
constexpr auto b = 0b1100110_b;
static_assert(h == d and d == o and o == b);

The idea of making raw bytes safer was actually good, but it resulted
in a fifth inconvenient-to-use thing with bear traps attached.
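For reference, a minimal version of such a literal operator, without the compile-time range checking that makes a full one laborious, might look like this (the `_b` suffix is the same hypothetical one used in the forms above):

```cpp
#include <cstddef>

// Minimal std::byte literal.  0x66, 102, 0146 and 0b1100110 all arrive
// here as the same unsigned long long, so one overload covers every
// literal form.  Caveat: values above 255 are silently truncated; the
// "considerable effort" is adding compile-time range checking, which
// needs the template (per-character) literal operator form.
constexpr std::byte operator""_b(unsigned long long v) {
    return std::byte{static_cast<unsigned char>(v)};
}

constexpr auto h = 0x66_b;
constexpr auto d = 102_b;
static_assert(h == d);
```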

David Brown
Jul 24, 2020, 2:46:04 AM
On 24/07/2020 07:51, Juha Nieminen wrote:
> Daniel P <daniel...@gmail.com> wrote:
>> C++ already had two byte types too many - char, signed char,
>> and unsigned char.
>
> I don't know what the motivation was originally to have three distinct char
> types in C (from which C++ "inherited" them), but in a way it actually
> makes sense, perhaps serendipitously.
>
> The type 'char' is supposed to be the most efficient byte type for the
> target platform, either a signed one or an unsigned one. Most usually
> it should be preferred especially when dealing with strings, as long
> as one is aware that it can be either signed or unsigned, and doesn't
> make any assumptions either way.
>
> Curiously, this is not just theoretical. There is a *modern* very
> concrete example where this has bitten many a developer in the posterior,
> with code compiling but working incorrectly, because it wrongly assumes
> that 'char' is signed.
>
> Namely in ARM processors (at least the 32-bit ones) it so happens that
> an unsigned char is more efficient than a signed one, and thus most
> compilers (such as gcc) will use the 'char' type as an unsigned char.
> Most notoriously this happens when compiling for the Raspberry Pi (and
> probably other ARM-based systems).

On most embedded targets, and most newer ABIs, plain char is unsigned,
because making "char" signed is a totally meaningless historical
artefact from the days of 7-bit ASCII as the only character set
supported by C. The signedness of plain char is specified in the ABI,
not chosen by the compiler - and pretty much every target except 32-bit
Windows and a few embedded microcontrollers has a proper ABI that pretty
much every compiler follows. (Though gcc, and some other compilers, may
let you override the signedness of char with a command-line switch.)

>
> Many a C and C++ program out there doesn't work correctly for the Raspi
> because it wrongly assumes that 'char' is signed. I have encountered
> actual examples.
>
> (Most often this happens because of a if(c < 0), which obviously
> always evaluates to false if char is unsigned.)
>

Any code that makes any assumptions about the signedness of plain char
is broken. If the signedness matters, make it explicit. (Usually
int8_t and uint8_t make vastly more sense in code than "signed char" and
"unsigned char". Or int_least8_t and uint_least8_t for maximal
portability.)

Unfortunately, you are right that people sometimes write such broken code.

David Brown
Jul 24, 2020, 2:48:40 AM
Why? I must be missing something here, besides my morning coffee.

Bo Persson
Jul 24, 2020, 3:52:20 AM
On 2020-07-24 at 07:51, Juha Nieminen wrote:
> Daniel P <daniel...@gmail.com> wrote:
>> C++ already had two byte types too many - char, signed char,
>> and unsigned char.
>
> I don't know what the motivation was originally to have three distinct char
> types in C (from which C++ "inherited" them), but in a way it actually
> makes sense, perhaps serendipitously.
>
> The type 'char' is supposed to be the most efficient byte type for the
> target platform, either a signed one or an unsigned one. Most usually
> it should be preferred especially when dealing with strings, as long
> as one is aware that it can be either signed or unsigned, and doesn't
> make any assumptions either way.

But people did. :-)

Dennis Ritchie writes in a C history paper that char's signedness was
unspecified, but just happened to be signed on their first implementations.

However, as soon as they did their first unsigned version (for a new
machine) it became apparent that users had already used char as a small
signed integer. And now their programs didn't work. Bad compiler!

So he added signed char to enable them to write more portable programs.

Soon he had to also add unsigned char for users of the new machines, so
they could move their programs to the older systems.

Keith Thompson
Jul 24, 2020, 1:55:23 PM
Bo Persson <b...@bo-persson.se> writes:
[...]
> Dennis Ritchie writes in a C history paper about char being
> unspecified, but just happened to be signed on their first
> implementations.
>
> However, as soon as they did their first unsigned version (for a new
> machine) it became apparent that users had already used char as a
> small signed integer. And now their programs didn't work. Bad
> compiler!
>
> So he added signed char to enable them to write more portable programs.
>
> Soon he had to also add unsigned char for users of the new machines,
> so they could move their programs to the older systems.

Hmm. I thought that unsigned char was added before signed char was.

In K&R1 (1978), "unsigned" could only be applied to int, and the
"signed" keyword didn't exist. K&R2 has the full modern set of integer
types (other than long long).

signed char isn't mentioned here:
https://www.bell-labs.com/usr/dmr/www/chist.html

[...]

--
Keith Thompson (The_Other_Keith) Keith.S.T...@gmail.com
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */

Keith Thompson
Jul 24, 2020, 1:58:49 PM
One way this error can occur is if the program stores the result of
getchar() in a char object rather than an int.
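The correct pattern is to keep the result in an int until EOF has been ruled out; a short sketch (the `ReadAll` helper is an illustrative name, reading from a FILE* so the same idea applies to fgetc):

```cpp
#include <cstdio>
#include <string>

// char cannot represent both EOF and every byte value, so the result
// of std::fgetc must stay in an int until EOF has been checked.
std::string ReadAll(std::FILE* f) {
    std::string out;
    int c;                                  // int, not char
    while ((c = std::fgetc(f)) != EOF)
        out.push_back(static_cast<char>(c));
    return out;
}
```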