template <typename Enum, typename Int>
optional<Enum> enum_cast(Int value);
requires: is_enum<Enum>, is_integral<Int>
enum color
{
red = 1,
green = 200,
blue = 3
};
static_assert(enum_cast<color>(200u) == color::green);
static_assert(enum_cast<color>(3) == color::blue);
static_assert(enum_cast<color>(4) == nullopt);
On Tuesday, 3 January 2017 01:56:05 BRST m.ce...@gmail.com wrote:
> template <typename Enum, typename Int>
> optional<Enum> enum_cast(Int value);
>
> requires: is_enum<Enum>, is_integral<Int>
>
> enum color
> {
> red = 1,
> green = 200,
> blue = 3
> };
How do you see compilers implementing:
void f(int x)
{
enum_cast<color>(x);
}
optional<color> enum_cast(int x)
{
switch(x)
{
case red:
case green:
case blue:
return static_cast<color>(x);
default:
return nullopt;
}
}
Hi,
I propose to add an enum_cast function that safely casts an integer value to a target enum type if that value represents a valid enumerator.
Do you expect it to handle the common case where people use enums as bitmasks of different enumerators?
On Tuesday, 3 January 2017 07:06:10 BRST m.ce...@gmail.com wrote:
> Compiler should synthesize a function like below:
>
> optional<color> enum_cast(int x)
> {
> switch(x)
> {
> case red:
> case green:
> case blue:
> return static_cast<color>(x);
> default:
> return nullopt;
> }
> }
How do you know whether a use of enum_cast is compile-time or whether it is
runtime?
static_cast is always compile time and always succeeds or produces a
compilation error. Therefore, checking at runtime is unnecessary.
dynamic_cast is almost always runtime and its result needs to be checked at
runtime.
Also, where did that optional come from? If it's going to use optional, then
it needs to be a library feature.
* to_enum : throw exception if error
* try_to_enum : returns optional<Enum> or an interface based on error_code (like from_chars). status_value and expected can be considered also once adopted.
Do we expect several error conditions?
On Tuesday, 3 January 2017 18:53:02 BRST Vicente J. Botet Escriba
wrote:
> On 03/01/2017 at 16:56, Thiago Macieira wrote:
> > On Tuesday, 3 January 2017 07:06:10 BRST m.ce...@gmail.com
> > wrote:
> >
> > Also, where did that optional come from? If it's going to use optional,
> > then it needs to be a library feature.
>
> I'm not for the optional return type, but we have already a lot of cases
> where syntactic sugar as range-based for loop and structure binding
> depends on library.
> So I'm not sure this could be an argument against optional.
Those are quite different things. In fact, they show how to *not* depend on
the library in the first place. Ranged fors and structured bindings only
require that functions called begin, end, and get exist. So I can add any
function of my own with those names and it will work. And structured bindings
are very specifically *not* a std::tuple. They're something different.
So enum_cast could return an optional-like opaque type defined by the compiler
(like structured bindings and lambdas) which can be converted to std::optional
on construction.
On Tuesday, January 3, 2017 at 1:25:20 PM UTC-5, Vicente J. Botet Escriba wrote:
On 03/01/2017 at 10:56, m.ce...@gmail.com wrote:
Hi,
I propose to add an enum_cast function that safely casts an integer value to a target enum type if that value represents a valid enumerator.
template <typename Enum, typename Int>
optional<Enum> enum_cast(Int value);
requires: is_enum<Enum>, is_integral<Int>
Why do we want this:
- validation of input data (e.g. coming from user input or deserialization).
- other?
I know this could be a purely library extension if based on reflection, but since we don't know when we will get reflection, and an implementation based on reflection may not be optimal, I think it should be implemented with compiler support (i.e. via a __builtin_* intrinsic).
Hi, I believe this could be useful. Library implementers could be free to use builtins or reflection once it is there.
Now, I believe that we need two kinds of functions. One that is a cast and that says just that we are casting. It would be the same as a static_cast, something like gsl::narrow_cast, so that we express the intent better. In addition we need the function you are proposing, similar to gsl::narrow.
The question is how the function reports errors. If we use exceptions, I agree with Nicol that a specific exception would be better.
I will then propose
* enum_cast ~static_cast
What's the point of `enum_cast`, save the fact that it would SFINAE on the given type being an actual enumeration? `static_cast` is the standard way of saying "make this integer an enum without checking". I don't think we need another way to spell that.
What is the point of narrow_cast? I believe it is useful to know the kind of cast we are using. This helps to inspect the code we (others) have written in a more efficient way.
* to_enum : throw exception if error
* try_to_enum : returns optional<Enum> or an interface based on error_code (like from_chars). status_value and expected can be considered also once adopted.
Do we expect several error conditions?
Well, there's only one failure state: the given value is not in the enumeration. As such, I see no need for using complicated objects that store errors; `optional` should be sufficient.
enum class traffic_light { unknown, red, yellow, green };
On Tuesday, 3 January 2017 11:59:46 BRST Nicol Bolas wrote:
> Lastly, having `enum_cast` work via some compiler-defined type is rather
> extreme for something as simplistic as this, don't you think? There are
> only so many ways to implement an "optional", and you can define implicit
> conversions from `std::optional` type to yours with little penalty. The
> conversion can even be `constexpr`, just like `std::optional`, so you would
> suffer no loss of performance in constexpr contexts.
Anything that is a compiler feature (a builtin or a core language expression)
needs to be able to choose between std::optional, std::experimental::optional
and std::__1::optional by just changing #includes.
On 03/01/2017 at 19:43, Nicol Bolas wrote:
What is the point of narrow_cast?
On Tuesday, January 3, 2017 at 1:25:20 PM UTC-5, Vicente J. Botet Escriba wrote:
On 03/01/2017 at 10:56, m.ce...@gmail.com wrote:
Hi, I believe this could be useful. Library implementers could be free to use builtins or reflection once it is there.
Hi,
I propose to add an enum_cast function that safely casts an integer value to a target enum type if that value represents a valid enumerator.
template <typename Enum, typename Int>
optional<Enum> enum_cast(Int value);
requires: is_enum<Enum>, is_integral<Int>
Why do we want this:
- validation of input data (e.g. coming from user input or deserialization).
- other?
I know this could be a purely library extension if based on reflection, but since we don't know when we will get reflection, and an implementation based on reflection may not be optimal, I think it should be implemented with compiler support (i.e. via a __builtin_* intrinsic).
Now, I believe that we need two kinds of functions. One that is a cast and that says just that we are casting. It would be the same as a static_cast, something like gsl::narrow_cast, so that we express the intent better. In addition we need the function you are proposing, similar to gsl::narrow.
The question is how the function reports errors. If we use exceptions, I agree with Nicol that a specific exception would be better.
I will then propose
* enum_cast ~static_cast
What's the point of `enum_cast`, save the fact that it would SFINAE on the given type being an actual enumeration? `static_cast` is the standard way of saying "make this integer an enum without checking". I don't think we need another way to spell that.
I believe it is useful to know the kind of cast we are using. This helps to inspect the code we (others) have written in a more efficient way.
If there is only one error case, yes, optional is a good candidate.
* to_enum : throw exception if error
* try_to_enum : returns optional<Enum> or an interface based on error_code (like from_chars). status_value and expected can be considered also once adopted.
Do we expect several error conditions?
Well, there's only one failure state: the given value is not in the enumeration. As such, I see no need for using complicated objects that store errors; `optional` should be sufficient.
However it would be weird to have a different exception depending on whether the user uses to_enum or try_to_enum (or whatever names are more appropriate).
How about:
enum class traffic_light { unknown, red, yellow, green };
traffic_light light = traffic_light::unknown; // my default
bool success = make_enum(light, x); // does not set light on failure; x of any integral type
Just considering my earlier make_enum comment further, the ultimate minimum for C++ is surely this?
traffic_light light{}; // whatever default
bool guaranteed_success = std::can_cast_enum<traffic_light>(x);
if (guaranteed_success)
light = static_cast<traffic_light>(x);
// else error condition.
It separates the validity check from the conversion.
On Tuesday, 3 January 2017 08:22:52 BRST m.ce...@gmail.com wrote:
> > How do you know whether a use of enum_cast is compile-time or whether it
> > is
> > runtime?
>
> I don't know and I don't care. The function should be constexpr to support
> both cases.
Well, I think people may care. A constexpr invocation has zero cost at
runtime, since it will have failed to convert by producing an error message,
or it will succeed by becoming a no-op (like static_cast, minus pointer
adjustment). But a non-constexpr invocation will have a runtime cost.
> What I'd like to hear now is whether anyone else would find such a
> functionality useful?
> Do you consider it a good idea to put it into C++ standard/library?
Maybe you should rename it to mean a check of whether the value is a valid enumerator, and drop the "cast" part.
Just considering my earlier make_enum comment further, the ultimate minimum for C++ is surely this?
traffic_light light{}; // whatever default
bool guaranteed_success = std::can_cast_enum<traffic_light>(x);
if (guaranteed_success)
light = static_cast<traffic_light>(x);
// else error condition.
It separates the validity check from the conversion, and then you can build whatever on that, i.e. assign, atomic assign, etc. This seems more C++-like, but forgets C, which probably people feel is not a concern anyway. Otherwise some special C and C++ syntax like can_cast_enum(traffic_light, x) would seem to cover both languages, if that's desirable.
On Wednesday, 4 January 2017 00:44:09 BRST m.ce...@gmail.com wrote:
> I thought about it, but I see no use case for just a check.
> In all the use cases I can think of, the user wants the cast after the check.
>
> Do you have an example in mind where such a check without a cast later
> would be useful?
The problem is that a cast that can fail needs a way out. dynamic_cast can
fail by returning nullptr (for pointers) or by throwing std::bad_cast (for
references).
So you need to find a way to report that failure.
And I don't want exceptions in my code.
On Wednesday, 4 January 2017 07:20:11 BRST m.ce...@gmail.com wrote:
> Why do you want to use some compiler invented type instead?
> If enum_cast is a function in standard library, why can't it use another
> type from same library?
Then it is a function in the standard library. That means it needs to be based
on something else to perform the check, like reflection.
Hi,
I propose to add an enum_cast function that safely casts an integer value to a target enum type if that value represents a valid enumerator.
template <typename Enum, typename Int>
optional<Enum> enum_cast(Int value);
requires: is_enum<Enum>, is_integral<Int>
enum color
{
red = 1,
green = 200,
blue = 3
};
static_assert(enum_cast<color>(200u) == color::green);
static_assert(enum_cast<color>(3) == color::blue);
static_assert(enum_cast<color>(4) == nullopt);
This should work for both scoped and unscoped enums. For opaque enums this would only check if the value fits in the underlying type.
Why do we want this:
- validation of input data (e.g. coming from user input or deserialization).
- other?
I know this could be a purely library extension if based on reflection, but since we don't know when we will get reflection, and an implementation based on reflection may not be optimal, I think it should be implemented with compiler support (i.e. via a __builtin_* intrinsic).
Regards,
Maciej
I hope this post assists you with your proposal. My suggestions are for your proposal to look something like this:
In <utility> I think add:
template<typename E, typename V> bool is_enum( V ev )
{
// compiler magic:
// Calling this routine causes the compiler to generate or call a routine
// that returns true if the value ev matches one of the enum E's
// values, otherwise false.
}
// Users call this routine to convert an integer to an enum. They call this if they aren't
// sure about their conversion data, don't mind exceptions, and want something simple like
// std::to_string. They might expect such a function to be present.
// Though to be fair I'm not sure we should encourage this?
// They should get good error messages.
template<typename E, typename V> E to_enum_or_throw(V ev)
{
if (is_enum<E>(ev))
return static_cast<E>(ev);
throw make_bad_enum<E,V>(ev);
}

template<typename E> class to_enum_result
{
public:
E enum_value;
bool is_valid;
};

// Call this if you can't use/afford exceptions. It seems optimal?
template<typename E, typename V> to_enum_result<E> to_enum( V ev ) noexcept
{
if (is_enum<E, V>(ev))
return to_enum_result<E>{static_cast<E>(ev), true};
return to_enum_result<E>{E{}, false};
}

// If is_valid is false after to_enum, the user can use these routines to get good error
// messages / report how they wish, when they wish:
* The caller has all the utilities here to build that, or work with optional directly, whatever they need.
* I think being able to get good error messages on failure is important. This tries to make decent errors available no matter what method is used, i.e. include the enum type name and the bad value in the error.
* I've used std::range_error as the exception type for failure, but another may be appropriate. bad_enum as an exception type contains the value that was bad and the name of the type it failed to convert to. This would allow people to extract the values to construct their own error messages (perhaps in other languages like French etc.). If std::bad_enum is too much API, it could be dropped for just std::range_error or something, as long as it contains a full description of the error; I think that's fine.
I'd like to know your and the Committee's disposition towards these ideas and views. Hope this helps.
Thanks,
GM
I'll try to create a first draft after I get some more comments w.r.t. exceptions.
For sure the baseline API must be exception-free. Do we really want to additionally support an API that throws on error? I'm rather inclined to leave it out, similarly as in to_chars/from_chars.
I hope this post assists you with your proposal. My suggestions are for your proposal to look something like this:
In <utility> I think add:
template<typename E, typename V> bool is_enum( V ev )
{
// compiler magic:
// Calling this routine causes the compiler to generate or call a routine
// that returns true if the value ev matches one of the enum E's
// values, otherwise false.
}
The 'is_enum' name is already used in the <type_traits> header.
Also, is_enum seems redundant to me, since to_enum already reports if the conversion succeeded. Instead of:
if (is_enum<MyEnum>(139))
you could write:
if (to_enum<MyEnum>(139).is_valid)
or even, if we add an explicit bool conversion operator to to_enum_result:
if (to_enum<MyEnum>(139))
I believe we need two checking functions:
On 2017-01-10 11:20, Nicol Bolas wrote:
> On Tuesday, January 10, 2017 at 10:49:34 AM UTC-5, Matthew Woehlke wrote:
>> On 2017-01-10 10:31, m.ce...@gmail.com wrote:
>>> For sure baseline API must be exception-free.
>>> Do we really want to additionally support API that throws on error?
>>
>> I *really* hope we can use something like std::expected; this is a
>> textbook use case for that, and makes it trivial to have a
>> throw-on-error API (just grab the value from the expected without
>> checking it first; this will turn around and throw if the value is not
>> there).
>
> But the fact that it's throwing the wrong exception makes it useless from
> an exception handling perspective.
Huh? I'm talking about std::expected, not std::optional. If you didn't
get a value, you get whatever error/exception (preferably an exception
in this case, since it doesn't need more storage than the enum and thus
doesn't cost extra) the API stuffed in instead, which would presumably
be a std::range_error or std::bad_enum_cast or whatever. IOW, the *SAME*
exception that an API that throws right away would throw.
For me, enumerators are any one of the named enum values. This set is a subset, not a superset, of the range of valid values. The standard says in 7.2/8:
8 For an enumeration whose underlying type is fixed, the values of the enumeration are the values of the underlying type. Otherwise, for an enumeration where emin is the smallest enumerator and emax is the largest, the values of the enumeration are the values in the range bmin to bmax, defined as follows: Let K be 1 for a two's complement representation and 0 for a ones' complement or sign-magnitude representation. bmax is the smallest value greater than or equal to max(|emin| − K, |emax|) and equal to 2^M − 1, where M is a non-negative integer. bmin is zero if emin is non-negative and −(bmax + K) otherwise. The size of the smallest bit-field large enough to hold all the values of the enumeration type is max(M,1) if bmin is zero and M + 1 otherwise. It is possible to define an enumeration that has values not defined by any of its enumerators. If the enumerator-list is empty, the values of the enumeration are as if the enumeration had a single enumerator with value 0.
As I said, until we have enums that consist only of their enumerators, there will always be a need to check whether a value is a valid value for the enumeration.
We could define a class that accepts only the enumerators as valid values, but the language enums can accept more values.
This is why I believe that the two checks are needed.
I don't know why the new C++11 enums with an explicit underlying type have a different range of valid values.
I'll be interested in knowing the rationale.
std::range_error was OK with me, as that could be constructed with a string, and the type more reflected the problem. What's wrong with this in your opinion?
A bad_enum class would have allowed an internal char buffer of fixed size to be used because we know what length string worst case we are storing here if that was the problem.
The question I have to ask is this: if you have an integer, why do you need to know if it will fit within the range of an enum with an implied underlying type?
What problem are you trying to solve? What code are you trying to write?
If the enum has a fixed underlying type, then you might need to know the range because you're using the enum as an ad-hoc strongly typed integer.
I personally despise this obvious abuse of a language feature, but C++17 has effectively canonized it, so there it is. Alternatively, you may be using that enum as a bitfield.
The thing is, that is a solved problem: get the underlying type with `std::underlying_type_t<E>`. That, and its corresponding `numeric_limits` will tell you everything you need to know about the range of that enumeration.
But if the enum has an implied underlying type, then why would you need to know if a particular integer (which does not match any enumerator) is within the valid range for that enum?
What are you trying to do that you need to do this?
If you don't have a problem to be solved with such a function, then there's really no point in adding one.
I don't know why the new C++11 enum with an explicit underlying type have a different range of valid values.
I'll be interested in knowing the rationale.
Because enums are integers. That's the rationale.
It would then create a string of the info and give you an exception object back that you could then throw or do whatever with.
So given:
namespace my { enum class traffic_light { stop, careful, go }; }
make_enum<traffic_light>(3) would produce an exception where what() would return an array with '3 is not a valid value for my::traffic_light'.
I think this is preferable to just having a std::bad_optional_access, with a what() string saying exactly what?
Sure, but... I'm not arguing against what you suggested. Indeed, I was specifically arguing against the idea that generic exceptions (whether `bad_optional_access` or ` bad_expected_access<E>`) were a good idea.
Are you sure you intend to be replying to me?
std::range_error was ok with me as that could be constructed with a string and the type more reflected the problem.What's wrong with this in your opinion?
I see no reason to use a generic exception when an exception type specific to enumeration casting could be employed instead. It's ultimately more descriptive and obvious what's going on.
A bad_enum class would have allowed an internal char buffer of fixed size to be used, because we know the worst-case length of the string we are storing here, if that was the problem.
How? An enumeration can be a member of any number of namespaces and/or classes. Thus its name can be quite long. While any particular implementation could use its internal compiler limits to give it a maximum length, that length would be pretty huge, tens if not hundreds of kilobytes long.
Better to dynamically allocate it than to throw such a gargantuan object around.
What is important is the range of values.
Because initializing the enum with an integer out of range is UB.
The question I have to ask is this: if you have an integer, why do you need to know if it will fit within the range of an enum with an implied underlying type?
Right.
What problem are you trying to solve? What code are you trying to write?
If the enum has a fixed underlying type, then you might need to know the range because you're using the enum as an ad-hoc strongly typed integer.
I don't want to solve problems that are already solved. We don't need any modification of the compiler to solve this case, but if it solves the more difficult case it could solve this one as well.
I personally despise this obvious abuse of a language feature, but C++17 has effectively canonized it, so there it is. Alternatively, you may be using that enum as a bitfield.
The thing is, that is a solved problem: get the underlying type with `std::underlying_type_t<E>`. That, and its corresponding `numeric_limits` will tell you everything you need to know about the range of that enumeration.
Because this is legal. I can assign it a value in the specified range. That's all. If I want my code to stay outside the UB world I should be able to check the conditions. I can of course do it for each particular case, but what we are talking about here is what the compiler could do for us in a generic way.
But if the enum has an implied underlying type, then why would you need to know if a particular integer (which does not match any enumerator) is within the valid range for that enum?
This is not a use case I would write myself, but I've seen it a lot of times: when you use enums as flags of a bitset.
What are you trying to do that you need to do this?
enum X { NONE=0, A=0x01, B=0x02, C=0x04, ALL=0x08 };
The valid enumerators don't correspond to the valid range, which in this case is any value between 0 and 15.
I was sure you will ask and say this ;-)
If you don't have a problem to be solved with such a function, then there's really no point in adding one.
I believe that you didn't understand my question. Let me explain with an example.
I don't know why the new C++11 enum with an explicit underlying type have a different range of valid values.
I'll be interested in knowing the rationale.
Because enums are integers. That's the rationale.
What is the difference between
enum X { NONE=0, A=0x01, B=0x02, C=0x04, ALL=0x08 };
and
enum class Y : unsigned char { NONE=0, A=0x01, B=0x02, C=0x04, ALL=0x08 };
?
X has a valid range 0..15, while Y has a valid range 0..255.
Why do we need this difference? Why does fixing the underlying type change the range of valid values?
Vicente
I don't want to solve problems that are already solved. We don't need any modification of the compiler to solve this case, but if it solves the more difficult case it could solve this one as well.
I personally despise this obvious abuse of a language feature, but C++17 has effectively canonized it, so there it is. Alternatively, you may be using that enum as a bitfield.
The thing is, that is a solved problem: get the underlying type with `std::underlying_type_t<E>`. That, and its corresponding `numeric_limits` will tell you everything you need to know about the range of that enumeration.
What are you both referring to here? I assumed modifying the compiler was a requirement to implement is_enum? Or maybe you know is_enum can already be implemented through metaprogramming without a compiler change? Or maybe you are talking about modifying the language (so still a compiler change), but so that is_enum can implement this feature?
On Wednesday, January 11, 2017 at 1:52:11 PM UTC-5, Vicente J. Botet Escriba wrote:
On 11/01/2017 at 00:06, Nicol Bolas wrote:
On Tuesday, January 10, 2017 at 5:26:11 PM UTC-5, Vicente J. Botet Escriba wrote:
On 10/01/2017 at 20:30, Nicol Bolas wrote:
On Tuesday, January 10, 2017 at 1:49:05 PM UTC-5, Vicente J. Botet Escriba wrote:
On 10/01/2017 at 16:31, m.ce...@gmail.com wrote:
I believe we need two checking functions:
* is_enumerator : checks if the explicit conversion from the integer is one of the explicit enumerators
* is_in_enum_range: checks if the integer is in the range of valid values. This is the precondition of the static_cast.
IIUC when the underlying type is explicit, the range of values are the range of the underlying type. However when the underlying type is implicit the range goes from the min to the max of the values of the enumerators.
Right, but `is_enumerator` is a functional superset of `is_in_enum_range`. Do people really need to ask only if a value is in the range of an enumerator?
For me, enumerators are any one of the named enum values. This set is a subset, not a superset, of the range of valid values. The standard says so in 7.2/8 (quoted earlier in the thread).
As I said, until we have enums that consist only of their enumerators, there will always be a need to check whether a value is a valid value for the enumeration.
We could define a class that accepts only the enumerators as valid values, but the language enums can accept more values.
This is why I believe that the two checks are needed.
Here's the thing.
There are enumerations that have a fixed underlying type (`enum class`es use `int` by default), and there are enumerations that have an implied underlying type (ie: non-`class` enums without a specified type).
What is important is the range of values.
Because initializing the enum with an integer out of range is UB.
The question I have to ask is this: if you have an integer, why do you need to know if it will fit within the range of an enum with an implied underlying type?
You're misunderstanding my question.
The situation you describe is one where:
1. You have an integer of arbitrary origin.
2. You want to convert it to an enum type.
3. That integer does not match one of the enumerators in that type.
4. The enum type does not have a fixed underlying type.
You have described the context very well.
What goal are you trying to achieve with all this? Or more to the point, why are you incapable of simply giving the enum an underlying type and thus making the question moot?
Right. What problem are you trying to solve? What code are you trying to write?
If the enum has a fixed underlying type, then you might need to know the range because you're using the enum as an ad-hoc strongly typed integer.
I don't want to solve problems that are already solved. We don't need any modification to the compiler to solve this case, but if it solves the more difficult case it could solve this one as well.

I personally despise this obvious abuse of a language feature, but C++17 has effectively canonized it, so there it is. Alternatively, you may be using that enum as a bitfield.
The thing is, that is a solved problem: get the underlying type with `std::underlying_type_t<E>`. That, and its corresponding `numeric_limits` will tell you everything you need to know about the range of that enumeration.
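A sketch of that solved-problem approach; the helper name `fits_underlying_range` is mine for illustration, not proposed API:

```cpp
#include <limits>
#include <type_traits>

// For an enum with a fixed underlying type, an integer is a valid enum value
// exactly when it fits in the underlying type. numeric_limits of the
// underlying type gives the bounds; comparing through a wide signed type
// avoids signed/unsigned surprises for the small types used here.
template <typename Enum, typename Int>
constexpr bool fits_underlying_range(Int value) {
    static_assert(std::is_enum<Enum>::value, "Enum must be an enumeration type");
    using U = std::underlying_type_t<Enum>;
    return static_cast<long long>(value)
               >= static_cast<long long>(std::numeric_limits<U>::min())
        && static_cast<long long>(value)
               <= static_cast<long long>(std::numeric_limits<U>::max());
}

enum class small : unsigned char { a = 1, b = 2 };

static_assert(fits_underlying_range<small>(255), "255 fits in unsigned char");
static_assert(!fits_underlying_range<small>(256), "256 does not fit");
```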
Because this is legal. I can assign it any value in the specified range; that's all. If I want my code to stay outside the UB world, I should be able to check those conditions. I can of course do it for each particular case, but what we are talking about here is what the compiler could do for us in a generic way.
But if the enum has an implied underlying type, then why would you need to know if a particular integer (which does not match any enumerator) is within the valid range for that enum?
What are you trying to do that you need to do this?

This is not a use case I would write myself, but I've seen it a lot of times: when you use enums as the flags of a bitset.
enum class X { NONE=0, A=0x01, B=0x02, C=0x04, ALL=0x08 };
The valid enumerators don't correspond to the valid range, which in this case is any value between 0 and 8.
But we can already answer that question. `X` is an `enum class`. As such, it always has a fixed underlying type. If you don't specify one, then it shall be `int`, and therefore `X` can legally assume any `int` value.
So the `std::underlying_type`-based solution will work fine.
It should also be noted that the standard-specified range guarantees the ability to use an enum with an implied underlying type as a bitfield. That is, if you perform bitwise operations with the enumerators, the results are guaranteed to fit in the range. So you have nothing to worry about for that use case.
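That guarantee can be illustrated with a flags-style unscoped enum (the enumerator values here are chosen for illustration):

```cpp
// Unscoped enum, no fixed underlying type. The largest enumerator is 0x4,
// so per 7.2/8 the range of valid values is 0..7 (2^3 - 1). Any bitwise OR
// of the enumerators therefore stays within the enumeration's valid range.
enum flags { none = 0, a = 0x1, b = 0x2, c = 0x4 };

constexpr flags all = static_cast<flags>(a | b | c);  // 0x7, still in range
static_assert(all == 0x7, "combined flags remain valid values of `flags`");
```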
I was sure you would ask and say this. ;-)

If you don't have a problem to be solved with such a function, then there's really no point in adding one.
I believe you didn't understand my question. Let me illustrate with an example.
I don't know why the new C++11 enums with an explicit underlying type have a different range of valid values.
I'd be interested in knowing the rationale.
Because enums are integers. That's the rationale.
What is the difference between
enum class X { NONE=0, A=0x01, B=0x02, C=0x04, ALL=0x08 };
and
enum class Y : unsigned char { NONE=0, A=0x01, B=0x02, C=0x04, ALL=0x08 };
?
X has a valid range 0..N, while Y has a valid range 0..255.
Why do we need this difference? Why does fixing the underlying type change the range of valid values?
In that case, both have a forced underlying type, as I pointed out above. So the reason for the differing ranges of values is obvious.
Now, let's assume you have revised your example to not use `enum class`. `enum X` does not have a fixed underlying type, so its range is determined by its enumerators. The reason that is different is because that's how it always was before, and there's terribly little reason to change it.
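The difference in the `X`/`Y` example then falls out of `std::underlying_type` directly; a sketch (with the missing `=` before `0x08` restored):

```cpp
#include <type_traits>

// Both are scoped enums, so both have a fixed underlying type:
// X gets the default (int), Y is declared with unsigned char.
enum class X { NONE = 0, A = 0x01, B = 0x02, C = 0x04, ALL = 0x08 };
enum class Y : unsigned char { NONE = 0, A = 0x01, B = 0x02, C = 0x04, ALL = 0x08 };

static_assert(std::is_same<std::underlying_type_t<X>, int>::value,
              "X can legally hold any int value");
static_assert(std::is_same<std::underlying_type_t<Y>, unsigned char>::value,
              "Y can legally hold any value in 0..255");
```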
In my earlier post, where I described at length how I thought the API should be, was there UB in that, and if so, where? I'm trying to understand where your UB comments are directed, as I'm not seeing where this happens.
What are you both referring to here?
I assumed modifying the compiler was a requirement to implement `is_enum`?
Or maybe you know `is_enum` can already be implemented through metaprogramming without a compiler change?
Or maybe you are talking about modifying the language (so still a compiler change), but so that `is_enum` can implement this feature?
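For what it's worth, mainstream standard libraries implement `std::is_enum` via a compiler intrinsic rather than pure metaprogramming; a sketch of that pattern (the `__is_enum` builtin exists in GCC, Clang, and MSVC):

```cpp
#include <type_traits>

// std::is_enum is typically a thin wrapper over the __is_enum compiler
// builtin; a pure-library emulation is possible but convoluted, which is
// why implementations reach for the intrinsic.
template <typename T>
struct my_is_enum : std::integral_constant<bool, __is_enum(T)> {};

enum color2 { red2, green2, blue2 };

static_assert(my_is_enum<color2>::value, "enums are detected");
static_assert(!my_is_enum<int>::value, "non-enums are rejected");
```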
The compiler is generating the code, so where does "generic" come into this?
I had std::any_enum_value in my earlier post, and make_bad_enum knew the type of the enum. The idea was that those two pieces of information allowed a value to be constructed that could express any enum as intended. I'm raising that in case it's helpful to this conversation, though it might not be, because I can't quite follow what problem is being solved here.
On 11/01/2017 at 21:49, gmis...@gmail.com wrote:
The interface you have proposed doesn't introduce any UB; it avoids it, because the test is over a subset of the valid values for an enumeration. The UB is defined in the standard; see the paragraph I quoted. What I'm saying is that we need a function that also checks exactly for the valid values. Is that so complex to understand?
The question is: does your any_enum_value check for the enumerator values, or for the valid range of the enumeration?
On 11/01/2017 at 22:21, Nicol Bolas wrote:
You have described the context very well.

On Wednesday, January 11, 2017 at 1:52:11 PM UTC-5, Vicente J. Botet Escriba wrote:
On 11/01/2017 at 00:06, Nicol Bolas wrote:
On Tuesday, January 10, 2017 at 5:26:11 PM UTC-5, Vicente J. Botet Escriba wrote:
Because initializing the enum with an integer out of range is UB.
What goal are you trying to achieve with all this? Or more to the point, why are you incapable of simply giving the enum an underlying type and thus making the question moot?

Because the enum is declared in a third-party library in C++98, or in a common part shared between an application using C++98 and another using C++2x.
From the text of the standard I sent before, the range is not the range of int. Could you point me to where you are concluding this from?
I'm not questioning the old C++98 behavior, but the new C++11 behavior (if we can still say "new" for C++11).
The C++11 behavior is much simpler: the range of an enum with a fixed underlying type is the range of its fixed underlying type. C++11 did not change the rules for the enum range of an enum without a fixed underlying type.