On 2015-11-20 14:14, Ville Voutilainen wrote:
> Potential fixes are apparently more complex than they seem to the naked
> eye; this code is valid:
>
> int x = {1LL};
I'd treat this as a red herring... *literals* appear to follow their own
rules. For instance, this:
char i = 0.0; // or {0.0}
...triggers neither a narrowing warning (error) nor a conversion
warning. IOW, at least GCC will silently permit things that would
otherwise be "bad" if it statically knows that no problem will occur.
(Note: AFAICT, 'int x = {1LL}' is still list-initialization; the
narrowing is permitted because 1LL is a constant expression whose value
fits in int.)
That doesn't excuse David's case of assigning 1000 to a char. That ought
to trip -Woverflow due to the compiler statically detecting that
overflow WILL occur. (It's also narrowing, of course, but that the
compiler flags neither the narrowing *nor the statically detectable
overflow* is just really, really bad.)
> This means that pure type-based analysis will not help a library implementation
> that would want to rely on a type trait.
Nope... red herring. This:
auto i = 1LL;
int x = i;
...triggers a -Wconversion warning as you'd expect. And even better,
this is a hard error:
auto i = {1LL};
int x = i; // error: can't convert initializer_list<long long> to int
Silent narrowing only occurs when trying to stuff a literal into a
smaller type where it is statically safe to do so. As soon as the
literal is assigned to a typed symbol, you start getting diagnostics as
expected. (Except in the case David noted.) So type-based analysis
should be fine; you won't be dealing with vaguely-typed literals at that
point.
Although, what we actually, *really* need is for the literal not to stop
being a literal (i.e. not to lose the combination of vague type and
statically-known value) quite so soon. This, unfortunately, sounds less
like a DR and more like a feature.
IOW, this:
auto x = {1LL}; // decltype(x) == std::integer_literal<1>
--
Matthew