
C++ (and some C) quiz questions


Juha Nieminen

May 16, 2022, 7:21:52 AM
In another forum I have, over the years, come up with C++ (and C) quiz
questions. Some of these may showcase the (perhaps needless) complexity
of the language, but anyway. How many could you answer without looking
it up?

----------------- Question 1 -----------------
// What does this print?

#include <iostream>

void foo(short i) { std::cout << "short: " << i << "\n"; }
void foo(int i) { std::cout << "int: " << i << "\n"; }

int main()
{
    short a = 1, b = 2;
    foo(a + b);
}

----------------- Question 2 -----------------
#include <iostream>

void foo(int i)
{ std::cout << "int: " << i << "\n"; }

void foo(int* ptr)
{ std::cout << "int*: " << ptr << "\n"; }

int main()
{
    foo(0);
}

// will:
// a) print int: 0
// b) print int*: 0x0
// c) give a compiler error because the call is ambiguous.

----------------- Question 3 -----------------
// Should this compile (according to the language standard) or not?

#include <iostream>

int main()
{
    int $name = 5;
    std::cout << $name << "\n";
}

----------------- Question 4 -----------------
// Can you tell what the problem is with this?

#include <vector>
#include <string>

int main()
{
    std::vector<std::string> v = { "a", "b", "c", "d" };
    v.insert(v.begin(), v.front());

    // Also, should this work correctly or not?
    v.push_back(v.front());
}

----------------- Question 5 -----------------
// What is the problem with this?

long double value = 0.1;

----------------- Question 6 -----------------
// What does this print?

#include <iostream>
#include <string>

using namespace std::literals;

int main()
{
    std::cout << std::string("guesthouse", 5)
              << std::string("guesthouse"s, 5)
              << '\n';
}

----------------- Question 7 -----------------
// What is the potential problem with this code?

std::FILE* inFile = std::fopen(filename, "r");

if(!inFile)
    std::cout << "Could not open " << filename << ": "
              << std::strerror(errno) << "\n";

----------------- Question 8 -----------------
// What is the using thing below doing, and when does it make a difference?

class MyClass: public BaseClass
{
 public:
    using BaseClass::foo;
    virtual void foo(int) override;
};

// (Note: All foo() functions in BaseClass are public.)

----------------- Question 9 -----------------
// What is wrong with this code?

void func(const std::vector<int>& values)
{
    for(std::vector<int>::iterator iter = values.begin();
        iter != values.end(); ++iter)
        std::cout << *iter << "\n";
}

----------------- Question 10 -----------------
// What does this print?

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main()
{
    signed char c = -1;
    uint32_t u = c;
    int64_t i = u;

    printf("%" PRIi64 "\n", i);
}

// a) -1
// b) 255
// c) 4294967295
// d) 18446744073709551615
// e) Nothing because it doesn't compile.

----------------- Question 11 -----------------
double foo(double d) { return d * 0.0; }

/* When the above is compiled with "gcc -O3" it results in:

// It explicitly multiplies the parameter by 0.0:
        vmulsd  xmm0, xmm0, QWORD PTR .LC0[rip]
        ret
.LC0:
        .long 0
        .long 0

If the "-ffast-math" option is added, then it results in:

// Ignore the parameter, just zero the xmm0 register
// and return it:
        vxorpd  xmm0, xmm0, xmm0
        ret

Why doesn't it do the latter in all cases?
*/

----------------- Question 12 -----------------
// What happens here?

#include <iostream>
#include <string>

void foobar(std::string s) { std::cout << "str\n"; }
void foobar(int i) { std::cout << "int\n"; }

int main()
{
    foobar({});
}

// a) It doesn't compile because it's invalid syntax.
// b) It doesn't compile because the call is ambiguous.
// c) It calls the first foobar() with an empty string.
// d) It calls the second foobar() with a 0.
// e) It's undefined behavior, so anything may happen.

----------------- Question 13 -----------------
int main()
{
    int i1 = { int() };
    int i2 = int();
    int i3 { int() };
    int i4 ( int() );
}

// Select one or more:
// a) Compiler error, invalid syntax.
// b) All four lines do the same thing and are interchangeable.
// c) Not all four lines are semantically the same thing, but they end up
// doing the same thing in practice.
// d) Not all four lines do the same thing. (How do they differ?)
// e) It compiles but it's undefined behavior. (Which ones are invalid?)

Paavo Helde

May 16, 2022, 10:36:03 AM
On 16.05.2022 14:21, Juha Nieminen wrote:
> In another forum I have, over the years, come up with C++ (and C) quiz
> questions. Some of these may showcase the (perhaps needless) complexity
> of the language, but anyway. How many could you answer without looking
> it up?

I believe I got two wrong: #12 really surprised me, and in #10 I misread
"signed char" as "unsigned char".

With #7 I have bitten myself repeatedly in the past. I finally ended up
writing and using an equivalent of strerror() which takes the parameter
by non-const reference.

Juha Nieminen

May 16, 2022, 12:14:11 PM
Paavo Helde <ees...@osa.pri.ee> wrote:
> I believe I got two wrong, #12 really surprised me and in #10 I misread
> "signed char" as "unsigned char".

When I originally posed the problem, I had just 'char', but then someone
pointed out that a mere 'char' could be signed or unsigned, which is
true. My intent was for it to be signed, so I explicitly specify it
as such.

This is actually a very common mistake that many C and C++ programmers
make: Assuming 'char' is signed. There are platforms where (by default)
it isn't. And not just obscure and obsolete platforms from the 1970's.
Modern platforms in common use.

(In the vast majority of cases, serendipitously, it doesn't matter if
the programmer made the assumption. But in way too many cases it does
matter, and makes programs buggy.)

Andrey Tarasevich

May 16, 2022, 1:49:30 PM
On 5/16/2022 4:21 AM, Juha Nieminen wrote:
> In another forum I have, over the years, come up with C++ (and C) quiz
> questions.

It would make a lot of sense to explicitly label each question as a C++
or a C question.

> ----------------- Question 3 -----------------
> // Should this compile (according to the language standard) or not?
>
> #include <iostream>
>
> int main()
> {
> int $name = 5;
> std::cout << $name << "\n";
> }

Weird question... The standard itself does not directly permit `$` in
identifiers, but implementations are allowed to be more permissive.

> ----------------- Question 4 -----------------
> // Can you tell what the problem is with this?
>
> #include <vector>
> #include <string>
>
> int main()
> {
> std::vector<std::string> v = { "a", "b", "c", "d" };
> v.insert(v.begin(), v.front());
>
> // Also, should this work correctly or not?
> v.push_back(v.front());
> }

"Problem"? DR#526 marked it a NAD with the remark "this is required to
work because the standard doesn't give permission for it not to work".
So, there's no "problem" here, at least from the user's point of view.

> ----------------- Question 5 -----------------
> // What is the problem with this?
>
> long double value = 0.1;

Again, depends on what you consider a "problem".

> ----------------- Question 6 -----------------
> // What does this print?
>
> #include <iostream>
> #include <string>
>
> using namespace std::literals;
>
> int main()
> {
> std::cout << std::string("guesthouse", 5)
> << std::string("guesthouse"s, 5)
> << '\n';
> }

A nice question. This is indeed counterintuitive.

But even if one doesn't know the answer right away, the very fact that
you decided to make it a question already hints heavily at what's going
to happen here :)

> // What is the using thing below doing, and when does it make a difference?
>
> class MyClass: public BaseClass
> {
> public:
> using BaseClass::foo;
> virtual void foo(int) override;
> };
>
> // (Note: All foo() functions in BaseClass are public.)

I'm not sure what `virtual` and `override` are doing here. If I
correctly understand your intent, nothing will change if you just remove
both. Are they intended as a red herring? Or am I missing something
important?

> ----------------- Question 10 -----------------
> // What does this print?
>
> #include <stdio.h>
> #include <stdint.h>
> #include <inttypes.h>
>
> int main()
> {
> signed char c = -1;
> uint32_t u = c;
> int64_t i = u;
>
> printf("%" PRIi64 "\n", i);
> }
>

If you are into that whole brevity thing, then you don't need to
explicitly include <stdint.h>. The standard specifies that <inttypes.h>
brings <stdint.h> with it.

> ----------------- Question 12 -----------------
> // What happens here?
>
> #include <iostream>
> #include <string>
>
> void foobar(std::string s) { std::cout << "str\n"; }
> void foobar(int i) { std::cout << "int\n"; }
>
> int main()
> {
> foobar({});
> }


Must be a recent addition based on the "Uniform initialization
ambiguity" thread...

--
Best regards,
Andrey

Christian Gollwitzer

May 16, 2022, 2:03:07 PM
On 16.05.22 at 13:21, Juha Nieminen wrote:
> In another forum I have, over the years, come up with C++ (and C) quiz
> questions. Some of these may showcase the (perhaps needless) complexity
> of the language, but anyway. How many could you answer without looking
> it up?

Thank you for the nice quiz. #5 and #11 are the only ones I could
immediately see, because I'm doing numerical math in C++. I haven't
tried compiling the other samples, but this will hopefully be instructive.

Christian

Alf P. Steinbach

May 16, 2022, 2:51:24 PM
Re #5, I guess that the problem with

long double value = 0.1;

... is that when `long double` doesn't have the same representation as
`double`, one may get a less precise value than with

long double value = 0.1L;

Is that it?

---

Unfortunately, if that is indeed the problem, then in my experimentation
(MSVC, g++) curly braces don't help ensure the right type for the
initializer.

One can however use code like this:


#include <utility>
#include <type_traits>

template<
    class R, class Arg,
    class = std::enable_if_t<std::is_same_v<R, Arg>>
    >
auto of_type_( const Arg& v )
    -> R
{ return v; }

auto main() -> int
{
    const auto x = of_type_<long double>( 0.1L );
    const auto y = of_type_<long double>( x );
    (void) x; (void) y;
}


Cheers,

- Alf

Christian Gollwitzer

May 16, 2022, 2:59:45 PM
On 16.05.22 at 20:51, Alf P. Steinbach wrote:
> On 16 May 2022 20:02, Christian Gollwitzer wrote:
>> On 16.05.22 at 13:21, Juha Nieminen wrote:
>>> In another forum I have, over the years, come up with C++ (and C) quiz
>>> questions. Some of these may showcase the (perhaps needless) complexity
>>> of the language, but anyway. How many could you answer without looking
>>> it up?
>>
>> Thank you for the nice quiz. #5 and #11 are the only ones I could
>> immediately see, because I'm doing numerical math in C++. I haven't
>> tried compiling the other samples, but this will hopefully be
>> instructive.
>
> Re #5, I guess that the problem with
>
>     long double value = 0.1;
>
> ... is that when `long double` doesn't have the same representation as
> `double`, one may get a less precise value than with
>
>     long double value = 0.1L;
>
> Is that it?

That's how I see it. 0.1 is a double constant, and since 1/10 can't be
represented exactly in binary (assuming binary floats), the value in
"value" is not the closest possible approximation to 0.1, which is 0.1L.
You usually wouldn't get a warning, either. In the case of float x = 0.1,
the 0.1 is converted to a narrower type, which typically triggers a
warning; Juha's line, however, looks perfectly fine to the compiler.

I don't get the point of the template code you wrote afterwards.

Christian

Manfred

May 16, 2022, 3:56:10 PM
On 5/16/2022 8:59 PM, Christian Gollwitzer wrote:
> I don't get the point of the template code you wrote afterwards.

It is meant to give a compile time error if the initializer expression
has a different type than the variable to be initialized.

Christian Gollwitzer

May 16, 2022, 4:09:54 PM
On 16.05.22 at 21:55, Manfred wrote:
OK I suspected something like this. However, if that means I'd have to
write the ridiculously complex

const auto x = of_type_<long double>( 0.1L );

instead of

long double x = 0.1L;

or even

auto x = 0.1L;

I'm questioning the seriousness of the proposal.

BTW, in old C when the types were omitted, it was implicit int. How
about "implicit auto" for today? That way, one could write

x = 0.1L;

or
f(x, y) {
    return 3*x+y;
}

to get a templated function with type inference and get closer to modern
languages?

Christian

Andrey Tarasevich

May 16, 2022, 4:47:47 PM
On 5/16/2022 1:09 PM, Christian Gollwitzer wrote:
> BTW, in old C when the types were omitted, it was implicit int.

However even then you had to designate a declaration as a declaration by
specifying at least a qualifier or a storage class specifier. You could say

const x = 1; /* relies on "implicit int" */

but you could not just say

x = 1;

and expect "implicit int" to kick in.

In other words, "implicit int" did not permit you to use undeclared
variables. There was no such thing as "implicitly declared variables" in
old C (as opposed to functions). You still had to meticulously
pre-declare your variables.

> How about "implicit auto" for today?

Well, again, this is really not about "implicit auto". This is about
permitting implicit variable declarations, i.e. automatic treatment of
an expression statement (of some restricted form?) as a declaration
statement. This is a way bigger change than just "implicit auto".

--
Best regards,
Andrey



Ben

May 16, 2022, 5:19:40 PM
Andrey Tarasevich <andreyta...@hotmail.com> writes:

> On 5/16/2022 1:09 PM, Christian Gollwitzer wrote:
>> BTW, in old C when the types were omitted, it was implicit int.
>
> However even then you had to designate a declaration as a declaration
> by specifying at least a qualifier or a storage class specifier.

Not always.

> You could say
>
> const x = 1; /* relies on "implicit int" */

Yes, and (ironically, given the suggestion for "implicit auto" in C++)
it was often auto that was used. We have auto today because of implicit
int of old.

> but you could not just say
>
> x = 1;
>
> and expect "implicit int" to kick in.

You could at file scope.

> In other words, "implicit int" did not permit you to use undeclared
> variables.

Yes, but you didn't need anything but the name in certain contexts. For
example this declares a K&R C function that takes an int and returns an
int:

f(x) { return x+1; }

--
Ben.

Andrey Tarasevich

May 16, 2022, 5:53:16 PM
On 5/16/2022 2:19 PM, Ben wrote:
>
>> but you could not just say
>>
>> x = 1;
>>
>> and expect "implicit int" to kick in.
>
> You could at file scope.
>
>> In other words, "implicit int" did not permit you to use undeclared
>> variables.
>
> Yes, but you didn't need anything but the name in certain contexts. For
> example this declares a K&R C function that takes an int and returns an
> int:
>
> f(x) { return x+1; }
>

Yes, but in both cases these are contextually forced to be interpreted
as declarations. So, the point still stands: even in old C one had to
introduce a variable before being able to use it. There was no ambiguity
between declarations and statements.

--
Best regards,
Andrey

Paavo Helde

May 16, 2022, 5:54:52 PM
On 16.05.2022 23:09, Christian Gollwitzer wrote:
> BTW, in old C when the types were omitted, it was implicit int. How
> about "implicit auto" for today? That way, one could write
>
>     x = 0.1L;
>
> or
>     f(x, y) {
>         return 3*x+y;
>     }
>
> to get a templated function with type inference and get closer to modern
> languages?

You mean sloppy languages, not modern. At the same time, these sloppy
languages are currently trying to move closer to strictly typed
languages, as - surprise-surprise - sloppy code is not so easy to
maintain. That's why you can now write e.g. in Python things like

def headline(text: str, align: bool = True) -> str:



Ben

May 16, 2022, 5:57:38 PM
Andrey Tarasevich <andreyta...@hotmail.com> writes:

> On 5/16/2022 2:19 PM, Ben wrote:
>>
>>> but you could not just say
>>>
>>> x = 1;
>>>
>>> and expect "implicit int" to kick in.
>> You could at file scope.
>>
>>> In other words, "implicit int" did not permit you to use undeclared
>>> variables.
>> Yes, but you didn't need anything but the name in certain contexts. For
>> example this declares a K&R C function that takes an int and returns an
>> int:
>> f(x) { return x+1; }
>>
>
> Yes, but in both cases these are contextually forced to be interpreted
> as declarations. So, the point still stands: even in old C one had to
> introduce a variable before being able to use it.

That's why I was agreeing with you. (You saw my "yes" I presume).

I was disagreeing with this:

"However even then you had to designate a declaration as a declaration
by specifying at least a qualifier or a storage class specifier."

because it's not true.

--
Ben.

Ben

May 16, 2022, 7:34:41 PM
I doubt CG was referring to Python since it does no type inference. He
may be thinking of languages like Haskell that are strongly typed but
include very effective type inference.

Of course I don't think C++ can or should go down that route, but that's
another issue.

> def headline(text: str, align: bool = True) -> str:

--
Ben.

Juha Nieminen

May 17, 2022, 12:42:02 AM
Christian Gollwitzer <auri...@gmx.de> wrote:
>> Re #5, I guess that the problem with
>>
>>     long double value = 0.1;
>>
>> ... is that when `long double` doesn't have the same representation as
>> `double`, one may get a less precise value than with
>>
>>     long double value = 0.1L;
>>
>> Is that it?
>
> That's how I see it. 0.1 is a double constant and since 1/10 can't be
> represented exactly in binary (assuming binary floats), the value in
> "value" is not the closes approximation to 0.1 possible, which is 0.1L.

It's easy to make such mistakes, and not just in variable initialization,
but also pretty much anywhere where a literal is used, like:

x = y * 0.1; // If x and y are long double, precision is lost

It's also a good reason to never use literals and instead always use
const(expr) variables for all values, even the "literals". (Because that
way the type of everything can be easily changed.)

(Of course nowadays this is less of an issue, sort of, at least in x86
code: 'long double' as an 80-bit floating point type is deprecated in
hardware, since using it forces the compiler to emit x87 FPU
instructions, which are measurably slower than SSE. That makes long
double a very poor choice for efficient number-crunching. However, the
time may well soon come when long double makes a resurgence.)

Christian Gollwitzer

May 17, 2022, 2:24:30 AM
On 17.05.22 at 01:34, Ben wrote:
> Paavo Helde <ees...@osa.pri.ee> writes:
>
>> On 16.05.2022 23:09, Christian Gollwitzer wrote:
>>> BTW, in old C when the types were omitted, it was implicit int. How
>>> about "implicit auto" for today? That way, one could write
>>>     x = 0.1L;
>>> or
>>>     f(x, y) {
>>>         return 3*x+y;
>>>     }
>>> to get a templated function with type inference and get closer to modern
>>> languages?
>>
>> You mean sloppy languages, not modern. At the same time, these sloppy
>> languages are currently trying to move closer to strictly typed
>> languages, as - surprise-surprise - sloppy code is not so easy to
>> maintain. That's why you can now write e.g. in Python things like
>
> I doubt CG was referring to Python since it does no type inference. He
> may be thinking of languages like Haskell that are strongly typed but
> include very effective type inference.

Yes, indeed, I was thinking about other languages with static typing
(Python does have strong typing, but it's dynamic). For example, have a
look at Nim: https://nim-lang.org/

It looks similar to Python, but it uses type inference to deduce the
type of a variable from its first assignment. Hence, it is statically
compiled and achieves comparable overall performance.

To distinguish variable declaration from assignment, Nim uses "var":
"var x = 0" declares a new variable initialized to 0, whereas "x = 0"
assigns zero. This is not unlike how "auto x = 0" and "x = 0" work in
current C++, even though "auto" is a very strange keyword for that.


Christian

Juha Nieminen

May 17, 2022, 4:22:01 AM
Christian Gollwitzer <auri...@gmx.de> wrote:
> It looks similar to Python, but it uses type inferencing to deduce the
> type of a variable from the first assignment.

Does it support integers of different sizes (8-bit, 16-bit, 32-bit,
64-bit...)? If yes, how do you specify which type of integer you want?

> This is not unlike, in current C++ "auto x= 0" and "x=0" would
> work - even though "auto" is a very srange keyword for that.

The standardization committee seems to have a weird attitude towards
new reserved keywords: they want to minimize the number of them in newer
standards (which is why they eg. reused the 'auto' keyword, which had
been reserved since the very start), yet they are also happy to add new
reserved keywords almost at a whim (thread_local, co_yield,
requires...)

They want to both keep the cake and eat it, which is a bit weird.

I suppose 'auto' is not the *worst* possible choice for the role it
currently has. (Much better than eg. "=0" to denote a pure virtual
function.)

Ben

May 17, 2022, 6:06:29 AM
Juha Nieminen <nos...@thanks.invalid> writes:

> Christian Gollwitzer <auri...@gmx.de> wrote:
>> It looks similar to Python, but it uses type inferencing to deduce the
>> type of a variable from the first assignment.
>
> Does it support integers of different sizes (8-bit, 16-bit, 32-bit,
> 64-bit...)? If yes, how do you specify which type of integer you want?

In Nim, literals have iXX appended (3i16 for example). As is so often
the case, there's a bunch of implicit conversions to make writing
arithmetic expressions easier. Haskell does not do any implicit
conversions, much to the consternation of beginners, but its type
classes do help to make it simpler than it might otherwise be.

--
Ben.

Ben

May 17, 2022, 6:13:06 AM
Juha Nieminen <nos...@thanks.invalid> writes:

> Christian Gollwitzer <auri...@gmx.de> wrote:
>>> Re #5, I guess that the problem with
>>>
>>>     long double value = 0.1;
>>>
>>> ... is that when `long double` doesn't have the same representation as
>>> `double`, one may get a less precise value than with
>>>
>>>     long double value = 0.1L;
>>>
>>> Is that it?
>>
>> That's how I see it. 0.1 is a double constant and since 1/10 can't be
>> represented exactly in binary (assuming binary floats), the value in
>> "value" is not the closes approximation to 0.1 possible, which is 0.1L.
>
> It's easy to make such mistakes, and not just in variable initialization,
> but also pretty much anywhere where a literal is used, like:
>
> x = y * 0.1; // If x and y are long double, precision is lost

7.4 Usual arithmetic conversions

(1.2) — If either operand is of type long double, the other shall be
converted to long double.

--
Ben.

Paavo Helde

unread,
May 17, 2022, 7:37:23 AM5/17/22
to
On 17.05.2022 13:12, Ben wrote:
> Juha Nieminen <nos...@thanks.invalid> writes:
>>
>> x = y * 0.1; // If x and y are long double, precision is lost
>
> 7.4 Usual arithmetic conversions
>
> (1.2) — If either operand is of type long double, the other shall be
> converted to long double.

Yes, and what happens when 0.1 is converted to long double (assuming
long double is larger than double)? You will get a wrong value

0.100000000000000005551

instead of the intended more precise value

0.100000000000000000001




Tim Rentsch

May 17, 2022, 7:53:41 AM
Ben <ben.u...@bsb.me.uk> writes:

> Juha Nieminen <nos...@thanks.invalid> writes:
>
>> Christian Gollwitzer <auri...@gmx.de> wrote:
>>
>>>> Re #5, I guess that the problem with
>>>>
>>>> long double value = 0.1;
>>>>
>>>> ... is that when `long double` doesn't have the same representation as
>>>> `double`, one may get a less precise value than with
>>>>
>>>> long double value = 0.1L;
>>>>
>>>> Is that it?
>>>
>>> That's how I see it. 0.1 is a double constant and since 1/10 can't be
>>> represented exactly in binary (assuming binary floats), the value in
>>> "value" is not the closes approximation to 0.1 possible, which is 0.1L.
>>
>> It's easy to make such mistakes, and not just in variable initialization,
>> but also pretty much anywhere where a literal is used, like:
>>
>> x = y * 0.1; // If x and y are long double, precision is lost
>
> 7.4 Usual arithmetic conversions
>
> (1.2) ? If either operand is of type long double, the other shall be
> converted to long double.

But that doesn't change the problem of potential loss of
precision, because the representation (and hence the particular
value) of the constant 0.1 is chosen before that value is
converted to long double. The constant 0.1, being of type
double, doesn't have to have less precision than 0.1L, but
certainly it can.

Ben

May 17, 2022, 9:31:27 AM
Yes, of course. Sorry for the noise. I was thinking about the more
general case, not the case of a literal.

--
Ben.

Juha Nieminen

May 18, 2022, 12:40:51 AM
Tim Rentsch <tr.1...@z991.linuxsc.com> wrote:
> But that doesn't change the problem of potential loss of
> precision, because the representation (and hence the particular
> value) of the constant 0.1 is chosen before that value is
> converted to long double. The constant 0.1, being of type
> double, doesn't have to have less precision than 0.1L, but
> certainly it can.

AFAIK no C/C++ compiler will second-guess the programmer and assume that
the literal was meant to be a long double literal. If you specify a
literal of type double, the compiler will assume you meant it (and will
just convert it to long double without adding any precision to the
resulting value.)

With double vs. long double the loss in precision isn't extremely
drastic (although in some calculations it could accumulate to
significant proportions). However, people often make this mistake when
using a third-party multiple-precision library that supports floating
point values of arbitrary size. If you are calculating with eg.
1024-bit floating point values, make sure you initialize them properly
(ie. do not initialize such a value with eg. the literal 0.1).

Manfred

May 18, 2022, 7:30:06 AM
I believe Tim was referring to the fact that the standard does not
mandate for long double to have more precision than double.

Tim Rentsch

May 18, 2022, 9:01:24 AM
Juha Nieminen <nos...@thanks.invalid> writes:

> Tim Rentsch <tr.1...@z991.linuxsc.com> wrote:
>
>> But that doesn't change the problem of potential loss of
>> precision, because the representation (and hence the particular
>> value) of the constant 0.1 is chosen before that value is
>> converted to long double. The constant 0.1, being of type
>> double, doesn't have to have less precision than 0.1L, but
>> certainly it can.
>
> AFAIK no C/C++ compiler will second-guess the programmer and
> assume that the literal was meant to be a long double literal. If
> you specify a literal of type double, the compiler will assume you
> meant it [...]

My statement is not about what compilers do but only about what
the respective standards allow.

Tim Rentsch

May 18, 2022, 9:57:52 AM
That is one possibility, but I didn't mean just that.

Here is the original context:

long double value = 0.1;

I'm not sure what C++ allows or doesn't allow for the value of
the literal 0.1.

For C, my understanding is that the current C standard allows the
constant 0.1 to be represented in the format, and precision, of a
long double even though its type is double. (The rules for
floating constants in C have changed over time, so I'm not sure if
that allowance might be different for earlier C standards.)

The next C standard apparently will be explicit on this point -
in the n2731 draft of the C standard, section 6.4.4.2 paragraph 6
says this in part:

The values of floating constants may be represented in
greater range and precision than that required by the type
(determined by the suffix); the types are not changed
thereby.

Paavo Helde

May 18, 2022, 10:37:26 AM
Doesn't it just mean that in the program code, one can write the literal
with more precision than needed, just in case the type may have more
precision in another or future implementation?

const double pi = 3.141592653589793238462643383279502884197;



Juha Nieminen

May 19, 2022, 10:44:07 AM
Paavo Helde <ees...@osa.pri.ee> wrote:
>> That is one possibility, but I didn't mean just that.
>>
>> Here is the original context:
>>
>> long double value = 0.1;
>>
>> I'm not sure what C++ allows or doesn't allow for the value of
>> the literal 0.1.
>>
>> For C, my understanding is that the current C standard allows the
>> constant 0.1 to be represented in the format, and precision, of a
>> long double even though its type is double. (The rules for
>> floating constants in C has changed over time so I'm not sure if
>> that allowance might be different for earlier C standards.)
>>
>> The next C standard apparently will be explicit on this point -
>> in the n2731 draft of the C standard, section 6.4.4.2 paragraph 6
>> says this in part:
>>
>> The values of floating constants may be represented in
>> greater range and precision than that required by the type
>> (determined by the suffix); the types are not changed
>> thereby.
>
> Doesn't it just mean that in the program code, one can write the literal
> with more precision than needed, just in case the type may have more
> precision in another or future implementation?
>
> const double pi = 3.141592653589793238462643383279502884197;

There may indeed be a bit of confusion of concepts at play here. After
all, there are three stages of compilation at which the precision of a
floating point literal plays a role: the ASCII representation of the
value in the source code, the internal value that the compiler converts
that ASCII representation to, and the value that ends up in the
executable binary. All three may (in a sense, when it comes to the
first form) use different precisions.

I really think that something like a programming language standard
should be clearer and less ambiguous about things like this (unless,
perhaps, the standard specifies somewhere else precisely what it means
by "values" being "represented".)

Tim Rentsch

May 19, 2022, 11:21:56 AM
The short answer to this question is no. That seems obvious to
me, but let me try to give a more complete explanation.

When talking about floating point, the C standard uses the terms
range and precision in relation to aspects of elements in the
abstract machine. (There are separate notions of "precision"
that pertain to integer types or to the *printf() functions, but
these uses do not concern us here.)

In contrast, a floating constant occurs in program source and is
just a sequence of characters. A floating constant has a source
form but does not have a range or precision, as the C standard
uses those terms.

The word "type" is used both to mean a compile-time notion that
is manipulated during compilation and to describe an internal
format that occurs inside the abstract machine at run time. In
many cases these two notions are used interchangeably, but they
aren't quite the same, and pointedly so in the case of floating
point. For example, in n1570 (a C11 draft), 6.3.1.8 paragraph 2
says this:

The values of floating operands and of the results of
floating expressions may be represented in greater range and
precision than that required by the type; the types are not
changed thereby.

This sentence illustrates the distinction between "type" as a
compile-time notion and a run-time internal format, which may
be different than the internal format of the compile-time type.

Floating constants have a compile-time type (which is determined
by their suffix, or lack thereof). However the type does not
necessarily determine the internal format used to represent the
constant. Again from n1570, 6.4.4.2 paragraph 5 says in its
last sentence:

All floating constants of the same source form(75) shall
convert to the same internal format with the same value.

The footnote numbered 75 gives clarifying examples:

1.23, 1.230, 123e-2, 123e-02, and 1.23L are all different
source forms and thus need not convert to the same internal
format and value.

After reading the above explanation I hope it is clear that the
excerpt from n2731 refers to what internal format is used, and
does not refer to any aspect of the source form (except of course
indirectly because constants having the same source form must use
the same internal format and have the same value).

Does that make more sense now?