
Don't be fooled by cpp.sh


woodb...@gmail.com

Dec 14, 2019, 12:28:10 AM
Shalom, comp.lang.c++,

I was just looking at the website cpp.sh (C++ shell) and the
latest standard they have listed (as of December 2019) is
2014 C++.

I'm here to tell you that on-line C++ is alive and
well. My code generator requires 2017 C++ and has been
updated hundreds of times since 2017. Imo the future
of C++ is bright, especially with respect to on-line code
generation.

Please don't be fooled by cpp.sh and others like it. There
are some fools imo here. Perhaps one of them has "blessed"
us with cpp.sh.


Brian
Ebenezer Enterprises - Enjoying programming again.
https://github.com/Ebenezer-group/onwards

bol...@nowhere.co.uk

Dec 14, 2019, 4:23:24 AM
On Fri, 13 Dec 2019 21:28:01 -0800 (PST)
woodb...@gmail.com wrote:
>Shalom, comp.lang.c++,
>
>I was just looking at the website cpp.sh (C++ shell) and the
>latest standard they have listed (as of December 2019) is
>2014 C++.
>
>I'm here to tell you that on-line C++ is alive and
>well. My code generator requires 2017 C++ and has been
>updated hundreds of times since 2017. Imo the future
>of C++ is bright, especially with respect to on-line code
>generation.
>
>Please don't be fooled by cpp.sh and others like it. There
>are some fools imo here. Perhaps one of them has "blessed"
>us with cpp.sh.

Who the hell cares about online C++ compilers? Anyone who uses C++ will
already have a far superior compiler on their own machine so what purpose
do they serve other than stroking the egos of the people writing them?

Vir Campestris

Dec 14, 2019, 12:19:35 PM
On 14/12/2019 09:23, bol...@nowhere.co.uk wrote:
> Who the hell cares about online C++ compilers? Anyone who uses C++ will
> already have a far superior compiler on their own machine so what purpose
> do they serve other than stroking the egos of the people writing them?

I care. Just the other day I was testing some code across different
compilers. The online compilers let me try half a dozen different
compilers on my code fragment. I only have GCC installed on my machine.

Andy

Öö Tiib

Dec 14, 2019, 12:54:21 PM
With such a complex and quickly evolving language as C++ it is a good
idea to have a couple of different compilers installed, plus some static
analysis tools as well. When any of such clever tools (written by quite
good programmers) becomes confused about your code, it is almost
guaranteed that another person (or even you yourself a few months later)
will be even more confused by it.

woodb...@gmail.com

Dec 14, 2019, 3:01:08 PM
On Saturday, December 14, 2019 at 11:54:21 AM UTC-6, Öö Tiib wrote:
> On Saturday, 14 December 2019 19:19:35 UTC+2, Vir Campestris wrote:
> >
> > I care. Just the other day I was testing some code across different
> > compilers. The online compilers let me try half a dozen different
> > compilers on my code fragment. I only have GCC installed on my machine.
>
> With so complex and quickly evolving language like C++ it is good idea
> to have couple of different compilers installed plus some static
> analysis tools as well.

It takes a few minutes to install one, and that's if everything
goes well. The network usage from downloading is another factor.

> When any of such clever tools (written by quite
> good programmers) become confused about your code then it is almost
> guaraneed that other person (or even you yourself few months later)
> will be even more confused about it.

Compilers aren't like human intelligence imo. When someone
says compilers are smart I laugh. If they were smart, they
would have transformed themselves into services 20 years
ago.

Have you heard of the biggest ball of twine in Minnesota?
Weird Al wrote a song about it:
https://duckduckgo.com/?q=weird+al+twine+ball&t=h_&ia=videos&iax=videos&iai=Tcw326PJuDw

This is a history of the ball from 2019:
https://duckduckgo.com/?q=largest+ball+twine&iax=videos&ia=videos&iai=JTGAJDdFsGw

That's not going to transform itself either.


Öö Tiib

Dec 14, 2019, 4:51:53 PM
On Saturday, 14 December 2019 22:01:08 UTC+2, woodb...@gmail.com wrote:
> On Saturday, December 14, 2019 at 11:54:21 AM UTC-6, Öö Tiib wrote:
> > On Saturday, 14 December 2019 19:19:35 UTC+2, Vir Campestris wrote:
> > >
> > > I care. Just the other day I was testing some code across different
> > > compilers. The online compilers let me try half a dozen different
> > > compilers on my code fragment. I only have GCC installed on my machine.
> >
> > With so complex and quickly evolving language like C++ it is good idea
> > to have couple of different compilers installed plus some static
> > analysis tools as well.
>
> It takes a few minutes to install one, and that's if everything
> goes well. The network usage from downloading is another factor.

Installing consumes very little computer resources, so we can do
everything we want while it installs. And the price of a terabyte of
storage is so low that there is next to no reason to uninstall
anything ever nowadays.

>
> > When any of such clever tools (written by quite
> > good programmers) become confused about your code then it is almost
> > guaraneed that other person (or even you yourself few months later)
> > will be even more confused about it.
>
> Compilers aren't like human intelligence imo. When someone
> says compilers are smart I laugh. If they were smart, they
> would have transformed themselves into services 20 years
> ago.

People who write a compiler make it translate the code as wisely as
they would do it themselves. As the compiler writers are usually
far smarter than their users ... like you ... that laughter is
foolish. Your last sentence is just demagogy promoting online
services.

>
> Have you heard of the biggest ball of twine in Minnesota?
> Weird Al wrote a song about it:
> https://duckduckgo.com/?q=weird+al+twine+ball&t=h_&ia=videos&iax=videos&iai=Tcw326PJuDw
>
> This is a history of the ball from 2019:
> https://duckduckgo.com/?q=largest+ball+twine&iax=videos&ia=videos&iai=JTGAJDdFsGw
>
> That's not going to transform itself either.

http://www.smbc-comics.com/comic/feeling-stupid

Melzzzzz

Dec 14, 2019, 5:17:11 PM
C++ can be quite a clean language, when well written.
>


--
press any key to continue or any other to quit...
There is nothing I enjoy as much as my status as an INVALID -- Zli Zec
We are all witnesses - about 3 years of intensive propaganda is enough
to drive a nation mad -- Zli Zec
In the Wild West there actually wasn't that much violence, precisely
because everyone was armed. -- Mladen Gogala

Melzzzzz

Dec 14, 2019, 5:19:50 PM
On 2019-12-14, Öö Tiib <oot...@hot.ee> wrote:
> On Saturday, 14 December 2019 22:01:08 UTC+2, woodb...@gmail.com wrote:
>> On Saturday, December 14, 2019 at 11:54:21 AM UTC-6, Öö Tiib wrote:
>> > On Saturday, 14 December 2019 19:19:35 UTC+2, Vir Campestris wrote:
>> > >
>> > > I care. Just the other day I was testing some code across different
>> > > compilers. The online compilers let me try half a dozen different
>> > > compilers on my code fragment. I only have GCC installed on my machine.
>> >
>> > With so complex and quickly evolving language like C++ it is good idea
>> > to have couple of different compilers installed plus some static
>> > analysis tools as well.
>>
>> It takes a few minutes to install one, and that's if everything
>> goes well. The network usage from downloading is another factor.
>
> Installing consumes very little computer resources so we can do
> everything what we want while it installs. And the price of terabyte of
> storage is so low that there are next to no reasons to uninstall
> anything ever nowadays.

I have gcc and clang and, when I want, icpp. What others are there
that are half usable? MS VC, but that one is Windows only...

>
>>
>> > When any of such clever tools (written by quite
>> > good programmers) become confused about your code then it is almost
>> > guaraneed that other person (or even you yourself few months later)
>> > will be even more confused about it.
>>
>> Compilers aren't like human intelligence imo. When someone
>> says compilers are smart I laugh. If they were smart, they
>> would have transformed themselves into services 20 years
>> ago.
>
> People who write compiler make it to translate the code as wisely
> as they would do it themselves. As the compiler writers are usually
> far smarter than its users ... like you ... that laughter is
> foolish. Your last sentence is just demagogy boasting online
> services.

Given that compilers are black boxes to some, I agree.

>
>>
>> Have you heard of the biggest ball of twine in Minnesota?
>> Weird Al wrote a song about it:
>> https://duckduckgo.com/?q=weird+al+twine+ball&t=h_&ia=videos&iax=videos&iai=Tcw326PJuDw
>>
>> This is a history of the ball from 2019:
>> https://duckduckgo.com/?q=largest+ball+twine&iax=videos&ia=videos&iai=JTGAJDdFsGw
>>
>> That's not going to transform itself either.
>
> http://www.smbc-comics.com/comic/feeling-stupid


Öö Tiib

Dec 14, 2019, 7:03:20 PM
On Sunday, 15 December 2019 00:17:11 UTC+2, Melzzzzz wrote:
> On 2019-12-14, Öö Tiib <oot...@hot.ee> wrote:
> > On Saturday, 14 December 2019 19:19:35 UTC+2, Vir Campestris wrote:
> >> On 14/12/2019 09:23, bol...@nowhere.co.uk wrote:
> >> > Who the hell cares about online C++ compilers? Anyone who uses C++ will
> >> > already have a far superior compiler on their own machine so what purpose
> >> > do they serve other than stroking the egos of the people writing them?
> >>
> >> I care. Just the other day I was testing some code across different
> >> compilers. The online compilers let me try half a dozen different
> >> compilers on my code fragment. I only have GCC installed on my machine.
> >
> > With so complex and quickly evolving language like C++ it is good idea
> > to have couple of different compilers installed plus some static
> > analysis tools as well. When any of such clever tools (written by quite
> > good programmers) become confused about your code then it is almost
> > guaraneed that other person (or even you yourself few months later)
> > will be even more confused about it.
>
> C++ can be quite clean language, written.

Sure, and with cleanly written code the tools tend to be content as
well.
But we have to be capable of reading the code of others anyway, if
for nothing else then for constructively explaining why it is bad
to use it like that. C++98 was already huge in that sense.
Then C++11 came and changed everything by quadrupling the language
with move semantics, lambdas and constexpr. In February, C++20 will
quadruple it again. I feel too old to learn it all
once again. :(

Öö Tiib

Dec 14, 2019, 7:13:17 PM
On Sunday, 15 December 2019 00:19:50 UTC+2, Melzzzzz wrote:
> On 2019-12-14, Öö Tiib <oot...@hot.ee> wrote:
> > Installing consumes very little computer resources so we can do
> > everything what we want while it installs. And the price of terabyte of
> > storage is so low that there are next to no reasons to uninstall
> > anything ever nowadays.
>
> I have gcc and clang and when I want icpp. What others are there
> that are half usable? MS VC but that one is Windows only...

I have to use some less common compilers for embedded systems.
Also I typically have several versions of the same line, because the
compilers have some breaking changes and differences in defects
between versions. Just look at their issue management. ;)

Ian Collins

Dec 14, 2019, 9:41:42 PM
It helps to keep the dementia away :)


I enjoy working with and reviewing code for younger programmers keen to
try new stuff I had missed.

--
Ian

Chris M. Thomasson

Dec 15, 2019, 1:56:07 AM
Perfect!

bol...@nowhere.co.uk

Dec 15, 2019, 4:49:45 AM
On Sat, 14 Dec 2019 22:17:00 GMT
Melzzzzz <Melz...@zzzzz.com> wrote:
>On 2019-12-14, Öö Tiib <oot...@hot.ee> wrote:
>> On Saturday, 14 December 2019 19:19:35 UTC+2, Vir Campestris wrote:
>>> On 14/12/2019 09:23, bol...@nowhere.co.uk wrote:
>>> > Who the hell cares about online C++ compilers? Anyone who uses C++ will
>>> > already have a far superior compiler on their own machine so what purpose
>>> > do they serve other than stroking the egos of the people writing them?
>>>
>>> I care. Just the other day I was testing some code across different
>>> compilers. The online compilers let me try half a dozen different
>>> compilers on my code fragment. I only have GCC installed on my machine.
>>
>> With so complex and quickly evolving language like C++ it is good idea
>> to have couple of different compilers installed plus some static
>> analysis tools as well. When any of such clever tools (written by quite
>> good programmers) become confused about your code then it is almost
>> guaraneed that other person (or even you yourself few months later)
>> will be even more confused about it.
>
>C++ can be quite clean language, written.

It can be, but often isn't. And as certain people in this group demonstrate,
some coders go out of their way to make their code obtuse and hard to
understand, for reasons that have more to do with their personalities than
with logical requirements.

bol...@nowhere.co.uk

Dec 15, 2019, 4:54:00 AM
On Sat, 14 Dec 2019 16:03:10 -0800 (PST)
Öö Tiib <oot...@hot.ee> wrote:
>On Sunday, 15 December 2019 00:17:11 UTC+2, Melzzzzz wrote:
>> C++ can be quite clean language, written.
>
>Sure, and with cleanly written code the tools tend to be content as
>well.
>But we have to be capable of reading code of others anyway if
>for nothing else then for constructively explaining why it is bad
>to use it like that. The C++98 was already huge in that sense.
>Then C++11 came and changed everything by quadrupling the language
>with move semantics, lambdas and constexpr. In February C++2020 will
>quadruple it again. I feel like too old for learning it all
>once again. :(

You're not the only one. However, no doubt I and many others will have to
learn all the extra pointless BS in C++20 simply in order to get our CVs
through the door for future job interviews. I doubt I will ever use any of
it - I hardly use anything from C++14 and so far nothing from 17 in my own
projects. In fact my last 2 projects, work and personal, were in plain C.

Melzzzzz

Dec 15, 2019, 6:12:22 AM
You just use what is convenient in practice when writing new code.
That is how I learned the new features. But I still don't use constexpr ;)

Melzzzzz

Dec 15, 2019, 6:18:54 AM
I use C only when necessary. When I have to. The syntactic sugar of C++ is
what makes me use it. And I hate macros.

Daniel

Dec 15, 2019, 9:53:18 AM
On Saturday, December 14, 2019 at 7:03:20 PM UTC-5, Öö Tiib wrote:
> The C++98 was already huge in that sense.
> Then C++11 came and changed everything by quadrupling the language
> with move semantics, lambdas and constexpr. In February C++2020 will
> quadruple it again.

And yet, it still doesn't have big decimal, big float, big integer,
std::int128, a good story for Unicode, a streams library that doesn't follow
the "massive overhead principle", and much else. There is no common language
package manager, which practically all modern languages have. In
consequence, the C++ open source offerings frequently look pale in comparison
to their counterparts in other languages that have these things, for example,
there doesn't seem to be anything in the C++ arsenal that fully compares to
Rust's serde, nor, it seems, can there be, given the lack of basic things.

Daniel

Jorgen Grahn

Dec 15, 2019, 10:35:24 AM
On Sat, 2019-12-14, Melzzzzz wrote:
> On 2019-12-14, Öö Tiib <oot...@hot.ee> wrote:
>> On Saturday, 14 December 2019 22:01:08 UTC+2, woodb...@gmail.com wrote:
>>> On Saturday, December 14, 2019 at 11:54:21 AM UTC-6, Öö Tiib wrote:
>>> > On Saturday, 14 December 2019 19:19:35 UTC+2, Vir Campestris wrote:
>>> > >
>>> > > I care. Just the other day I was testing some code across different
>>> > > compilers. The online compilers let me try half a dozen different
>>> > > compilers on my code fragment. I only have GCC installed on my machine.
>>> >
>>> > With so complex and quickly evolving language like C++ it is good idea
>>> > to have couple of different compilers installed plus some static
>>> > analysis tools as well.
>>>
>>> It takes a few minutes to install one, and that's if everything
>>> goes well. The network usage from downloading is another factor.
>>
>> Installing consumes very little computer resources so we can do
>> everything what we want while it installs. And the price of terabyte of
>> storage is so low that there are next to no reasons to uninstall
>> anything ever nowadays.
>
> I have gcc and clang and when I want icpp. What others are there
> that are half usable? MS VC but that one is Windows only...

For Unix users, I suppose what's most interesting is newer and older
versions of those you mentioned. (Although I personally rarely bother
with them, and stay away from the bleeding edge in my code, too.)

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

bol...@nowhere.co.uk

Dec 15, 2019, 11:44:58 AM
If you do cross-platform systems or network programming, you have no choice
but to use macros, even in C++17.

bol...@nowhere.co.uk

Dec 15, 2019, 11:48:56 AM
On Sun, 15 Dec 2019 06:53:07 -0800 (PST)
Daniel <daniel...@gmail.com> wrote:
>the "massive overhead principle", and much else. There is no common language
>package manager, which practically all modern languages have. In

Why does it need one? You have a directory tree for include files, and most
of the libraries are closely tied in with the OS itself so they can't be
managed by the language system. It's easy to offer a nice package manager in
a high level language like Python that hides all the nasty low level details
where it interfaces with the OS libraries, but C/C++ use the OS libraries
directly. If the package manager screws up, the whole system is fucked, not
just the compiler.


Daniel

Dec 15, 2019, 12:10:35 PM
On Sunday, December 15, 2019 at 11:48:56 AM UTC-5, bol...@nowhere.co.uk wrote:
> On Sun, 15 Dec 2019 06:53:07 -0800 (PST)
> Daniel wrote:
> >the "massive overhead principle", and much else. There is no common language
> >package manager, which practically all modern languages have. In
>
> Why does it need one?

There appears to be a demand for one; witness the fact that we have Homebrew,
Conan, vcpkg, Hunter, Buckaroo, Spack, and many others. The problem in C++ is
not that we don't have a package manager, but that we have too many, and no
one commonly accepted one.

Daniel

bol...@nowhere.co.uk

Dec 15, 2019, 1:34:07 PM
On Sun, 15 Dec 2019 09:10:23 -0800 (PST)
Daniel <daniel...@gmail.com> wrote:
>On Sunday, December 15, 2019 at 11:48:56 AM UTC-5, bol...@nowhere.co.uk wrote:
>> On Sun, 15 Dec 2019 06:53:07 -0800 (PST)
>> Daniel wrote:
>> >the "massive overhead principle", and much else. There is no common language
>> >package manager, which practically all modern languages have. In
>>
>> Why does it need one?
>
>There appears to be a demand for one, witness the fact that we have HomeBrew,
>Conan, vcpkg, Hunter, Buckaroo, Spack, and many others. The problem in C++ is

I don't know about the others, but Homebrew is a generic third-party package
manager for macOS.

>not that we don't have a package manager, but that we have too many, and no
>one commonly accepted one.

That doesn't answer the question - why does C++ need one? Perhaps on a junk
OS like Windows, where files are scattered to the four winds, it might help,
but on *nix things are pretty clean and ordered already, and you don't need a
package manager to unzip some header and library files into /usr/local.


Jorgen Grahn

Dec 15, 2019, 2:29:36 PM
On Sun, 2019-12-15, bol...@nowhere.co.uk wrote:
> On Sun, 15 Dec 2019 09:10:23 -0800 (PST)
> Daniel <daniel...@gmail.com> wrote:
>>On Sunday, December 15, 2019 at 11:48:56 AM UTC-5, bol...@nowhere.co.uk wrote:
>>> On Sun, 15 Dec 2019 06:53:07 -0800 (PST)
>>> Daniel wrote:

>>> >the "massive overhead principle", and much else. There is no
>>> >common language package manager, which practically all modern
>>> >languages have. In
>>>
>>> Why does it need one?
>>
>>There appears to be a demand for one, witness the fact that we have
>>HomeBrew, Conan, vcpkg, Hunter, Buckaroo, Spack, and many
>>others. The problem in C++ is
...
>>not that we don't have a package manager, but that we have too many, and no
>>one commonly accepted one.
>
> That doesn't answer the question - why does C++ need one? Perhaps on a junk
> OS like windows where files are scattered to the 4 winds it might help but
> on *nix things are pretty clean and ordered already and you don't need a
> package manager to unzip some header and library files into /usr/local

Actually you /do/ want a package manager, but I run Debian Stable and
don't want parallel package managers for Perl, Python, Ruby, Java etc.,
all providing software according to quality, stability and security
principles which are unclear to me.

I'm happy with C++ and C not picking sides in this regard.

Mr Flibble

Dec 15, 2019, 3:48:27 PM
You don't learn C++17 and C++20 for your fucking CV, you learn it because the improvements they bring to C++ are worth using and, thankfully, you will be forced to use them and if you don't like that you can simply fuck off elsewhere: I'm sure you will love Java instead.

/Flibble

--
"Snakes didn't evolve, instead talking snakes with legs changed into snakes." - Rick C. Hodgin

“You won’t burn in hell. But be nice anyway.” – Ricky Gervais

“I see Atheists are fighting and killing each other again, over who doesn’t believe in any God the most. Oh, no..wait.. that never happens.” – Ricky Gervais

"Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Byrne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say."

Melzzzzz

Dec 15, 2019, 3:49:53 PM
I can use macros but I don't write them...

Chris M. Thomasson

Dec 15, 2019, 6:12:01 PM
On 12/15/2019 12:49 PM, Melzzzzz wrote:
> On 2019-12-15, bol...@nowhere.co.uk <bol...@nowhere.co.uk> wrote:
>> On Sun, 15 Dec 2019 11:18:43 GMT
>> Melzzzzz <Melz...@zzzzz.com> wrote:
>>> On 2019-12-15, bol...@nowhere.co.uk <bol...@nowhere.co.uk> wrote:
[...]
>>> I use C only when necessary. When I have to. syntactic sugar of C++ is
>>> what makes me use it. And I hate macros.
>>
>> If you do cross platform systems or network programming you have no choice but
>> to use macros even in C++17.
>
> I can use macros but I don't write them...

Have you ever experienced the Boost preprocessor macros? Iirc, it was
the Chaos lib way back. Some hardcore shi%.

;^)

Melzzzzz

Dec 15, 2019, 6:28:14 PM
Heh, I never used boost as well :P

Öö Tiib

Dec 15, 2019, 6:41:29 PM
On Monday, 16 December 2019 01:28:14 UTC+2, Melzzzzz wrote:
> On 2019-12-15, Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
> > On 12/15/2019 12:49 PM, Melzzzzz wrote:
> >> On 2019-12-15, bol...@nowhere.co.uk <bol...@nowhere.co.uk> wrote:
> >>> On Sun, 15 Dec 2019 11:18:43 GMT
> >>> Melzzzzz <Melz...@zzzzz.com> wrote:
> >>>> On 2019-12-15, bol...@nowhere.co.uk <bol...@nowhere.co.uk> wrote:
> > [...]
> >>>> I use C only when necessary. When I have to. syntactic sugar of C++ is
> >>>> what makes me use it. And I hate macros.
> >>>
> >>> If you do cross platform systems or network programming you have no choice but
> >>> to use macros even in C++17.
> >>
> >> I can use macros but I don't write them...
> >
> > Have you ever experienced the Boost preprocessor macros? Iirc, it was
> > the Chaos lib way back. Some hardcore shi%.
> >
> > ;^)
>
> Heh, I never used boost as well :P

For the last 15 years it has been almost inevitable to use something from
boost in a bigger project. The other way is to write precisely the same
thing yourself. Did you have myFilesystem, myOptional and myVariant before
C++17?

James Kuyper

Dec 15, 2019, 11:38:20 PM
On 12/15/19 9:53 AM, Daniel wrote:
...
> And yet, it still doesn't have ...
> std::int128,

I'm curious: how would std::int128 differ from std::int128_t, which has
been standardized? It is optional, just like all the other fixed-size
typedefs. Like those other typedefs, if the type is not supported, that
identifier and the corresponding macros such as INT128_MAX may not be
used for any conflicting purpose.
What more would you want of std::int128? Making it mandatory? That won't
work any better than making std::int16_t mandatory - C++'s mandate is to
be implementable almost everywhere, including platforms on which
std::int16_t (much less std::int128_t) would not be implementable.

Melzzzzz

Dec 16, 2019, 12:18:47 AM
On 2019-12-15, Öö Tiib <oot...@hot.ee> wrote:
Yes.

Öö Tiib

Dec 16, 2019, 12:23:39 AM
Maybe it is not about the standard but about compiler makers who provide
128-bit integers that are not conformant with std::int128_t.

Daniel

Dec 16, 2019, 12:24:37 AM
On Sunday, December 15, 2019 at 11:38:20 PM UTC-5, James Kuyper wrote:
>
> std::int128_t ... has been standardized

I wasn't aware of that. Can you provide a reference?

Thanks,
Daniel

Daniel

Dec 16, 2019, 12:36:32 AM
On Monday, December 16, 2019 at 12:23:39 AM UTC-5, Öö Tiib wrote:
>
> May be it is not about standard but about compiler makers who provide
> 128 bit integers that are not conformant with std::int128_t.

I don't know of any compilers that have std::int128_t. gcc and clang have
__int128, but std::numeric_limits<__int128> is only defined if compiled
with e.g. -std=gnu++11 and not -std=c++11.
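
A minimal sketch of that behaviour, assuming gcc or clang (where
__int128 is a non-standard extension):

    #include <iostream>
    #include <limits>

    int main() {
        // With -std=gnu++11, libstdc++ specializes numeric_limits for
        // __int128 and this prints 127; with strict -std=c++11 the
        // primary template applies and it prints 0.
        std::cout << std::numeric_limits<__int128>::digits << '\n';
    }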

Daniel

Öö Tiib

Dec 16, 2019, 12:43:50 AM
Neither do I. All compilers implement 128-bit integers as
non-standard extensions.

David Brown

Dec 16, 2019, 3:35:23 AM
Boost has always pushed the limits of what can be done with C++. And
when it seems they have pushed it to something useful, but the
user experience or the implementation is unpleasant (such as needing
"hardcore macros"), the C++ powers-that-be look at what they need to do
to the language to give users roughly the same results, but with less
unpleasantness. This is one of the driving forces behind the evolution
of C++.


David Brown

Dec 16, 2019, 3:44:12 AM
There are technical - but currently unavoidable - reasons why compilers
don't provide a full 128-bit integer type. One of these is that if
64-bit gcc were to provide a 128-bit extended integer type (using the C
and C++ technical term "extended integer type"), then "intmax_t" would
have to be that type rather than "long long int". This would have a
knock-on effect on ABIs, libraries, headers, etc., which would be out
of the question. Other complications involve support for literals of
the type. (That would open more challenges with user-defined literals -
you'd need language support for operator "" X(unsigned __int128) as well.)

Compilers are free to implement a type that is mostly like a 128-bit
integer type. But it can't be classified as an "extended integer type".
And thus it can't be added as int128_t and uint128_t.

Hopefully the C and/or C++ folks will figure out a way to improve this
deadlock, but this is not something the compiler writers can fix by
themselves. The nearest they can get is collaborating on the name of
the almost-integer-type __int128.

bol...@nowhere.co.uk

Dec 16, 2019, 4:55:25 AM
Do try growing up at some point.

bol...@nowhere.co.uk

Dec 16, 2019, 4:57:36 AM
Optional and variant are both solutions looking for a problem. Just more noise
added to the language to keep a tiny number of purists happy.

Bo Persson

Dec 16, 2019, 6:13:42 AM
The *meaning* of intN_t is standardized for all N. Still optional though
(with special rules for N = 8, 16, 32, and 64).


Bo Persson

Bart

Dec 16, 2019, 7:10:45 AM
On 16/12/2019 08:44, David Brown wrote:
> On 16/12/2019 06:43, Öö Tiib wrote:
>> On Monday, 16 December 2019 07:36:32 UTC+2, Daniel wrote:
>>> On Monday, December 16, 2019 at 12:23:39 AM UTC-5, Öö Tiib wrote:
>>>>
>>>> May be it is not about standard but about compiler makers who provide
>>>> 128 bit integers that are not conformant with std::int128_t.
>>>
>>> I don't know any compilers that have std::128_t. gcc and clang have __int128,
>>> but std::numeric_limits<__int128> is only defined if compiled with e.g.
>>> -std=gnu++11 and not -std=c++11.
>>
>> So don't I know of any. All compilers implement 128 bit integers as
>> non-standard extensions.
>>
>
> There are technical - but currently unavoidable - reasons why compilers
> don't provide a full 128-bit integer type. One of these is that if
> 64-bit gcc were to provide a 128-bit extended integer type (using the C
> and C++ technical term "extended integer type"), then "maxint_t" would
> have to be that type rather than "long long int". This would have a
> knock-on effect on ABI's, libraries, headers, etc., which would be out
> of the question. Other complications involve support for literals of
> the type.

You mean that even when int128 types are built-in, there is no support
for 128-bit constants? And presumably not for output either, according
to my tests with gcc and g++:

__int128_t a;
a=170141183460469231731687303715884105727;
std::cout << a;

The constant overflows, and if I skip that part, I get pages of errors
when trying to print it. On gcc, I have no idea what printf format to use.

This is frankly astonishing with such a big and important language, and
with comprehensive compilers such as gcc and g++. Even my own 'toy'
language can do this:

int128 a := 170'141'183'460'469'231'731'687'303'715'884'105'727

println a
println a:"hs'"
println word128.maxvalue # (ie. uint128)

Output is:

170141183460469231731687303715884105727
7fff'ffff'ffff'ffff'ffff'ffff'ffff'ffff
340282366920938463463374607431768211455

What exactly is the problem with allowing 128-bit constants at least?

(My implementation is by no means complete (eg. 128/128 is missing, it
does 128/64), but it can do most of the basics and I just need to get
around to filling in the gaps.

It will also have arbitrary precision decimal integers /and/ floats
(another thing that was mentioned as missing from core C++), which I'm
working on this month. That will have constant and output support too:

a := 123.456e1000'000L
println a

So mere 128-bit integer-only support is trivial!)

> (That would open more challenges with user-defined literals -
> you'd need language support for operator "" X(unsigned __int128) as well.)

But this would be just a natural extension? I assume that, whatever that
means, there is already X(uint64_t). It is not a totally new feature
or concept to add.

(And if your example means there is no '__uint128', then that's another
incredibly easy thing to have fixed!)

James Kuyper

Dec 16, 2019, 7:17:23 AM
There's nothing that defining std::int128 would do to solve that problem
unless the standard said something different about std::int128 than it
currently says about std::int128_t. So what would you want it to say
differently?

James Kuyper

Dec 16, 2019, 7:25:36 AM
On 12/16/19 3:44 AM, David Brown wrote:
...
> There are technical - but currently unavoidable - reasons why compilers
> don't provide a full 128-bit integer type. One of these is that if
> 64-bit gcc were to provide a 128-bit extended integer type (using the C
> and C++ technical term "extended integer type"), then "maxint_t" would
> have to be that type rather than "long long int". This would have a
> knock-on effect on ABI's, libraries, headers, etc., which would be out
> of the question. Other complications involve support for literals of
> the type. (That would open more challenges with user-defined literals -
> you'd need language support for operator "" X(unsigned __int128) as well.)
>
> Compilers are free to implement a type that is mostly like a 128-bit
> integer type. But it can't be classified as an "extended integer type".
> And thus it can't be added as int128_t and uint128_t.
>
> Hopefully the C and/or C++ folks will figure out a way to improve this
> deadlock, but this is not something the compiler writers can fix by
> themselves. The nearest they can get is collaborating on the name of
> the almost-integer-type __int128.

The fundamental problem is that people didn't want to implement
[u]intmax_t as intended: the largest supported integer type, a type that
must necessarily change every time that the list of supported integer
types expands to include a larger type. Instead, for some reason, they
wanted to use it as a synonym for int64_t. Why they didn't use int64_t
as the type for such interfaces, I don't know. This was in fact
predicted before the proposal to add [u]intmax_t to the C standard was
approved: someone (I think it was Doug Gwyn) predicted that someday we
would need to add something like
really_really_max_int_type_this_time_we_really_mean_it - and then people
would get into the habit of assuming that
really_really_max_int_type_this_time_we_really_mean_it was just a
synonym for int128_t, and they'd have to do it all over again. I suspect
he's right.

James Kuyper

Dec 16, 2019, 7:44:38 AM
As with most of those parts of the C++ library that correspond to the C
standard library, the C++ standard says very little itself about
<stdint.h>, <inttypes.h>, <cstdint> and <cinttypes> (C++ 27.9.2p3,4 is
the main exception, and not relevant to this discussion). Instead, it
simply incorporates the corresponding wording from the C standard by
reference (C++ 17.6.1.2p4).

The relevant words are in the C standard.

First of all:

"For each type described herein that the implementation provides, 254)
<stdint.h> shall declare that typedef name and define the associated
macros. Conversely, for each type described herein that the
implementation does not provide, <stdint.h> shall not declare that
typedef name nor shall it define the associated macros. An
implementation shall provide those types described as ‘‘required’’, but
need not provide any of the others (described as ‘‘optional’’)."
(C 7.20.1p4)

Support for [u]int_leastN_t (7.20.1.2p3) and [u]int_fastN_t (7.20.1.3p3)
is mandatory for N==8, 16, 32, and 64. Support for [u]intN_t is
mandatory for those same values, but only if an implementation provides
integer types with those sizes, no padding bits, and 2's complement
representation for the signed versions of those types (7.20.1.1p3). All
an implementation needs to do to justify not providing int64_t is to not
provide such a 64-bit type. For all other values of N, support is
optional - but per 7.20.1p4, the relevant identifiers cannot be used for
any conflicting purpose by the implementation - and that is the sense in
which I say that it has been standardized.
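
A small sketch of how that optionality plays out in code (illustrative
only; INT128_MAX is a hypothetical macro name - no implementation I know
of provides it):

    #include <cstdint>
    #include <iostream>

    int main() {
    #if defined(INT64_MAX)
        // The exact-width typedef and its macros come together, per 7.20.1p4.
        std::cout << "int64_t max = " << INT64_MAX << '\n';
    #endif
    #if defined(INT128_MAX)
        std::cout << "a 128-bit exact-width type is declared\n";
    #else
        std::cout << "no 128-bit exact-width type in <cstdint>\n";
    #endif
    }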

Bo Persson

Dec 16, 2019, 8:13:26 AM
When C was first standardized we still had some mainframes that used
36-bit ones' complement. On those, intmax_t could be 72 bits and int64_t
was just not available.


> This was in fact
> predicted before the proposal to add [u]intmax_t to the C standard was
> approved: someone (I think it was Doug Gwynn) predicted that someday we
> would need to add something like
> really_really_max_int_type_this_time_we_really_mean_it - and then people
> would get into the habit of assuming that
> really_really_max_int_type_this_time_we_really_mean_it was just a
> synonym for int128_t, and they'd have to do it all over again. I suspect
> he's right.
>

We never learn, of course. :-)


Bo Persson

David Brown

Dec 16, 2019, 10:21:36 AM
There is no printf support for 128-bit integers, except for systems
where "long long" is 128-bit. (Or, hypothetically, 128-bit "long",
"int", "short" or "char".) As you know full well, gcc does not have
printf, since gcc is a compiler and printf is from the standard library.
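
For what it's worth, the usual workaround is to format the value by
hand. A sketch, assuming gcc or clang's __int128 extension:

    #include <iostream>
    #include <string>

    static std::string to_string_u128(unsigned __int128 v) {
        if (v == 0) return "0";
        std::string s;
        while (v != 0) {
            s.insert(s.begin(), char('0' + int(v % 10)));  // low digit first
            v /= 10;
        }
        return s;
    }

    int main() {
        unsigned __int128 x = (unsigned __int128)0xFFFFFFFFFFFFFFFFull * 10;
        std::cout << to_string_u128(x) << '\n';  // 184467440737095516150
    }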

> This is frankly astonishing with such a big and important language, and
> with comprehensive compilers such as gcc and g++. Even my own 'toy'
> language can do this:

It is only astonishing if you don't understand what it means for a
language to have a standard. It's easy to do this with a toy language
(and that is an advantage of toy languages). It would not be
particularly difficult for the gcc or clang developers to support it
either. But the ecosystem around C is huge - you can't change the
standard in a way that breaks compatibility, which is what would have to
happen to make these types full integer types.

I am not convinced that there is any great reason for having full
support for 128-bit types here - how often do you really need them?
There are plenty of use-cases for big integers, such as in cryptography,
but then 128-bit is not nearly big enough. And for situations where
128-bit integers are useful, how often do you need literals of that size?

With C++, it would not be difficult to put together a class that acts
like a 128-bit integer type for most purposes - including support for
literals and std::cout. Consider it a challenge to test your C++ knowledge.
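
To make that concrete, here is a minimal sketch of such a class - not a
full implementation (decimal literals in, hex out, addition only), just
enough to show the user-defined literal and std::cout support:

    #include <cstdint>
    #include <iostream>
    #include <stdexcept>
    #include <string>

    struct u128 {
        std::uint64_t hi = 0, lo = 0;

        friend u128 operator+(u128 a, u128 b) {
            u128 r;
            r.lo = a.lo + b.lo;
            r.hi = a.hi + b.hi + (r.lo < a.lo);  // carry from the low half
            return r;
        }
        // Compute *this * 10 + d using only additions; enough for parsing.
        u128 times10_plus(unsigned d) const {
            u128 x2 = *this + *this;      // 2v
            u128 x4 = x2 + x2;            // 4v
            u128 x8 = x4 + x4;            // 8v
            return x8 + x2 + u128{0, d};  // 10v + d
        }
    };

    // Raw literal operator: the digits arrive as a string, so values
    // above 2^64-1 never have to pass through unsigned long long.
    u128 operator""_u128(const char* s) {
        u128 v;
        for (; *s; ++s) {
            if (*s == '\'') continue;  // allow digit separators
            if (*s < '0' || *s > '9')
                throw std::invalid_argument("decimal digits only");
            v = v.times10_plus(unsigned(*s - '0'));
        }
        return v;
    }

    // Hex output keeps the sketch short; decimal would need division.
    std::ostream& operator<<(std::ostream& os, u128 v) {
        std::string s = "0x";
        for (int i = 31; i >= 0; --i) {  // nibbles, most significant first
            std::uint64_t half = (i >= 16) ? v.hi : v.lo;
            s += "0123456789abcdef"[(half >> (4 * (i % 16))) & 0xF];
        }
        return os << s;
    }

    int main() {
        auto a = 170141183460469231731687303715884105727_u128;  // 2^127-1
        std::cout << a << '\n';  // 0x7fffffffffffffffffffffffffffffff
    }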

>
>     int128 a := 170'141'183'460'469'231'731'687'303'715'884'105'727
>
>     println a
>     println a:"hs'"
>     println word128.maxvalue             # (ie. uint128)
>
> Output is:
>
>     170141183460469231731687303715884105727
>     7fff'ffff'ffff'ffff'ffff'ffff'ffff'ffff
>     340282366920938463463374607431768211455
>
> What exactly is the problem with allowing 128-bit constants at least?
>
> (My implementation is by no means complete (eg. 128/128 is missing, it
> does 128/64), but it can do most of the basics and I just need to get
> around to filling in the gaps.
>
> It will also have arbitrary precision decimal integers /and/ floats
> (another thing that was mentioned as missing from core C++), which I'm
> working on this month. That will have constant and output support too:
>
>    a := 123.456e1000'000L
>    println a
>
> So mere 128-bit integer-only support is trivial!)

Arbitrary precision arithmetic is, I think, a lot more useful than
128-bit integers. But it would be well outside the scope of C. It
would be more reasonable to implement it in a C++ library, but it
is questionable whether it makes sense for the standard library. After
all, there are many ways to implement arbitrary precision arithmetic
with significantly different trade-offs in terms of speed, run-time
memory efficiency, code space efficiency, and how these match with the
type of usage you want for them.

>
>> (That would open more challenges with user-defined literals -
>> you'd need language support for operator "" X(unsigned __int128) as
>> well.)
>
> But this would be just a natural extension? I assume that whatever that
> means, there is already X(uint64_t). There is not a totally new feature
> or concept to add.
>

<https://en.cppreference.com/w/cpp/language/user_literal>

David Brown

Dec 16, 2019, 10:32:34 AM
I suspect that at the time, no one really thought about the possibility
of 128-bit integers. After all, 64-bit integers were new, and 64 bits
should be enough for anybody!

Personally, I think the whole "intmax_t" concept was a mistake. But
it's easier to see that kind of thing afterwards.

"intmax_t" is defined (roughly) as being the biggest integer type. I
wonder if the definition could be changed in the standards to apply only
to being the biggest /standard/ integer type? That would leave the door
open for /extended/ integer types that are larger. As far as I have
know, compilers that actually implement extended integer types are rare
or non-existent, so this could be done without breaking existing code or
tools.

Robert Wessel

Dec 16, 2019, 11:54:17 AM
Trivially, arbitrary precision arithmetic is more useful than 128-bit
integers, in that the format can do everything the latter can do, and
more.

On the other hand, a proper bignum library will almost always carry a
substantial performance penalty compared to fixed 128-bit integers.
Consider that on x86-64 a typical add/subtract/compare reduces to two
instructions (ignoring data movement), and a multiply to about three
multiplies and two adds (division being the usual PITA).

On the third hand, 128-bit integers allow a number of things that
64-bit integers don't - a useful type in which to perform currency
calculations, usefully wide (and precise) times, and the like.

Personally I think both 128-bit types and arbitrary precision
arithmetic should be part of the language (whether in the language
proper or the library).

Öö Tiib

Dec 16, 2019, 1:21:40 PM
Nonsense. The problems are the usual ones: performance and reliability.
Union is a rather good performance optimization in decent hands, variant
is an easy-to-use, reliable union, and optional is a variant between a
value and nothing. So the "purist" who can wield such trivial tools
typically beats the sorry "latitudinarian" who can't.
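
To ground that a little, a short C++17 sketch of both tools doing the
jobs just described - variant as a type-safe union, optional as "value
or nothing":

    #include <cstddef>
    #include <iostream>
    #include <optional>
    #include <string>
    #include <variant>

    // A token is either a number or a word; variant enforces which.
    using Token = std::variant<double, std::string>;

    // Parsing can fail; optional makes "no result" explicit instead of
    // relying on a magic sentinel value.
    std::optional<double> parse_number(const std::string& s) {
        try {
            std::size_t pos = 0;
            double d = std::stod(s, &pos);
            if (pos == s.size()) return d;
        } catch (...) {}
        return std::nullopt;
    }

    Token lex(const std::string& s) {
        if (auto n = parse_number(s)) return *n;
        return s;
    }

    int main() {
        for (const std::string s : {"3.14", "pi"}) {
            Token t = lex(s);
            if (std::holds_alternative<double>(t))
                std::cout << "number: " << std::get<double>(t) << '\n';
            else
                std::cout << "word: " << std::get<std::string>(t) << '\n';
        }
    }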

Öö Tiib

Dec 16, 2019, 1:46:43 PM
On Monday, 16 December 2019 07:18:47 UTC+2, Melzzzzz wrote:
> On 2019-12-15, Öö Tiib <oot...@hot.ee> wrote:
> > On Monday, 16 December 2019 01:28:14 UTC+2, Melzzzzz wrote:
> >> On 2019-12-15, Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
> >> > On 12/15/2019 12:49 PM, Melzzzzz wrote:
> >> >> On 2019-12-15, bol...@nowhere.co.uk <bol...@nowhere.co.uk> wrote:
> >> >>> On Sun, 15 Dec 2019 11:18:43 GMT
> >> >>> Melzzzzz <Melz...@zzzzz.com> wrote:
> >> >>>> On 2019-12-15, bol...@nowhere.co.uk <bol...@nowhere.co.uk> wrote:
> >> > [...]
> >> >>>> I use C only when necessary. When I have to. syntactic sugar of C++ is
> >> >>>> what makes me use it. And I hate macros.
> >> >>>
> >> >>> If you do cross platform systems or network programming you have no choice but
> >> >>> to use macros even in C++17.
> >> >>
> >> >> I can use macros but I don't write them...
> >> >
> >> > Have you ever experienced the Boost preprocessor macros? Iirc, it was
> >> > the Chaos lib way back. Some hardcore shi%.
> >> >
> >> > ;^)
> >>
> >> Heh, I never used boost as well :P
> >
> > Last 15 years it has been almost inevitable to use something from boost
> > in bigger project. Other way is to write precisely same thing yourself.
> > Did you have myFilesystem, myOptional and myVariant before C++17?
>
> Yes.

Variant and filesystem felt too boring and complicated to
write myself, so I always used either some platform-specific
or some portable library (like Qt, or boost) solution. Some "optional"
or "fallible" is simpler to write, but why bother when there is
boost already?

Keith Thompson

Dec 16, 2019, 1:47:53 PM
Robert Wessel <robert...@yahoo.com> writes:
[...]
> Personally I think both 128-bit types and arbitrary precision
> arithmetic should be part of the language (whether in the language
> proper or the library).

128-bit integer types are already permitted by the C++ standard.
But adding a standard (extended or not) 128-bit integer type would
conflict with existing ABIs and libraries due to the effect on
[u]intmax_t.

I don't think making 128-bit integers mandatory would be practical,
given the difficulty (and questionable usefulness) of providing
them on small embedded systems. Note that gcc supports __int128,
but not on all targets.

--
Keith Thompson (The_Other_Keith) Keith.S.T...@gmail.com
[Note updated email address]
Working, but not speaking, for Philips Healthcare
void Void(void) { Void(); } /* The recursive call of the void */

Keith Thompson

Dec 16, 2019, 1:52:29 PM
David Brown <david...@hesbynett.no> writes:
[...]
> Personally, I think the whole "intmax_t" concept was a mistake. But
> it's easier to see that kind of thing afterwards.
[...]

Personally, I think intmax_t is useful (perhaps more in C than in C++).
For example:

some_integer_type x = some_value;
printf("x = %jd\n", (intmax_t)x);

The zoo of ugly format macros in <cinttypes> doesn't cover all
possible cases.

Of course in C++ you can just write:

std::cout << "x = " << x << "\n";

Öö Tiib

Dec 16, 2019, 1:56:23 PM
Sure it was; it did not take into account how dimwittedly shortsighted
humans (even the brightest engineers) are, and will forever be.

> "intmax_t" is defined (roughly) as being the biggest integer type. I
> wonder if the definition could be changed in the standards to apply only
> to being the biggest /standard/ integer type? That would leave the door
> open for /extended/ integer types that are larger. As far as I have
> know, compilers that actually implement extended integer types are rare
> or non-existent, so this could be done without breaking existing code or
> tools.

Non-existent, at least in the case of C++.

Öö Tiib

Dec 16, 2019, 2:19:38 PM
On Monday, 16 December 2019 20:47:53 UTC+2, Keith Thompson wrote:
> Robert Wessel <robert...@yahoo.com> writes:
> [...]
> > Personally I think both 128-bit types and arbitrary precision
> > arithmetic should be part of the language (whether in the language
> > proper or the library).
>
> 128-bit integer types are already permitted by the C++ standard.
> But adding a standard (extended or not) 128-bit integer type would
> conflict with existing ABIs and libraries due to the effect on
> [u]intmax_t.

Sure, when humans make an ABI they make it non-extendable, immutable
and carved into rock? I don't buy it. Didn't we evolve past
troglodyte apes some time ago?

> I don't think making 128-bit integers mandatory would be practical,
> given the difficulty (and questionable usefulness) of providing
> them on small embedded systems. Note that gcc supports __int128,
> but not on all targets.

Not mandatory. Even int_least128_t need not be mandatory ... but just
adding some __int128 that is only barely good enough, so that evil
monster companies can implement polished versions in their Javas, C#s,
Gos and Swifts, is somewhat insulting. Maybe we C and C++ programmers
should give those fat fuckers a two-month strike after February,
world-wide. ;)

Daniel

Dec 16, 2019, 2:56:33 PM
On Monday, December 16, 2019 at 10:21:36 AM UTC-5, David Brown wrote:
>
> With C++, it would not be difficult to put together a class that acts
> like a 128-bit integer type for most purposes - including support for
> literals and std::cout. Consider it a challenge to test your C++ knowledge.
>
That's one perspective, that of a lone C++ developer or team solving a
particular problem, with the tools at hand, using a home wrapped 128-bit
integer type here, some gcc extension there, and so on. Reading David's posts,
it often sounds like that is all of C++. But it's not, that's only part
of C++, and may not be the most important part.

Generally speaking, open source libraries have come to provide the missing
features in the C++ standard, some supported by vendors such as Microsoft and
Google, some by individuals, and of course boost. Lord knows, the supply of
free software, even of a very high quality, vastly exceeds the demand. The
ability of open source to provide clean API's, though, depends on the coverage
of basic types that are standard. There is no point, for example, for a
database vendor to ship an API with a custom big decimal class. Consequently,
most C++ API's have to be content with binding things like big integers and
big decimals and big floats and int128 to strings, and leave it up to the user
to convert the strings into some extension type supplied by their compiler, or
some other things. Contrast that with practically all other modern languages
with richer type systems, where it is straightforward to provide libraries
that implement API's for standard things such as CBOR or SQL that bind
directly into language types.

Daniel

David Brown

Dec 16, 2019, 3:15:14 PM
On 16/12/2019 19:47, Keith Thompson wrote:
> Robert Wessel <robert...@yahoo.com> writes:
> [...]
>> Personally I think both 128-bit types and arbitrary precision
>> arithmetic should be part of the language (whether in the language
>> proper or the library).
>
> 128-bit integer types are already permitted by the C++ standard.
> But adding a standard (extended or not) 128-bit integer type would
> conflict with existing ABIs and libraries due to the effect on
> [u]intmax_t.
>
> I don't think making 128-bit integers mandatory would be practical,
> given the difficulty (and questionable usefulness) of providing
> them on small embedded systems. Note that gcc supports __int128,
> but not on all targets.
>

gcc supports 128-bit integers on 64-bit systems, which is not
unreasonable. Support for "double size" integers can be done fairly
simply and efficiently. I think it would be fine to support them on
32-bit systems too, but it's more effort than it's worth to ask
compilers to support 128-bit types on 8-bit processors!

I'd like to see some standard way (with standard names) to have 128-bit
integer types without increasing intmax_t. I don't think there would be
a need to make them mandatory - compilers would support them where they
are practical and useful.

David Brown

Dec 16, 2019, 3:19:06 PM
On 16/12/2019 20:56, Daniel wrote:
> On Monday, December 16, 2019 at 10:21:36 AM UTC-5, David Brown wrote:
>>
>> With C++, it would not be difficult to put together a class that acts
>> like a 128-bit integer type for most purposes - including support for
>> literals and std::cout. Consider it a challenge to test your C++ knowledge.
>>
> That's one perspective, that of a lone C++ developer or team solving a
> particular problem, with the tools at hand, using a home wrapped 128-bit
> integer type here, some gcc extension there, and so on. Reading David's posts,
> it often sounds like that is all of C++. But it's not, that's only part
> of C++, and may not be the most important part.

Of course it is not the most important part of C++. And I think it
would be nice if a 128-bit type was standardised in the C++ library so
that you don't have to write your own (or use a third-party library -
surely Boost has one).

The point is merely that you /can/ write such a class in C++, and it
will work fine as a 128-bit integer for most uses. And writing such a
class can be an interesting exercise - I think it might open Bart's eyes
to some of the things you can do with C++.

David Brown

Dec 16, 2019, 3:25:02 PM
On 16/12/2019 19:52, Keith Thompson wrote:
> David Brown <david...@hesbynett.no> writes:
> [...]
>> Personally, I think the whole "intmax_t" concept was a mistake. But
>> it's easier to see that kind of thing afterwards.
> [...]
>
> Personally, I think intmax_t is useful (perhaps more in C than in C++).
> For example:
>
> some_integer_type x = some_value;
> printf("x = %jd\n", (intmax_t)x);
>

Fair enough. In my coding, I invariably know the sizes I am dealing
with (and I usually use the <stdint.h> or <cstdint> types). So it is
good to hear the opinions of people who do other kinds of programming.

> The zoo of ugly format macros in <cinttypes> doesn't cover all
> possible cases.
>

Indeed.

> Of course in C++ you can just write:
>
> std::cout << "x = " << x << "\n";
>

Yes.

In C, you can use _Generic to handle this, but it is not as easily
extended as std::cout.

Ben Bacarisse

Dec 16, 2019, 3:45:47 PM
I disagree. See below for at least one reason.

> "intmax_t" is defined (roughly) as being the biggest integer type. I
> wonder if the definition could be changed in the standards to apply only
> to being the biggest /standard/ integer type? That would leave the door
> open for /extended/ integer types that are larger. As far as I have
> know, compilers that actually implement extended integer types are rare
> or non-existent, so this could be done without breaking existing code or
> tools.

This puzzles me. gcc (I am talking mainly about C here as I don't know
the C++ standard well enough) implements extended integer types (like
__int128) but not in conforming mode because that would entail making
intmax_t 128 bits wide. Did you intend a rather narrower remark that
compilers that implement extended integer types in conforming mode are
rare or non-existent?

intmax_t is useful, in part because it solves the printf problem. You
can portably print any integer i using

printf("%jd\n", (intmax_t)i);

whether i is an extended type like __int128 or even some weird 99-bit
integer type.

--
Ben.

David Brown

Dec 16, 2019, 4:00:51 PM
gcc supports 128-bit types even with "-std=c11 -pedantic" (or other C or
C++ standards), but requires "__extension__" to avoid a warning:

__extension__ typedef __int128 int128;

int foo(void) {
return sizeof(int128);
}


__int128 is a gcc extension, and is treated as an "integer scalar type",
but it is /not/ an "extended integer type" as defined by the C
standards. That applies whether you are in conforming modes or not.

<https://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html>
"""
GCC does not support any extended integer types.
"""


> intmax_t is useful, in part because it solves the printf problem. You
> can portably print any integer i using
>
> printf("%ld\n", (intmax_t)i);
>
> whether i is an extended type like __int128 or even some weird 99-bit
> integer type.
>

Yes, Keith mentioned that use, which I had not thought about (it's not
something that turns up in my kind of coding).

Bart

Dec 16, 2019, 4:03:36 PM
On 16/12/2019 15:21, David Brown wrote:
> On 16/12/2019 13:10, Bart wrote:

>> You mean that even when int128 types are built-in, there is no support
>> for 128-bit constants? And presumably not for output either, according
>> to my tests with gcc and g++:
>>
>>     __int128_t a;
>>     a=170141183460469231731687303715884105727;
>>     std::cout << a;
>>
>> The constant overflows, and if I skip that part, I get pages of errors
>> when trying to print it. On gcc, I have no idea what printf format to use.
>>
>
> There is no printf support for 128-bit integers, except for systems
> where "long long" is 128-bit. (Or, hypothetically, 128-bit "long",
> "int", "short" or "char".) As you know full well, gcc does not have
> printf, since gcc is a compiler and printf is from the standard library.

So? printf is supposed to reflect the available primitive types. If a
compiler is extended to 128 bits, then printf should support that too.
How it does that is not the concern of the programmer.

(However printf currently doesn't even directly support int64_t, so
don't hold your breath for int128.)

>> This is frankly astonishing with such a big and important language, and
>> with comprehensive compilers such as gcc and g++. Even my own 'toy'
>> language can do this:
>
> It is only astonishing if you don't understand what it means for a
> language to have a standard. It's easy to do this with a toy language
> (and that is an advantage of toy languages). It would not be
> particularly difficult for the gcc or clang developers to support it
> either. But the ecosystem around C is huge - you can't change the
> standard in a way that breaks compatibility, which is what would have to
> happen to make these types full integer types.

What exactly would be the problem in supporting an integer constant
bigger than 2**64-1? This part is inside the compiler, not the library,
so if a type bigger than 2**64-1 exists, then the constant can be of
that type.

> I am not convinced that there is any great reason for having full
> support for 128-bit types here - how often do you really need them?

A few weeks ago there was a thread on clc that made use of 128-bit
numbers to create perfect hashes of words in a dictionary. And the
longest word in the dictionary, with the correct ordering of prime
numbers, would just fit into 128 bits.
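
Presumably something along these lines (a reconstruction of the idea,
not the code from that thread; assumes gcc or clang's __int128): map
each letter to a prime and multiply, so anagrams get the same key and
long words overflow 64 bits:

    #include <iostream>
    #include <string>

    unsigned __int128 word_key(const std::string& w) {
        static const unsigned primes[26] = {
            2,  3,  5,  7,  11, 13, 17, 19, 23, 29, 31, 37, 41,
            43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101};
        unsigned __int128 k = 1;
        for (char c : w)
            if (c >= 'a' && c <= 'z') k *= primes[c - 'a'];
        return k;
    }

    int main() {
        // Anagrams multiply the same primes, so their keys match.
        std::cout << (word_key("listen") == word_key("silent")) << '\n';  // 1
    }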

Anyway, there seems to have long been a tradition in programming
languages with a word-sized integer type, to also provide a
double-word-sized too.

So 32-bit ints were available on 16-bit hardware; and 64-bit on 32-bit
machines. Since most machines now are 64-bit (other than your specialist
processors), why not have a 128-bit type? Even my rubbish compiler can
turn this:

int128 a,b,c
a := b+c

into (Dn are 64-bit registers):

mov D0, [b]
mov D1, [b+8]
mov D2, [c]
mov D3, [c+8]
add D0, D2
adc D1, D3
mov [a], D0
mov [a+8], D1
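
For comparison, a minimal sketch of the same operation using gcc's
__int128 extension (the generated x86-64 code is a similar add/adc pair):

    __extension__ typedef unsigned __int128 u128;

    // b + c becomes a 64-bit add followed by adc for the high halves
    u128 add128(u128 b, u128 c)
    {
        return b + c;
    }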


> There are plenty of use-cases for big integers, such as in cryptography,
> but then 128-bit is not nearly big enough. And for situations where
> 128-bit integers are useful, how often do you need literals of that size?

That would be like creating an arbitrary precision library without
having a way to write arbitrarily long constants. They can occur either
in input data (needing support via scanf, atol etc) or in
machine-generated code or {...} data.

Doing it via string conversions is crass.

>> So mere 128-bit integer-only support is trivial!)

> Arbitrary precision arithmetic is, I think, a lot more useful than
> 128-bit integers.

As I think Robert mentioned, it can have considerably more overheads,
especially in my implementation. So a := b+c might involve executing
many hundreds of instructions involving function calls, loops, and
memory allocation; compare that with the code above.

Mr Flibble

unread,
Dec 16, 2019, 4:13:35 PM12/16/19
to
Wrong! You really are an ignorant twit.

/Flibble

--
"Snakes didn't evolve, instead talking snakes with legs changed into snakes." - Rick C. Hodgin

“You won’t burn in hell. But be nice anyway.” – Ricky Gervais

“I see Atheists are fighting and killing each other again, over who doesn’t believe in any God the most. Oh, no..wait.. that never happens.” – Ricky Gervais

"Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Byrne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say."

Vir Campestris

unread,
Dec 16, 2019, 4:45:53 PM12/16/19
to
On 16/12/2019 16:54, Robert Wessel wrote:
> On the third hand, 128-bit integers allow a number of things that
> 64-bit integers don't - a useful type in which to perform currency
> calculations, usefully wide (and precise) times, and the like.

My mind boggles.

32 bit numbers are quite big enough for my financial needs, and the
world GDP is only about 40 bits (in USD). What currency work have you
done where 64 bit currency wasn't enough?

However - the argument against implementing std::int128_t here
seems to be that a lot of people have assumed that the biggest int is
only 64 bits. When will people learn...

I started in PCs when a pointer and an int were both 16 bit, and people
misused int to hold pointers and sizes. A few years later they were both
32 bit... now they're generally a different size, and _still_ I see
int x = sizeof(...)

Andy

David Brown

unread,
Dec 16, 2019, 6:04:32 PM12/16/19
to
On 16/12/2019 22:03, Bart wrote:
> On 16/12/2019 15:21, David Brown wrote:
>> On 16/12/2019 13:10, Bart wrote:
>
>>> You mean that even when int128 types are built-in, there is no support
>>> for 128-bit constants? And presumably not for output either, according
>>> to my tests with gcc and g++:
>>>
>>>      __int128_t a;
>>>      a=170141183460469231731687303715884105727;
>>>      std::cout << a;
>>>
>>> The constant overflows, and if I skip that part, I get pages of errors
>>> when trying to print it. On gcc, I have no idea what printf format to
>>> use.
>>>
>>
>> There is no printf support for 128-bit integers, except for systems
>> where "long long" is 128-bit.  (Or, hypothetically, 128-bit "long",
>> "int", "short" or "char".)  As you know full well, gcc does not have
>> printf, since gcc is a compiler and printf is from the standard library.
>
> So? printf is supposed to reflect the available primitive types. If a
> compiler is extended to 128 bits, then printf should support that too.
> How it does that is not the concern of the programmer.

The C standards form the contract. A C compiler supports a given C
standard, and a C library supports the given C standard. (A compiler
and library can also be designed together and made to support each
other.) If you have a C compiler that conforms to a standard and a C
library that conforms to a standard, then together they form a C
implementation for that standard. The same applies to C++.

It is entirely possible for one part of this to support features that
are not supported by the other. If these are extensions, not required
by the standards, then that's fine.

And printf - according to its definition in the standard - does not have
any support for integer sizes bigger than "intmax_t" as defined by the
implementation (generally, by the ABI for the platform). It doesn't
matter if the compiler has support for other types - the standard printf
does not support them.

>
> (However printf currently doesn't even directly support int64_t, so
> don't hold your breath for int128.)

Any conforming C99 printf will support "long long int", which is at
least 64 bits (and generally exactly 64 bits). You can't blame the
compiler just because /you/ happen to use it with an outdated and
non-conforming library. Anyone who uses gcc as part of a conforming
implementation has printf that supports int64_t.

(This discussion was somewhat interesting the first couple of times it
came up - it must have been explained to you a dozen times or more.)

>
>>> This is frankly astonishing with such a big and important language, and
>>> with comprehensive compilers such as gcc and g++. Even my own 'toy'
>>> language can do this:
>>
>> It is only astonishing if you don't understand what it means for a
>> language to have a standard.  It's easy to do this with a toy language
>> (and that is an advantage of toy languages).  It would not be
>> particularly difficult for the gcc or clang developers to support it
>> either.  But the ecosystem around C is huge - you can't change the
>> standard in a way that breaks compatibility, which is what would have to
>> happen to make these types full integer types.
>
> What exactly would be the problem in supporting an integer constant
> bigger than 2**64-1? This part is inside the compiler not the library,
> so if a type bigger than 2**64-1 exists, then the constant can be of
> that type.

I don't think the size of the constants itself is an issue. The problem
is that you can't have a fully supported integer type bigger than
maxint_t in C, and maxint_t is constrained by the ABI for the platform.

>
>> I am not convinced that there is any great reason for having full
>> support for 128-bit types here - how often do you really need them?
>
> A few weeks ago there was a thread on clc that made use of 128-bit
> numbers to create perfect hashes of words in a dictionary. And the
> longest word in the dictionary, with the correct ordering of prime
> numbers, would just fit into 128 bits.

You don't mean "perfect hash" here - you mean "hash". "Perfect hash
function" has a specific meaning.

128-bit hashes are, in general, pointless. They are far bigger than
necessary to avoid a realistic possibility of an accidental clash, and
far too small to avoid intentional clashes.

Of course it is possible to find uses of 128-bit integers - especially
for obscure and artificial problems like that thread. I did not suggest
they were never useful - I am suggesting they are /rarely/ useful. And
I am suggesting that it is even rarer that there would be need for
/full/ support for such types. I didn't follow the thread in question,
but I doubt if it needed constants of 128 bit length, or printf support
for them, or support for abs, div, or other integer-related functions in
the C standard library. (Since it was a C discussion, I assume it
didn't need any integer functions from the C++ library.)
Of course arbitrary precision arithmetic involves more work. But that's
what you need for things like public key cryptography. I am not
suggesting that you should use arbitrary precision arithmetic to handle
128 bit numbers - I am saying that it is rare that you need something
bigger than 64-bit but will be satisfied with 128-bit.

Bart

unread,
Dec 16, 2019, 8:06:47 PM12/16/19
to
On 16/12/2019 15:21, David Brown wrote:
> On 16/12/2019 13:10, Bart wrote:

>> But this would be just a natural extension? I assume that whatever that
>> means, there is already X(uint64_t). There is not a totally new feature
>> or concept to add.
>>
>
> <https://en.cppreference.com/w/cpp/language/user_literal>

I didn't follow most of that (C++ has a knack of making what should be
simple things, complicated). But I did recognise these examples:

"1-4) user-defined integer literals, such as 12_km"
"5-6) user-defined floating-point literals, such as 0.5_Pa"

"double x = 90.0_deg;"

These are numeric suffixes like the ones I first introduced around 30
years ago. However I managed to do it without needing the "_"; just a space:

5 km
6 m
7 mm

(These ones represented 5000000.0, 6000.0 and 7.0. Actually, at the time
I didn't even need the space, and could write 5mm, but now I require it.)

This was achieved without the km, m, mm etc infringing on any other
name-spaces (they can still be used as identifiers).

The degree one could also be written as:

90°

In my scheme, these were just suffixes for numeric constants that
applied a scale-factor; they didn't result in any new user-type, or
prevent units being mixed (a full treatment requires a language like
Frink (frinklang.org)).

Creating true new literal types for user-defined types, even if limited
to numeric types, is difficult. But probably not as difficult as that
C++ link is making it.
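
For the plain scale-factor case, a minimal C++11 sketch is indeed short
(the suffix names _km, _m, _mm are illustrative):

    // Scale-factor suffixes, with mm as the base unit (matching the
    // values above); these produce plain doubles, not a new unit type.
    constexpr double operator"" _km(long double v) { return double(v) * 1000000.0; }
    constexpr double operator"" _m (long double v) { return double(v) * 1000.0; }
    constexpr double operator"" _mm(long double v) { return double(v); }

    double x = 5.0_km;   // 5000000.0
    double y = 7.0_mm;   // 7.0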

Daniel

unread,
Dec 16, 2019, 9:54:01 PM12/16/19
to
Thanks, appreciate the references.

Daniel

James Kuyper

unread,
Dec 16, 2019, 10:40:31 PM12/16/19
to
On 12/16/19 10:32 AM, David Brown wrote:
...> I suspect that at the time, no one really thought about the possibility
> of 128-bit integers. After all, 64-bit integers were new, and 64 bits
> should be enough for anybody!
I can assure you that the possibility of people wanting, and
implementations providing, 128 bit integers was fairly heavily discussed
during the lead-up to C99's approval. If there had been an implicit
belief that there would never be any types larger than int64_t, there'd
have been no point in specifying intmax_t - int64_t would have served
the purpose that intmax_t was designed to serve.


Chris M. Thomasson

unread,
Dec 16, 2019, 10:42:43 PM12/16/19
to
I would not object to C/C++ supporting a standard bignum lib.

Ben Bacarisse

unread,
Dec 16, 2019, 11:08:04 PM12/16/19
to
>>> know, compilers that actually implement extended integer types are rare
>>> or non-existent, so this could be done without breaking existing code or
>>> tools.
>>
>> This puzzles me. gcc (I am talking mainly about C here as I don't know
>> the C++ standard well enough) implements extended integer types (like
>> __int128) but not in conforming mode because that would entail making
>> intmax_t 128 bits wide. Did you intend a rather narrower remark that
>> compilers that implement extended integer types in conforming mode are
>> rare or non-existent?
>>
>
> gcc supports 128-bit types even with "-std=c11 -pedantic" (or other C
> or C++ standards), but requires "__extension__" to avoid a warning:
>
> __extension__ typedef __int128 int128;

Right, but that amounts to something very similar. It's not standard C.

> int foo(void) {
> return sizeof(int128);
> }
>
>
> __int128 is a gcc extension, and is treated as an "integer scalar
> type", but it is /not/ an "extended integer type" as defined by the C
> standards. That applies whether you are in conforming modes or not.

If you need __extension__ or a non-conforming mode I don't see that it
matters what gcc calls it.

Is it your contention that gcc /could/ provide __int128 in conforming
mode (with no extra syntax) and, just by declaring it so, not have it be
considered an extended integer type? You may be right about that, but
then what is the __extension__ there for? gcc could do without it,
declare __int128 not an extended type, and all would be fine.

> <https://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html>
> """
> GCC does not support any extended integer types.
> """

I can see how they can say that! There are none in conforming mode
without a special extension, and in other modes the language is not C so
they can say anything they like about it! In that sense I was wrong to
say that gcc "implements extended integer types but not in conforming
mode". In non-C mode it's up to the gcc authors to say what
these types are called.

>> intmax_t is useful, in part because it solves the printf problem. You
>> can portably print any integer i using
>>
>> printf("%ld\n", (intmax_t)i);
>>
>> whether i is an extended type like __int128 or even some weird 99-bit
>> integer type.
>>
>
> Yes, Keith mentioned that use, which I had not thought about (it's not
> something that turns up in my kind of coding).

Yes, noise on my part because I'd not refreshed the thread before
replying.

--
Ben.

Robert Wessel

unread,
Dec 17, 2019, 12:20:33 AM12/17/19
to
On Mon, 16 Dec 2019 21:45:47 +0000, Vir Campestris
<vir.cam...@invalid.invalid> wrote:

>On 16/12/2019 16:54, Robert Wessel wrote:
>> On the third hand, 128-bit integers allow a number of things that
>> 64-bit integers don't - a useful type in which to perform currency
>> calculations, usefully wide (and precise) times, and the like.
>
>My mind boggles.
>
>32 bit numbers are quite big enough for my financial needs, and the
>world GDP is only about 40 bits (in USD). What currency work have you
>done where 64 bit currency wasn't enough?


Currency calculations are often performed in pennies or mills for
dollars, sometimes even with finer divisions. Exxon's revenue, for
example, requires 45 bits to store in pennies. Even global GDP in
dollars requires 47 bits (about 85 *trillion* dollars). Toss in some
of the badly inflated currencies, and things get worse (before it was
reformed, the Turkish Lira was my go-to example - a less dramatic
current example is the Iranian Rial, worth less than a thousandth of a
penny each).

That being said, it's not that 64-bit types are inadequate for stored
formats - for the most part they are adequate. Cobol, which if nothing
else cares about handling currency, for decades only required 18
decimal* digits for stored formats, although the more recent standards
have doubled that. Some folks have recommended IEEE doubles as a
storage format, as that nets you about 53 bits - that's tight for
not-that-crazy scenarios (global GDP in pennies) - and that's not an
attempt to actually store currency in floats (which tends to be
*really* problematic) - just to use a double as a fairly wide integer.

In any event, 64 bits are usually enough to store currency related
values. The problem is intermediate results. Multiply Exxon's
revenue by a couple of percentages, and you'll overflow a 64-bit
number (even with fairly "short" percentages - tell that to some
states that have five digit percentages in their tax rates). So
currency calculations really require something bigger than a 64-bit
intermediate, in order to compute the exact (correctly rounded)
result, since people get funny when you lop off bits of their bank
balance. Again, Cobol usually requires longer intermediates in order
to produce exact results (and they've understood that since about
1959).

32-bit numbers aren't actually good enough to manage *your* bank
account - interest rates commonly have four digits; factor in pennies,
and the bank's calculations for your account go tilt at less than
$4300. Again, 32 bits might be adequate for *storing* your bank balance
(how many of us actually have more than $42 million in an account?),
but the bank needs to perform operations on those balances.
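
To make the intermediate-width point concrete, here is a minimal sketch
(it assumes the gcc/clang __int128 extension; the parts-per-million rate
representation is illustrative):

    #include <cstdint>

    // Exact, correctly rounded rate applied to a penny amount.
    // rate_ppm: e.g. 4.1234% == 41234 parts per million.
    // (Rounding shown for non-negative amounts only.)
    std::int64_t apply_rate(std::int64_t pennies, std::int64_t rate_ppm)
    {
        // The 64-bit product overflows for large revenues, so widen
        // the intermediate to 128 bits before dividing back down.
        __int128 wide = static_cast<__int128>(pennies) * rate_ppm;
        return static_cast<std::int64_t>((wide + 500000) / 1000000);
    }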




*Regardless of the actual stored representation, binary, decimal, or
something else. In binary 18 decimal digits happens to fit neatly in
a 64-bit int.

Robert Wessel

unread,
Dec 17, 2019, 12:23:15 AM12/17/19
to
On Mon, 16 Dec 2019 10:47:41 -0800, Keith Thompson
<Keith.S.T...@gmail.com> wrote:

>Robert Wessel <robert...@yahoo.com> writes:
>[...]
>> Personally I think both 128-bit types and arbitrary precision
>> arithmetic should be part of the language (whether in the language
>> proper or the library).
>
>128-bit integer types are already permitted by the C++ standard.
>But adding a standard (extended or not) 128-bit integer type would
>conflict with existing ABIs and libraries due to the effect on
>[u]intmax_t.
>
>I don't think making 128-bit integers mandatory would be practical,
>given the difficulty (and questionable usefulness) of providing
>them on small embedded systems. Note that gcc supports __int128,
>but not on all targets.


I don't really buy it. On systems that small, the operations would
almost certainly be RTL functions. If you don't define any large
types, those routines won't get linked. The way floats are handled on
many small systems is a parallel. And the RTL functions for the basic
operations aren't actually going to be that large.

Chris M. Thomasson

unread,
Dec 17, 2019, 1:06:47 AM12/17/19
to
64 bits are not even close to what is needed to take a very deep zoom of
a fractal.

Keith Thompson

unread,
Dec 17, 2019, 1:23:14 AM12/17/19
to
David Brown <david...@hesbynett.no> writes:
[...]
> I'd like to see some standard way (with standard names) to have
> 128-bit integer types without increasing intmax_t. I don't think
> there would be a need to make them mandatory - compilers would support
> them where they are practical and useful.

I'd greatly prefer to see the *existing* standard way to have 128-bit
integer types *along with* a redefinition of intmax_t as appropriate.

Because **that's what intmax_t means**.

I know that ABIs and existing libraries make that difficult, but
understanding the reasons does not lessen my annoyance.

On the other hand, 64 bits really are enough to count just about
anything. For example, we're not going to see files bigger than
2**64 bytes any time soon. Some standardized way of handling huge
integer-like entities without calling them "integer types" would
probably give us most of the benefits of 128-bit integer types,
and C++ has enough features that it's feasible to do that in the
library rather than in the language.

Keith Thompson

unread,
Dec 17, 2019, 1:30:07 AM12/17/19
to
Ben Bacarisse <ben.u...@bsb.me.uk> writes:
[...]
> intmax_t is useful, in part because it solves the printf problem. You
> can portably print any integer i using
>
> printf("%ld\n", (intmax_t)i);

You mean "%jd\n".

> whether i is an extended type like __int128 or even some weird 99-bit
> integer type.

To be completely sure, you need to use either intmax_t or uintmax_t
depending on whether the argument is signed or unsigned. If you
have a name for the type, you can do something like

big_type i = some_value;
if ((big_type)-1 > (big_type)0) {
    printf("%ju\n", (uintmax_t)i);
}
else {
    printf("%jd\n", (intmax_t)i);
}

But quite often you'll know from context whether the type is signed or
unsigned, even if you don't know which type it is.

David Brown

unread,
Dec 17, 2019, 3:31:29 AM12/17/19
to
On 17/12/2019 07:23, Keith Thompson wrote:
> David Brown <david...@hesbynett.no> writes:
> [...]
>> I'd like to see some standard way (with standard names) to have
>> 128-bit integer types without increasing intmax_t. I don't think
>> there would be a need to make them mandatory - compilers would support
>> them where they are practical and useful.
>
> I'd greatly prefer to see the *existing* standard way to have 128-bit
> integer types *along with* a redefinition of intmax_t as appropriate.
>

I'd be fine with that too.

> Because **that's what intmax_t means**.
>
> I know that ABIs and existing libraries make that difficult, but
> understanding the reasons does not lessen my annoyance.
>

Fair enough.


It is certainly easy to see that mistakes were made in the way intmax_t
has been handled (without necessarily doing the blamestorming) and that
this has restricted the possibilities for bigger standard integer
types in C and C++.

The question is, where is it possible to go from here? If it is too
late to fix maxint_t usage, then we must accept that maxint_t is
effectively long long int, and 64-bit on modern "big" processors (as
distinct from small embedded devices). Is it possible to get something
that works at least as well as gcc's __int128 today, but can be
supported with the same name in other compilers, along with appropriate
limit macros (and C++ equivalents) ?


Also, while it could be nice to have a 128-bit type and make maxint_t
use this, I think that would limit the possibilities of 256-bit or
512-bit types in the future. (I am not sure what the usage of these
would be - perhaps for sha512 checksums - but let's avoid saying that
128 bits should be big enough for anyone.) If a compiler has int128_t,
int256_t and int512_t, do we really want intmax_t to be 512 bits? Do we
want to have to use 512 bit arithmetic with imaxdiv when we actually
want to divide 128 bit numbers?

I think a better solution would be to phase out intmax_t entirely, and
move to an explicit sized system. Have printf support explicit sizes
for the integers, rather than an endless series of arbitrary letters or
a single inefficient huge size. Have the functions named "abs32",
"abs64", etc., rather than "imaxabs", and with a type-generic version.

> On the other hand, 64 bits really are enough to count just about
> anything. For example, we're not going to see files bigger than
> 2**64 bytes any time soon. Some standardized way of handling huge
> integer-like entities without calling them "integer types" would
> probably give us most of the benefits of 128-bit integer types,
> and C++ has enough features that it's feasible to do that in the
> library rather than in the language.
>

That's it, yes. For C++, a standard library solution should be
practical - with the size being a template parameter. For C, it would
need to be part of the language. But if we had a new term rather than
"integer type", the intmax_t problem could be circumvented.
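
A hypothetical shape for such a C++ library type (wide_int is an
illustrative name, not a standard one):

    #include <cstdint>

    template <int Bits>
    class wide_int {
        static_assert(Bits > 0 && Bits % 64 == 0, "whole 64-bit limbs");
        std::uint64_t limbs[Bits / 64];  // value representation
        // +, -, *, /, comparisons etc. would be defined here; the type
        // never becomes an "integer type" in the standard's sense, so
        // intmax_t is left untouched.
    };

    wide_int<128> a;
    wide_int<256> b;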

Jorgen Grahn

unread,
Dec 17, 2019, 4:14:15 AM12/17/19
to
On Mon, 2019-12-16, bol...@nowhere.co.uk wrote:
> On Sun, 15 Dec 2019 15:41:18 -0800 (PST)
> =?UTF-8?B?w5bDtiBUaWli?= <oot...@hot.ee> wrote:
>>On Monday, 16 December 2019 01:28:14 UTC+2, Melzzzzz wrote:
...
>>> Heh, I never used boost as well :P
>>
>>Last 15 years it has been almost inevitable to use something from boost
>>in bigger project. Other way is to write precisely same thing yourself.
>>Did you have myFilesystem, myOptional and myVariant before C++17?
>
> Optional and variant are both solutions looking for a problem. Just
> more noise added to the language to keep a tiny number of purists
> happy.

As far as I can tell, they're useful for solving some kinds of
problems in a specific way -- but not all programs need them.

To have a concrete example, optional<T> is good for representing "zero
or one" and is an alternative to conventions like

- "zero means absent"
- "-1 means absent"
- "default-constructed means absent"
- "T::invalid() means absent"
- keeping T by pointer, just to get "nullptr means absent"

This way you don't have to complicate T itself; it doesn't have to
be aware of its own possible absence.
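
A minimal sketch of that use (find_nickname is an illustrative name):

    #include <optional>
    #include <string>

    std::optional<std::string> find_nickname(int user_id)
    {
        if (user_id == 42)
            return "brian";    // present
        return std::nullopt;   // absent - no sentinel value needed
    }

    // if (auto n = find_nickname(id)) { use(*n); }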

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

bol...@nowhere.co.uk

unread,
Dec 17, 2019, 4:37:49 AM12/17/19
to
On Mon, 16 Dec 2019 10:21:20 -0800 (PST)
=?UTF-8?B?w5bDtiBUaWli?= <oot...@hot.ee> wrote:
>On Monday, 16 December 2019 11:57:36 UTC+2, bol...@nowhere.co.uk wrote:
>> On Sun, 15 Dec 2019 15:41:18 -0800 (PST)
>> =?UTF-8?B?w5bDtiBUaWli?= <oot...@hot.ee> wrote:
>> >On Monday, 16 December 2019 01:28:14 UTC+2, Melzzzzz wrote:
>> >> On 2019-12-15, Chris M. Thomasson <chris.m.t...@gmail.com> wrote:
>> >> > On 12/15/2019 12:49 PM, Melzzzzz wrote:
>> >> >> On 2019-12-15, bol...@nowhere.co.uk <bol...@nowhere.co.uk> wrote:
>> >> >>> On Sun, 15 Dec 2019 11:18:43 GMT
>> >> >>> Melzzzzz <Melz...@zzzzz.com> wrote:
>> >> >>>> On 2019-12-15, bol...@nowhere.co.uk <bol...@nowhere.co.uk> wrote:
>> >> > [...]
>> >> >>>> I use C only when necessary. When I have to. syntactic sugar of C++
>is
>> >> >>>> what makes me use it. And I hate macros.
>> >> >>>
>> >> >>> If you do cross platform systems or network programming you have no
>> >choice but
>> >> >>> to use macros even in C++17.
>> >> >>
>> >> >> I can use macros but I don't write them...
>> >> >
>> >> > Have you ever experienced the Boost preprocessor macros? Iirc, it was
>> >> > the Chaos lib way back. Some hardcore shi%.
>> >> >
>> >> > ;^)
>> >>
>> >> Heh, I never used boost as well :P
>> >
>> >Last 15 years it has been almost inevitable to use something from boost
>> >in bigger project. Other way is to write precisely same thing yourself.
>> >Did you have myFilesystem, myOptional and myVariant before C++17?
>>
>> Optional and variant are both solutions looking for a problem. Just more
>noise
>> added to the language to keep a tiny number of purists happy.
>
>Nonsense. The problems are usual. Performance and reliability. Union is

Rubbish. Optional is just a lazy inefficient way of checking valid return
object values. Any decently designed class that's going to have many
instances thrown around should have a valid flag or some other marker
showing whether it can be used, so optional is not required. And if
you're dealing with pointers just return NULL.

>rather good performance optimization in decent hands, variant is an easy
>to use reliable union, and optional is a variant between value and

Non POD types should never be unioned unless memory space is so critical that
there is no other option. In which case why are you using C++ in the first
place? And a standard union is fine for PODs and in all sane use cases for
a union you HAVE to know what types you're dealing with as you're probably
dealing with very low level operations so leaving it to the compiler makes
no sense.

>nothing. So "purist" who can wield such trivial tools typically
>wins such a sorry "latitudinarians" who can't.

Uh huh.

bol...@nowhere.co.uk

unread,
Dec 17, 2019, 4:38:59 AM12/17/19
to
Oh hello. Is playtime over already?


Paavo Helde

unread,
Dec 17, 2019, 8:13:27 AM12/17/19
to
On 17.12.2019 11:37, bol...@nowhere.co.uk wrote:
>
> Non POD types should never be unioned unless memory space is so critical that
> there is no other option.

What? Trying to start another Luddite uprising or something?

> In which case why are you using C++ in the first
> place?

To have non-POD types, obviously! And for speed, of course.

> And a standard union is fine for PODs and in all sane use cases for
> a union you HAVE to know what types you're dealing with as you're probably
> dealing with very low level operations so leaving it to the compiler makes
> no sense.

Of course I have to know which type is active in a union. This is not
specific to POD or non-POD. And one can still use a simple integer
discriminator tag and switch() on it at will, no need to use heavy
templating a la std::variant if it does not suit the task.

David Brown

unread,
Dec 17, 2019, 8:51:18 AM12/17/19
to
On 17/12/2019 02:06, Bart wrote:
> On 16/12/2019 15:21, David Brown wrote:
>> On 16/12/2019 13:10, Bart wrote:
>
>>> But this would be just a natural extension? I assume that whatever that
>>> means, there is already X(uint64_t). There is not a totally new feature
>>> or concept to add.
>>>
>>
>> <https://en.cppreference.com/w/cpp/language/user_literal>
>
> I didn't follow most of that (C++ has a knack of making what should be
> simple things, complicated). But I did recognise these examples:
>
> "1-4) user-defined integer literals, such as 12_km"
> "5-6) user-defined floating-point literals, such as 0.5_Pa"
>
> "double x = 90.0_deg;"

The real benefit of these is when you can write:

distance x = 2210_m;
distance y = 3_km;
time t = 300_sec;

and then you get a compile-time error when you try:

speed v = (x + t) + y;

but are allowed to write:

speed v = (x + y) / t;
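
Spelled out as a minimal sketch (the distance/duration/speed types and
their operators are illustrative; integer-literal overloads would be
needed to write 3_km rather than 3.0_km):

    struct distance { double m;   };
    struct duration { double s;   };
    struct speed    { double mps; };

    constexpr distance operator"" _km (long double v) { return { double(v) * 1000.0 }; }
    constexpr distance operator"" _m  (long double v) { return { double(v) }; }
    constexpr duration operator"" _sec(long double v) { return { double(v) }; }

    constexpr distance operator+(distance a, distance b) { return { a.m + b.m }; }
    constexpr speed    operator/(distance d, duration t) { return { d.m / t.s }; }

    speed v1 = (2210.0_m + 3.0_km) / 300.0_sec;    // ok
    // speed v2 = (2210.0_m + 300.0_sec) / 3.0_km; // error: no operator+(distance, duration)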


>
> These are numeric suffixes like the ones I first introduced around 30
> years ago. However I managed to do it without needing the "_"; just a
> space:
>
>    5 km
>    6 m
>    7 mm
>
> (These ones represented 5000000.0, 6000.0 and 7.0. Actually, at the time
> I didn't even need the space, and could write 5mm, but know I require it.)
>
> This was achieved without the km, m, mm etc infringing on any other
> name-spaces (they can still be used as identifiers).
>
> The degree one could also be written as:
>
>  90°

Unicode symbols sound nice, until you have to distinguish between 90°
and 90˚ or 90⁰, or between 90℃ and 90°C. And it looks great being able
to write π = 3.14 and y = x², until the next programmer is using Windows
instead of *nix and can't type those symbols without a character map applet.

>
> In my scheme, these were just suffixes for numeric constants that
> applied a scale-factor; they didn't result in any new user-type, or
> prevent units being mixed (a full treatment requires a language like
> Frink (frinklang.org)).
>
> Creating true new literal types for user-defined types, even if limited
> to numeric types, is difficult. But probably not as difficult as that
> C++ link is making it.

Nah, it's easy in C++.

And it's the user-defined types that are the real feature here.
Otherwise, it's almost nothing more than you could have in C with:

#define km *1000.0
#define m *1.0
#define mm *0.001

double x = 5 km;
double y = 5 mm;

Your language apparently lets you make these context sensitive in some
way, which is nice.

David Brown

unread,
Dec 17, 2019, 9:00:46 AM12/17/19
to
It doesn't really matter what /we/ call it - it matters what /gcc/ calls
it. And they say it is not an extended integer type, regardless of the
compiler flags, precisely because they can't have an extended integer
type that is bigger than maxint_t.

> Is it your contention that gcc /could/ provide __int128 in conforming
> mode (with no extra syntax) and, just by declaring it so, not have it be
> considered and extended integer type?

Yes.

__int128 is in a reserved namespace - they can put whatever types they
want there, even in conforming modes. They can't make an "extended
integer type" that is bigger than maxint_t while remaining conforming,
and it would be ridiculous to say it is an extended integer type with
"-std=gnu11" and a different name (but identical functionality) with
"-std=c11".

It is an implementation-specific type that handles 128-bit integers, but
it is not an "integer type" as defined by the C standards.

> You may be right about that, but
> then what is the __extension__ there for? gcc could do without it,
> declare __int128 not an extended type, and all would be fine.

It could, yes. You only need the "__extension__" marker if you are
specifying "-pedantic", and even then it is just to avoid getting a
warning. __int128 is supported in the most standards-compliant and
pedantic warning modes gcc has. But the "-pedantic" warnings don't just
ensure that gcc emits all required diagnostics, they also give warnings
on gcc extensions that would limit portability to other compilers.
"__extension__" tells the compiler that you know this thing is a gcc
extension, and you know you asked for warnings about gcc-specific
features, but you want to hide the warning in this particular case.

>
>> <https://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html>
>> """
>> GCC does not support any extended integer types.
>> """
>
> I can see how they can say that! There are none in conforming mode
> without a special extension, and in other modes the language is not C so
> that can say anything they like about it! In that sense I was wrong to
> say that gcc "implements extended integer types but not in conforming
> mode". In non-C mode everything it's up to the gcc authors to say what
> these types are called.
>
>>> intmax_t is useful, in part because it solves the printf problem. You
>>> can portably print any integer i using
>>>
>>> printf("%ld\n", (intmax_t)i);
>>>
>>> whether i is an extended type like __int128 or even some weird 99-bit
>>> integer type.
>>>
>>
>> Yes, Keith mentioned that use, which I had not thought about (it's not
>> something that turns up in my kind of coding).
>
> Yes, noise on my part because I'd not refreshed the thread before
> replying.
>

No problem - it is useful to know that such uses are not just a
peculiarity of a single person.

bol...@nowhere.co.uk

unread,
Dec 17, 2019, 9:12:41 AM12/17/19
to
On Tue, 17 Dec 2019 15:13:14 +0200
Paavo Helde <myfir...@osa.pri.ee> wrote:
>On 17.12.2019 11:37, bol...@nowhere.co.uk wrote:
>>
>> Non POD types should never be unioned unless memory space is so critical that
>
>> there is no other option.
>
>What? Trying to start another Luddite upraise or something?
>
>> In which case why are you using C++ in the first
>> place?
>
>To have non-POD types, obviously! And for speed, of course.

You've never done any embedded development, have you.

No, thats not a question.


Ben Bacarisse

unread,
Dec 17, 2019, 9:45:24 AM12/17/19
to
David Brown <david...@hesbynett.no> writes:

> On 17/12/2019 05:07, Ben Bacarisse wrote:
>> David Brown <david...@hesbynett.no> writes:
>>
>>> On 16/12/2019 21:45, Ben Bacarisse wrote:
>>>> David Brown <david...@hesbynett.no> writes:
<cut>
(I think you mean "signed integer type" -- the C standard does not
define the term "integer type" as far as I can see.)

The only way that it can not be a signed integer type is by a decree
from gcc. I am not saying this is wrong -- maybe that was the intent of
the committee -- but then I'm not sure why gcc uses the __extension__
keyword. Maybe, as you say below, that's all about managing diagnostics
for portability.

>> You may be right about that, but
>> then what is the __extension__ there for? gcc could do without it,
>> declare __int128 not an extended type, and all would be fine.
>
> It could, yes.

And it turns out clang decided to do just that. __int128 is there in
conforming modes, but sizeof (intmax_t) is 8. Presumably they too have
decreed that __int128 is not a signed integer type, despite meeting all
the technical requirements to be one!

<cut>
--
Ben.

David Brown

unread,
Dec 17, 2019, 9:54:55 AM12/17/19
to
It does (C11 6.2.5p17) - "integer types" cover "char, the signed and
unsigned integer types, and the enumerated types". The "signed integer
types" covers the "standard signed integer types" and "extended signed
integer types".

>
> The only way that it can not be a signed integer type is by a decree
> from gcc. I am not saying this is wrong -- maybe that was the intent of
> the committee -- but then I'm not sure why gcc uses the __extension__
> keyword. Maybe, as you say below, that's all about managing diagnostics
> for portability.
>
>>> You may be right about that, but
>>> then what is the __extension__ there for? gcc could do without it,
>>> declare __int128 not an extended type, and all would be fine.
>>
>> It could, yes.
>
> And it turns out clang decided to do just that. __int128 is there in
> conforming modes, but sizeof (intmax_t) is 8. Presumably they too have
> decreed that __int128 is not a signed integer type, despite meeting all
> the technical requirements to be one!
>

__int128 meets almost all the technical requirements to be a signed
integer type - the only missing one is the classification, as that would
mean maxint_t would have to be (at least) 128 bits.

It is a strange situation indeed.

Ben Bacarisse

unread,
Dec 17, 2019, 10:52:46 AM12/17/19
to
David Brown <david...@hesbynett.no> writes:

> On 17/12/2019 15:44, Ben Bacarisse wrote:
>> David Brown <david...@hesbynett.no> writes:
<cut>
>>> It is an implementation-specific type that handles 128-bit integers, but
>>> it is not an "integer type" as defined by the C standards.
>>
>> (I think you mean "signed integer type" -- the C standard does not
>> define the term "integer type" as far as I can see.)
>
> It does (C11 6.2.5p17) - "integer types" cover "char, the signed and
> unsigned integer types, and the enumerated types". The "signed integer
> types" covers the "standard signed integer types" and "extended signed
> integer types".

So it does. Thanks.

--
Ben.

Öö Tiib

unread,
Dec 17, 2019, 11:32:34 AM12/17/19
to
"Invalid"/"Incomplete"/"Half-received"/"Erred" states of objects are the common
source of errors. You are rare fan of iostreams failbit-badbit garbage it seems?

> And if you're dealing with pointers just
> return NULL.

Per-object dynamic memory management is a common source of inefficiency.

> >rather good performance optimization in decent hands, variant is an easy
> >to use reliable union, and optional is a variant between value and
>
> Non POD types should never be unioned unless memory space is so critical that
> there is no other option. In which case why are you using C++ in the first
> place?

Because C++ is often more efficient than alternatives and can union non-pod
types in variant. If C++ can't do it then there are no chance to do better job
with your Perl ... or what you program there.

> And a standard union is fine for PODs and in all sane use cases for
> a union you HAVE to know what types you're dealing with as you're probably
> dealing with very low level operations so leaving it to the compiler makes
> no sense.

PODs are useful only for dealing with those low level operations. Most
embedded controllers have tens of kilobytes of RAM (since it just costs
nothing) and so are often expected to be quite smart these days.

Paavo Helde

unread,
Dec 17, 2019, 12:35:14 PM12/17/19
to
Ah, trying to change the topic and ad hominem in a single line. Well done!

Mr Flibble

unread,
Dec 17, 2019, 12:50:33 PM12/17/19
to
Why are you trying to teach a brick wall something? He is an obtuse fucktarded troll, nothing more.

/Flibble

--
"Snakes didn't evolve, instead talking snakes with legs changed into snakes." - Rick C. Hodgin

“You won’t burn in hell. But be nice anyway.” – Ricky Gervais

“I see Atheists are fighting and killing each other again, over who doesn’t believe in any God the most. Oh, no..wait.. that never happens.” – Ricky Gervais

"Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Byrne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say."

Keith Thompson

unread,
Dec 17, 2019, 2:51:20 PM12/17/19
to
David Brown <david...@hesbynett.no> writes:
[...]
> __int128 meets almost all the technical requirements to be a signed
> integer type - the only missing one is the classification, as that would
> mean maxint_t would have to be (at least) 128 bits.

It doesn't meet *all* the requirements. N1570 6.4.4.1p6 (Integer
constants):

If an integer constant cannot be represented by any type in its
list, it may have an extended integer type, if the extended integer
type can represent its value.

In gcc, there are no constants of type __int128.

(The word "may" is a bit ambiguous, and it could be argued that an
implementation isn't required to support constants of type int128_t
even if it's an extended integer type.)

David Brown

unread,
Dec 17, 2019, 4:10:52 PM12/17/19
to
On 17/12/2019 20:51, Keith Thompson wrote:
> David Brown <david...@hesbynett.no> writes:
> [...]
>> __int128 meets almost all the technical requirements to be a signed
>> integer type - the only missing one is the classification, as that would
>> mean maxint_t would have to be (at least) 128 bits.
>
> It doesn't meet *all* the requirements. N1570 6.4.4.1p6 (Integer
> constants):
>
> If an integer constant cannot be represented by any type in its
> list, it may have an extended integer type, if the extended integer
> type can represent its value.
>
> In gcc, there are no constants of type __int128.
>
> (The word "may" is a bit ambiguous, and it could be argued that an
> implementation isn't required to support constants of type int128_t
> even if it's an extended integer type.)
>

That is how I have always read that part of the standard - I hadn't
considered your reading. But I agree that your interpretation is just
as valid.

Certainly (based on other discussions involving gcc developers) the lack
of 128-bit integer constants is viewed as one reason why __int128 is not
a "full" integer type to the same degree as the other integer types.
(With the maxint_t issue being another key reason, and lack of support
in library functions such as printf, *abs, strto*, etc., being others.)

Vir Campestris

unread,
Dec 17, 2019, 4:56:33 PM12/17/19
to
On 17/12/2019 05:20, Robert Wessel wrote:
<snip>
> but the bank needs to perform operations on those balances.

Ah, that makes sense. Thanks

Andy

James Kuyper

unread,
Dec 17, 2019, 11:55:54 PM12/17/19
to
On 12/17/19 9:44 AM, Ben Bacarisse wrote:
> David Brown <david...@hesbynett.no> writes:
>
>> On 17/12/2019 05:07, Ben Bacarisse wrote:
...
>> __int128 is in a reserved namespace - they can put whatever types they
>> want there, even in conforming modes. They can't make an "extended
>> integer type" that is bigger than maxint_t while remaining conforming,
>> and it would be ridiculous to say it is an extended integer type with
>> "-std=gnu11" and a different name (but identical functionality) with
>> "-std=c11".
>>
>> It is an implementation-specific type that handles 128-bit integers, but
>> it is not an "integer type" as defined by the C standards.

...
> The only way that it can not be a signed integer type is by a decree
> from gcc.

You say that as if you think there's something odd about it. The set of
extended integer types that are supported by a given implementation is
implementation-defined (3.9.1p2). __int128_t fails to be an extended
integer type, despite being a type supported by gcc, because gcc has
exercised its prerogative of choosing not to define it as an extended
integer type.

This has some consequences: __int128_t also cannot be an integer type,
since it isn't any of the other things that can be integer types
(standard integer types, bool, char, char16_t, char32_t, wchar_t). It
also can't be an arithmetic type, because it isn't a floating point
type, the only kind of non-integer type that can be an arithmetic type
(note: n4578.pdf uses the term "arithmetic type" without ever defining
it - I suspect this was an oversight, and the intended definition is
similar to the one provided in the C standard).

It also can't be a scalar type, because it certainly doesn't qualify as
an enumerated type, a pointer type, a pointer to member type, or
std::nullptr_t. It can't qualify as a POD type, since it certainly isn't
a POD class type or an array of POD types. Similarly, it isn't a
fundamental type, a trivially-copyable type, a trivial type, or a
standard-layout type, or a literal type.

Therefore, when invoked in fully conforming mode, gcc must diagnose any
use of __int128_t in any context where the C++ standard requires a type
that is in one of those categories. When discussing the corresponding
issue in comp.lang.c, I was told that gcc does in fact issue all of the
diagnostics required by the C standard when using __int128_t. I would
presume that it does the same when compiling in C++ mode.

After issuing that diagnostic message, the behavior is undefined, so gcc
is free to handle __int128_t exactly the way it would have handled it if
gcc had chosen to define it as being an extended integer type. It's also
free not to define intmax_t as large enough to hold all values that are
representable as __int128_t.

Keith Thompson

unread,
Dec 18, 2019, 1:08:14 AM12/18/19
to
David Brown <david...@hesbynett.no> writes:
[...]
> Certainly (based on other discussions involving gcc developers) the
> lack of 128-bit integer constants is viewed as one reason why __int128
> is not a "full" integer type to the same degree as the other integer
> types. (With the maxint_t issue being another key reason, and lack of
> support in library functions such as printf, *abs, strto*, etc., being
> others.)

It's intmax_t, not maxint_t. (I've seen this misspelling several times
in this thread.)

David Brown

unread,
Dec 18, 2019, 2:57:05 AM12/18/19
to
On 18/12/2019 07:08, Keith Thompson wrote:
> David Brown <david...@hesbynett.no> writes:
> [...]
>> Certainly (based on other discussions involving gcc developers) the
>> lack of 128-bit integer constants is viewed as one reason why __int128
>> is not a "full" integer type to the same degree as the other integer
>> types. (With the maxint_t issue being another key reason, and lack of
>> support in library functions such as printf, *abs, strto*, etc., being
>> others.)
>
> It's intmax_t, not maxint_t. (I've seen this misspelling several times
> in this thread.)
>

Yes, of course. It's obvious when you use it (it's an "intXXX_t" type),
but it's easy to mix up when typing a post. Sorry if I have caused
confusion by my misspellings.

bol...@nowhere.co.uk

unread,
Dec 18, 2019, 11:15:07 AM12/18/19
to
On Tue, 17 Dec 2019 19:35:00 +0200
Paavo Helde <myfir...@osa.pri.ee> wrote:
>On 17.12.2019 16:12, bol...@nowhere.co.uk wrote:
>> On Tue, 17 Dec 2019 15:13:14 +0200
>> Paavo Helde <myfir...@osa.pri.ee> wrote:
>>> On 17.12.2019 11:37, bol...@nowhere.co.uk wrote:
>>>>
>>>> Non POD types should never be unioned unless memory space is so critical
>that
>>>
>>>> there is no other option.
>>>
>>> What? Trying to start another Luddite upraise or something?
>>>
>>>> In which case why are you using C++ in the first
>>>> place?
>>>
>>> To have non-POD types, obviously! And for speed, of course.
>>
>> You've never done any embedded development have you.
>
>Ah, trying to change the topic and ad hominem in a single line. Well done!

I suggest you go look up what ad hominem means. It doesn't mean stating a fact
you don't happen to like.

bol...@nowhere.co.uk

unread,
Dec 18, 2019, 11:26:54 AM12/18/19
to
On Tue, 17 Dec 2019 08:32:21 -0800 (PST)
=?UTF-8?B?w5bDtiBUaWli?= <oot...@hot.ee> wrote:
>On Tuesday, 17 December 2019 11:37:49 UTC+2, bol...@nowhere.co.uk wrote:
>> Rubbish. Optional is just a lazy inefficient way of checking valid return
>> object values. Any decently designed class thats going to have many instances
>
>> thrown around should have a valid flag or some other marker showing whether
>it
>> can be used so optional not required.
>
>"Invalid"/"Incomplete"/"Half-received"/"Erred" states of objects are the common
>
>source of errors. You are rare fan of iostreams failbit-badbit garbage it
>seems?

I never use iostreams unless I have no choice. They're just bloat in
between my code and the OS I/O system.

My point is any class likely to be flung around a program as instances usually
has some kind of flag designating whether an instance is valid. Personally
however I prefer pointers.

>> And if you're dealing with pointers just
>> return NULL.
>
>Per-object dynamic memory management is a common source of inefficiency.

And you think creating an instance of optional for a function return just to
signify a valid return value and then having to call a method to get the actual
contained instance is efficient?

>> Non POD types should never be unioned unless memory space is so critical
>that
>> there is no other option. In which case why are you using C++ in the first
>> place?
>
>Because C++ is often more efficient than alternatives and can union non-pod
>types in variant. If C++ can't do it then there are no chance to do better job
>with your Perl ... or what you program there.

I don't use Perl.

>> And a standard union is fine for PODs and in all sane use cases for
>> a union you HAVE to know what types you're dealing with as you're probably
>> dealing with very low level operations so leaving it to the compiler makes
>> no sense.
>
>PODs are useful only for dealing with those low level operations. Most
>embedded controllers have tens of kilobytes of RAM (since it just costs
>nothing) and so are often expected to be quite smart these days.

Not all of them. Some PICs have < 1K of RAM.

Anyway, when someone shows me a sane use case of unionising non POD types
perhaps I'll see variant as something other than just more noise in the C++
spec.

Daniel

unread,
Dec 18, 2019, 12:36:31 PM12/18/19
to
On Monday, December 16, 2019 at 4:00:51 PM UTC-5, David Brown wrote:

> gcc supports 128-bit types even with "-std=c11 -pedantic"

But no support for <limits>, unless with -std=gnu++11
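
That is, in strict mode libstdc++ leaves std::numeric_limits
unspecialized for __int128. A quick check (behavior observed with g++;
it may vary by version):

    #include <limits>

    // true with -std=gnu++11, false with -std=c++11 -pedantic
    bool specialized = std::numeric_limits<__int128>::is_specialized;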

Daniel

unread,
Dec 18, 2019, 12:47:14 PM12/18/19
to
On Monday, December 16, 2019 at 1:21:40 PM UTC-5, Öö Tiib wrote:

> Union is
> rather good performance optimization in decent hands, variant is an easy
> to use reliable union, and optional is a variant between value and
> nothing. So "purist" who can wield such trivial tools typically
> wins such a sorry "latitudinarians" who can't.

optional has a pleasing interface, variant (in my opinion :)) less so.

I experimented with providing a non-throwing alternative interface with
optional, but value and nothing aren't usually the states that you want,
rather value and error condition.

Daniel

Mr Flibble

unread,
Dec 18, 2019, 2:12:03 PM12/18/19
to
On 18/12/2019 16:26, bol...@nowhere.co.uk wrote:
> On Tue, 17 Dec 2019 08:32:21 -0800 (PST)
> =?UTF-8?B?w5bDtiBUaWli?= <oot...@hot.ee> wrote:
>> On Tuesday, 17 December 2019 11:37:49 UTC+2, bol...@nowhere.co.uk wrote:
>>> Rubbish. Optional is just a lazy inefficient way of checking valid return
>>> object values. Any decently designed class thats going to have many instances
>>
>>> thrown around should have a valid flag or some other marker showing whether
>> it
>>> can be used so optional not required.
>>
>> "Invalid"/"Incomplete"/"Half-received"/"Erred" states of objects are the common
>>
>> source of errors. You are rare fan of iostreams failbit-badbit garbage it
>> seems?
>
> I never use iostreams unless I have no choice. They're just bloat inbetween
> my code and the OS I/O system.
>
> My point is any class likely to be flung around a program as instances usually
> has some kind of flag designating whether an instance is valid. Personally
> however I prefer pointers.
>
>>> And if you're dealing with pointers just
>>> return NULL.
>>
>> Per-object dynamic memory management is common source of inefficiency.
>
> And you think creating an instance of optional for a function return just to
> signify a valid return value and then having to call a method to get the actual
> contained instance is efficient?

How thick are you? That isn't using optionals properly; you really are clueless.

Daniel

unread,
Dec 18, 2019, 2:23:49 PM12/18/19
to
On Wednesday, December 18, 2019 at 2:12:03 PM UTC-5, Mr Flibble wrote:
> On 18/12/2019 16:26, bol...@nowhere.co.uk wrote:
> >
> > And you think creating an instance of optional for a function return just >> to signify a valid return value and then having to call a method to get the >> actual contained instance is efficient?
>
> That isn't using optionals properly
>
You wouldn't use optional for a function return?

Daniel

Öö Tiib

unread,
Dec 18, 2019, 2:28:52 PM12/18/19
to
On Wednesday, 18 December 2019 18:26:54 UTC+2, bol...@nowhere.co.uk wrote:
> On Tue, 17 Dec 2019 08:32:21 -0800 (PST)
> =?UTF-8?B?w5bDtiBUaWli?= <oot...@hot.ee> wrote:
> >On Tuesday, 17 December 2019 11:37:49 UTC+2, bol...@nowhere.co.uk wrote:
> >> Rubbish. Optional is just a lazy inefficient way of checking valid return
> >> object values. Any decently designed class thats going to have many instances
> >
> >> thrown around should have a valid flag or some other marker showing whether
> >it
> >> can be used so optional not required.
> >
> >"Invalid"/"Incomplete"/"Half-received"/"Erred" states of objects are the common
> >
> >source of errors. You are rare fan of iostreams failbit-badbit garbage it
> >seems?
>
> I never use iostreams unless I have no choice. They're just bloat inbetween
> my code and the OS I/O system.
>
> My point is any class likely to be flung around a program as instances usually
> has some kind of flag designating whether an instance is valid. Personally
> however I prefer pointers.

What is the point of flinging that invalid stuff around? That is another
typical performance drain, waste of storage and source of errors.
Pass a reference to the good stuff if you have it and call an overload
without that parameter when you don't have it.

> >> And if you're dealing with pointers just
> >> return NULL.
> >
> >Per-object dynamic memory management is a common source of inefficiency.
>
> And you think creating an instance of optional for a function return just to
> signify a valid return value and then having to call a method to get the actual
> contained instance is efficient?

Mr Flibble was correct, it is a clueless response. Optional is not good
for that. Prefer exceptions as signals of failures. When failure is
common (and it matters) use std::variant<Value, FailureDetails> as an
alternative to exceptions. Optional is good for *storing* values that
may be missing (or acquired lazily when needed) without dynamic memory
management and loss of locality.
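
A minimal sketch of that alternative (the Config/ParseError names are
illustrative):

    #include <string>
    #include <variant>

    struct Config     { std::string name; };
    struct ParseError { int line; std::string what; };

    std::variant<Config, ParseError> parse(const std::string& text)
    {
        if (text.empty())
            return ParseError{0, "empty input"};  // common failure, no throw
        return Config{text};
    }

    // auto r = parse(input);
    // if (auto* cfg = std::get_if<Config>(&r)) { /* use *cfg */ }
    // else { /* report std::get<ParseError>(r).what */ }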

> >> Non POD types should never be unioned unless memory space is so critical
> >that
> >> there is no other option. In which case why are you using C++ in the first
> >> place?
> >
> >Because C++ is often more efficient than alternatives and can union non-pod
> >types in variant. If C++ can't do it then there are no chance to do better job
> >with your Perl ... or what you program there.
>
> I don't use Perl.

Doesn't matter. You expressed the opinion that C++ is the wrong tool when
memory space is critical. So I just imagined that maybe some Perl
programmer could say such nonsense.

>
> >> And a standard union is fine for PODs and in all sane use cases for
> >> a union you HAVE to know what types you're dealing with as you're probably
> >> dealing with very low level operations so leaving it to the compiler makes
> >> no sense.
> >
> >PODs are useful only for dealing with those low level operations. Most
> >embedded controllers have tens of kilobytes of RAM (since it just costs
> >nothing) and so are often expected to be quite smart these days.
>
> No all of them. Some PICs have < 1K of RAM.
>
> Anyway, when someone shows me a sane use case of unionising non POD types
> perhaps I'll see variant as something other than just more noise in the C++
> spec.

Huh? PODs are really only useful for low level operations with data of
precise memory layout. In all other cases a class with constructors,
destructors, private members, virtual member functions and/or non-POD
data members is more handy. Don't you use classes? A union
of classes can be a great performance optimization.