
A "better" C++


Juha Nieminen

Aug 19, 2015, 4:46:50 AM
Consider this quote from Rob Pike, one of the developers of the Go
programming language, which he gave in a speech in 2012:

"Back around September 2007, I was doing some minor but central work
on an enormous Google C++ program, one you've all interacted with, and
my compilations were taking about 45 minutes on our huge distributed
compile cluster. An announcement came around that there was going to
be a talk presented by a couple of Google employees serving on the C++
standards committee. They were going to tell us what was coming in
C++11. In the span of an hour at that talk we heard about something
like 35 new features that were being planned. At this point I asked
myself a question: Did the C++ committee really believe that what was
wrong with C++ was that it didn't have enough features?"

I'm really wondering what exactly he's suggesting there.

But this is quite indicative of a rather long-running common trend.
It was so in 1995, and it is still so in 2015: People just love to hate
C++, and to try to create a "better" C++. They always find millions of
faults in C++, and then try to create a better version with all the
bad things removed, and then they pat themselves on the back for having
created a language that's so much better.

Of course the first thing to go is templates. Because, you know,
templates are evil and stuff. And thus you have no generic data
containers and no generic algorithms, and you go back to the days
of C (or, as is the case with Go, back to the 90's Java, where
containers have no type-safety to speak of, and you have to rely
on unsafe downcasting). Of course since generic data containers are
too useful to not have them, they eventually end up implementing a
limited form of templates just for that purpose (which is, of course,
"better" than templates. Because reasons. Perhaps because they are
not called "templates", as that word is evil.)

And the other thing that also obviously needs to go is multiple
inheritance. Because, you know, MI is evil and scary, and an invention
of the devil. But since multiple inheritance is so fundamental to
OO programming, they can't remove it completely, so they provide a
crippled version of it, where only one base class can have method
implementations and member variables. (Which, of course, means that
if an "interface" would benefit from a default implementation, you
are out of luck; you'll have to resort to code repetition.) This
half-assed implementation is, obviously, "better" than MI. Because
reasons. (And again, because it's not named "multiple inheritance",
even though it really is. That name is evil, so if we avoid it,
then we are A-ok.)

Of course in our "better" C++ objects can only be allocated dynamically.
Because that allows garbage collection and stuff. That's nice. Except
for the fact that memory is not the only resource that a class could
allocate (other common examples are file handles and sockets.) Thus
we end up with a C-like language where you have to manually release
those other resources or they may be leaked, just like memory in C.
(Then of course they will provide limited automation eg. in the form
of automatic destruction at the end of a code block. Which you usually
have to remember to write explicitly anyway, and it can only be used
within one single code block and doesn't work with shared objects
which may be destroyed somewhere else entirely. But it's still better
than C++! That's the most important part.) Then of course you have some
other more minor issues with reference-only objects (such as it being
more difficult to swap the contents of the objects), but those are not
important. And we don't mind that dynamically allocated objects tend
to consume more memory. RAM is cheap, just buy more of it.

And the fact that our "better" C++ compiles ten times faster than C++
is always something to brag about.

Yet, somehow C++ persists. There have been probably at least two dozen
"better" C++s during the last 20 years. A couple of them have been
moderately successful; the vast majority of them have been forgotten.
But they are still better than C++, dammit!

--- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---

Miquel van Smoorenburg

Aug 19, 2015, 5:12:31 AM
In article <mr1fpd$30ur$1...@adenine.netfront.net>,
Juha Nieminen <nos...@thanks.invalid> wrote:
>But this is quite indicative of a rather long-running common trend.
>It was so in 1995, and it is still so in 2015: People just love to hate
>C++, and to try to create a "better" C++.

https://www.rust-lang.org/

Mike.

Öö Tiib

Aug 19, 2015, 6:35:01 AM
On Wednesday, 19 August 2015 11:46:50 UTC+3, Juha Nieminen wrote:
>
> But this is quite indicative of a rather long-running common trend.
> It was so in 1995, and it is still so in 2015: People just love to hate
> C++, and to try to create a "better" C++.

It means that C++ is a tremendously inspiring programming language. ;-)
We ourselves create a better C++ as well. We just do it differently, by
simply using an individual favorite subset of C++ and avoiding an
individually disliked subset of C++.
You must admit that it takes a considerable amount of time to find out
which details of C++ you like and which you don't, and then to cut the
language into a shape useful for your purposes like that.

Juha Nieminen

Aug 19, 2015, 9:28:34 AM
Stefan Ram <r...@zedat.fu-berlin.de> wrote:
> In 1995, C++95 was a different language. It was still
> immature. C++14 is better than C++95, maybe even better
> than C.

Was there any time in history where C++ was actually worse than C?

Scott Lurndal

Aug 19, 2015, 10:18:16 AM
Juha Nieminen <nos...@thanks.invalid> writes:
>Consider this quote from Rob Pike, one of the developers of the Go
>programming language, which he gave in a speech in 2012:
>
>"Back around September 2007, I was doing some minor but central work
>on an enormous Google C++ program, one you've all interacted with, and
>my compilations were taking about 45 minutes on our huge distributed
>compile cluster. An announcement came around that there was going to
>be a talk presented by a couple of Google employees serving on the C++
>standards committee. They were going to tell us what was coming in
>C++11. In the span of an hour at that talk we heard about something
>like 35 new features that were being planned. At this point I asked
>myself a question: Did the C++ committee really believe that what was
>wrong with C++ was that it didn't have enough features?"
>
>I'm really wondering what exactly he's suggesting there.

Rob has a long history in the Unix world (e.g. SAM), and I tend to agree with
him on this point - "Did the C++ committee really believe that what

Bo Persson

Aug 19, 2015, 11:30:59 AM
On 2015-08-19 15:49, Stefan Ram wrote:
> Juha Nieminen <nos...@thanks.invalid> writes:
>> Stefan Ram <r...@zedat.fu-berlin.de> wrote:
>>> In 1995, C++95 was a different language. It was still
>>> immature. C++14 is better than C++95, maybe even better
>>> than C.
>> Was there any time in history where C++ was actually worse than C?
>
> It depends on what one wants to do with it.
>
> Early C++ did not yet have the STL or possibly not even it's
> own string class, and when exceptions first came up, people
> did not yet know how to write exception-safe code or did not
> care. And even today, C++ is slower than C in the »programming
> language shootout« (the last time I looked it up).
>

Do you have any results newer than 2003?

http://dada.perl.it/shootout/


Bo Persson

Paavo Helde

Aug 19, 2015, 1:01:20 PM
Juha Nieminen <nos...@thanks.invalid> wrote in news:mr1fpd$30ur$1
@adenine.netfront.net:

> Consider this quote from Rob Pike, one of the developers of the Go
> programming language, which he gave in a speech in 2012:
> "Did the C++ committee really believe that what was
> wrong with C++ was that it didn't have enough features?"

Adding new features is about the only thing which can be done to a
programming language without breaking backward compatibility. So that's
the only thing the C++ committee could really do, regardless of what
they believed in.

Cheers
Paavo

Marcel Mueller

Aug 19, 2015, 1:03:49 PM
On 19.08.15 10.46, Juha Nieminen wrote:
[...]
> Yet, somehow C++ persists. There have been probably at least two dozens
> of "better" C++'s during the last 20 years. A couple of them have been
> moderately successful, the vast majority of them have been forgotten.
> But they are still better than C++, dammit!

:-))
I agree with you mostly.

But I have to admit that implementing type erasure is quite inconvenient
in C++, although it is an efficient solution in many cases. Of course,
you can write your type-independent base class and wrap it with a
type-safe wrapper that does the downcasts. But this is quite a lot of
work every time, and also error-prone to some degree.

And well, the iostream API is really crap. It looks mainly like a
demonstration of operator overloading from the early 90s.

But I still like C++ and especially the STL containers. RAII is really
helpful, and as long as you avoid dealing with raw pointers (including
char*) the code is almost as safe as in managed languages with more
metadata.

Looking into the future, I think that the absence of a meta-language
between the source and the binary could be a problem for C++. The days
of the homogeneous and mostly compatible x86/x64 architecture are over,
and C++ source code distributions are unacceptable for several reasons,
first of all maintainability of the installation procedure. Many CPU
features are unused most of the time because of these restrictions. A
JIT compiler, in contrast, could utilize almost any feature of the
target hardware. This is a win on points for JIT languages.
So I think there is a need for common standards above the operating
system and hardware layer.


Marcel

Lynn McGuire

Aug 19, 2015, 1:30:13 PM
All I want from C++ is a standard window-based user interface toolkit
that I can find on any platform. And yes, wxWidgets
( https://www.wxwidgets.org/ ) is nice, but until it is released on all
platforms it is not the answer.

Lynn

Öö Tiib

Aug 19, 2015, 1:53:00 PM
The Qt framework, which technically extends C++ (but adds way less
semantic garbage than Objective-C++), works on all desktops plus iOS
and Android. If GUI widgets are all you need, then there are tons of
other classes in Qt that you don't need. Just ignore those and all
remains nice.

Ian Collins

Aug 19, 2015, 3:17:08 PM
Stefan Ram wrote:
> Juha Nieminen <nos...@thanks.invalid> writes:
>> Stefan Ram <r...@zedat.fu-berlin.de> wrote:
>>> In 1995, C++95 was a different language. It was still
>>> immature. C++14 is better than C++95, maybe even better
>>> than C.
>> Was there any time in history where C++ was actually worse than C?
>
> It depends on what one wants to do with it.

Early C++ had destructors. If that had been all it offered, it still
would have been a better C!

--
Ian Collins

Chris Vine

Aug 19, 2015, 4:31:41 PM
On Wed, 19 Aug 2015 08:46:39 +0000 (UTC)
Juha Nieminen <nos...@thanks.invalid> wrote:
> Consider this quote from Rob Pike, one of the developers of the Go
> programming language, which he gave in a speech in 2012:
>
> "Back around September 2007, I was doing some minor but central work
> on an enormous Google C++ program, one you've all interacted with, and
> my compilations were taking about 45 minutes on our huge distributed
> compile cluster. An announcement came around that there was going to
> be a talk presented by a couple of Google employees serving on the C++
> standards committee. They were going to tell us what was coming in
> C++11. In the span of an hour at that talk we heard about something
> like 35 new features that were being planned. At this point I asked
> myself a question: Did the C++ committee really believe that what was
> wrong with C++ was that it didn't have enough features?"
>
> I'm really wondering what exactly he's suggesting there.

He is suggesting by implication that it did not need more features.
It is a shame - having to acquire new knowledge can be uncomfortable -
but unfortunately he was wrong: it did need them. It needed move
semantics, variadic templates, syntax for inline anonymous functions
(lambdas) and a memory model for multiple threads of execution.

In consequence of the inclusion of these in C++11/14, C++11/14 has been
a significant success.

[snip]
> Of course the first thing to go is templates. Because, you know,
> templates are evil and stuff. And thus you have no generic data
> containers and no generic algorithms, and you go back to the days
> of C (or, as is the case with Go, back to the 90's Java, where
> containers have no type-safety to speak of, and you have to rely
> on unsafe downcasting). Of course since generic data containers are
> too useful to not have them, they eventually end up implementing a
> limited form of templates just for that purpose (which is, of course,
> "better" than templates. Because reasons. Perhaps because they are
> not called "templates", as that word is evil.)

C++ templates are a strange beast and I can understand people's
aversion to them. They are almost undebuggable when used for
compile-time programming. C++ can never match the macro systems of the
homoiconic languages, but the introduction of type classes (concepts) is
essential and will be delivered, and combining that with an extension of
constexpr for compile-time computation would certainly help.

Chris

woodb...@gmail.com

Aug 19, 2015, 5:03:50 PM
On Wednesday, August 19, 2015 at 3:46:50 AM UTC-5, Juha Nieminen wrote:
> Consider this quote from Rob Pike, one of the developers of the Go
> programming language, which he gave in a speech in 2012:
>
> "Back around September 2007, I was doing some minor but central work
> on an enormous Google C++ program, one you've all interacted with, and
> my compilations were taking about 45 minutes on our huge distributed
> compile cluster. An announcement came around that there was going to
> be a talk presented by a couple of Google employees serving on the C++
> standards committee. They were going to tell us what was coming in
> C++11. In the span of an hour at that talk we heard about something
> like 35 new features that were being planned. At this point I asked
> myself a question: Did the C++ committee really believe that what was
> wrong with C++ was that it didn't have enough features?"
>
> I'm really wondering what exactly he's suggesting there.
>

He's fighting for his own vision of things. I don't
blame him for that. He has some ideas and has spent
time working on them. Probably he's looking for some
people to drop C++ and use Go. That's fine, but I
don't think Go is perfect and people may find that C++
has advantages over it and encourage others to quit
using Go.

> But this is quite indicative of a rather long-running common trend.
> It was so in 1995, and it is still so in 2015: People just love to hate
> C++, and to try to create a "better" C++. They always find millions of
> faults in C++, and then try to create a better version with all the
> bad things removed, and then they pat themselves in the back for having
> created a language that's so much better.

I find many faults with C++ to this day.
It's normal part of business to try to knock off the king
of the hill. Thankfully C++ keeps doing fine.

Brian
Ebenezer Enterprises - "We few, we happy few, we band of brothers."
http://webEbenezer.net

jacobnavia

Aug 19, 2015, 5:30:26 PM
Le 19/08/2015 10:46, Juha Nieminen a écrit :
> Consider this quote from Rob Pike, one of the developers of the Go
> programming language, which he gave in a speech in 2012:
>
> "Back around September 2007, I was doing some minor but central work
> on an enormous Google C++ program, one you've all interacted with, and
> my compilations were taking about 45 minutes on our huge distributed
> compile cluster. An announcement came around that there was going to
> be a talk presented by a couple of Google employees serving on the C++
> standards committee. They were going to tell us what was coming in
> C++11. In the span of an hour at that talk we heard about something
> like 35 new features that were being planned. At this point I asked
> myself a question: Did the C++ committee really believe that what was
> wrong with C++ was that it didn't have enough features?"

It is actually not the compilation time that is infuriating about this
situation. No, not at all.

It is the fact that in this newsgroup, when confronted with a piece of
code, nobody knows how to read it, and the only advice is to... compile
it, of course.

What does gcc say?

There are so many variables and algorithms behind each symbol and token
in C++ code that it is completely impossible to read it without a
machine.

For instance, the specifications for the interactions of classes and
operator overloading and overload resolution go on for PAGES and PAGES
in the last edition of the C++ standard that I have read.

You need to do a topological sort of the classes and other
characteristics of the given types/objects. You need to have all those
pages of specifications in your mind when doing the overload resolution.

No wonder even expert C++ programmers just *think* that they understood
which piece of software will be called when writing their code;
sometimes you just did not see an interaction between the types in some
situation, and you end up calling the wrong overload.

This type of problem is difficult to solve, especially when it appears
after the guy who wrote the code has gone away, and the whole system
was running smoothly until this wrong overload appeared through a
header file inclusion, in a new class that was added after the author
left, etc.

Or worse, gcc changed its mind and what you wrote changed meaning or was
deemed obsolete. C++ is a moving target, and upgrading the compiler
always brings new surprises. Mostly benign, yes, but not always...

Yes, new features are nice to have, and anyway, everybody is *adding*
stuff, writing new code, etc. The committee also. And they do their job
and write new specs.

Is this justified?

I wrote a clone of the STL in C. A generic clone, using C macros. Yes,
everybody told me that doing that is impossible, but actually... C
macros can take you quite far. I added vectors, lists, hash tables
(C++ added those shortly afterwards), implemented the visitor pattern;
you have iterators, etc. All in C.

This is an interesting fact. I repeat:

Is this complexity justified?

What is nice about C is that the risk of ambiguity is much reduced. C is
much easier to debug and very stable.

And please, there is no free lunch... C has its drawbacks too.

Christopher Pisz

Aug 19, 2015, 6:00:14 PM
Because debugging C macros is even more fun...



--
I have chosen to troll filter/ignore all subthreads containing the
words: "Rick C. Hodgins", "Flibble", and "Islam"
So, I won't be able to see or respond to any such messages
---

jacobnavia

Aug 19, 2015, 6:03:33 PM
Le 19/08/2015 22:31, Chris Vine a écrit :
> C++ templates are a strange beast and I can understand people's
> aversion to them. They are almost unbuggable when used for
> compile-time programming. C++ can never match the macro systems of the
> homoiconic languages but the introduction of type classes (concepts) is
> essential and will be delivered, and combining that with an extension of
> constexpr for compile time computation would certainly help.

That is a big step in the wrong direction. It means that a new
"concepts" hierarchy is created that makes the compiler yet a bit more
complex.

What is needed is a programming language for types that is event-driven.
Instead of putting everything in the compiler, you put the decisions
back to the people that know what they are doing:

The programmers.

You open up the compiler and standardize its compile-time environment.
You can write programs in the meta-language and debug them with the
compiler debugger.

And what is this "meta-language"?

Well obviously...

C.

One of the advantages of using that language for the meta-language is
that all C++ programmers know it.

:-)

The compiler sees a stream of tokens, symbols, and grammatical EVENTS.

Remember GUI programming, which brought us this event-driven style,
where you just subclass a feature and add your stuff to it.

When, for instance, an event like "function begin" is detected by the
stream reader (gcc/msvc/whatever), a C routine that the user specifies
is called. It can access all compiler data in all active scopes at the
point of call.

This compile-time function, written in C, can generate C++ code that is
inserted at that point, or injected later into the token stream.

Or it can directly call a set of APIs to do things like add a local
variable, or create a new data type.

This would be *really* something new.

The list of compiler events is fixed, and the compiler provides a
standardized API.

--> Function/Method entry or exit
--> Statement begin or end
--> Function call

and others.

This way you can also implement team POLICIES that are explicitly
formulated.

The templates offer a programming language, but it is a very cumbersome
and difficult one. Much better would be writing meta-programs in C,
with all the information about the source code that you need to adapt a
library template to a specific situation.

Of course there are a myriad of problems that could arise, like
security of the compiler (the OS can be taken down by a device driver,
as we all know), debugging of meta-programs, specifying too much of the
environment, etc.

For instance, you can subclass the syntax error event. Then you can
write whatever you want between some markers that you define, and C++
can be outdone by code written in some language some programmer
invented. That would be worse than the templates that we have now!

I am aware of those dangers, but, as always, with more power to the
programmer comes more possibilities for catastrophic errors.

This idea means that each C/C++ programmer could just subclass the
compiler itself and adapt the language to his/her needs directly.

If the interface is restricted and standardized, it would be much safer
to use.

C++ reloaded.

:-)

P.S. Why C?

Because when writing meta-programs we do not have meta-classes of types.
Maybe they could be added later, when we start screwing everything up again.



jacobnavia

Aug 19, 2015, 6:11:07 PM
Le 20/08/2015 00:00, Christopher Pisz a écrit :
> debugging C macros is even more fun...

Don't know.

struct ITERATOR(DATA_TYPE) {
    Iterator it;
    VECTOR_TYPE *L;
    size_t index;
    unsigned timestamp;
    unsigned long Flags;
    DATA_TYPE *Current;
    DATA_TYPE ElementBuffer;
    int (*VectorReplace)(struct _Iterator *, void *data, int direction);
};

Is that too much for you?

Any error in the arguments (DATA_TYPE, whatever) will be reported clearly.

Mr Flibble

Aug 19, 2015, 6:48:53 PM
On 19/08/2015 14:49, Stefan Ram wrote:
> Juha Nieminen <nos...@thanks.invalid> writes:
>> Stefan Ram <r...@zedat.fu-berlin.de> wrote:
>>> In 1995, C++95 was a different language. It was still
>>> immature. C++14 is better than C++95, maybe even better
>>> than C.
>> Was there any time in history where C++ was actually worse than C?
>
> It depends on what one wants to do with it.
>
> Early C++ did not yet have the STL or possibly not even it's
> own string class, and when exceptions first came up, people
> did not yet know how to write exception-safe code or did not
> care. And even today, C++ is slower than C in the »programming
> language shootout« (the last time I looked it up).

C++ is not slower than C; in fact, the opposite is true.

/Flibble


Chris Vine

Aug 19, 2015, 7:08:55 PM
On Wed, 19 Aug 2015 23:30:13 +0200
jacobnavia <ja...@jacob.remcomp.fr> wrote:
[snip]
> I wrote a clone of the STL in C. A generic clone, using C macros.
> Yes, everybody told me that doing that is impossible but actually...
> C macros can take you quite far. I added vectors, lists, hash tables
> (C++ added that shortly afterwards), implemented the visitor pattern,
> you have iterators, etc. All in C.
>
> This is an interesting fact. I repeat:
>
> Is this complexity justified?
>
> What is nice about C is that the risk of ambiguity is much reduced. C
> is much easier to debug and very stable.
>
> And please, there is no free lunch... C has its drawbacks too.

It most certainly does. The main problem with using C macros in the
way you appear to have done is that they are completely unhygienic.
They both inject names into the user code and accept names from the
scope in which they execute. You have to resort to writing renaming
conventions for macros by hand, which in C is impossible to do in an
automated way because (to the best of my knowledge) the C preprocessor
offers no facilities for it.

Chris

BGB

Aug 20, 2015, 12:19:47 AM
On 8/19/2015 3:46 AM, Juha Nieminen wrote:
yes, 'generics'...

potentially though, a more expanded form of type inference could be
another possible option (say, rather than using templates, you could
have 'auto' work across function and method calls, or within class
members, ...). granted, this won't really buy a whole lot (it is more
just redistributing complexity, rather than eliminating it).

in such a case, classes and methods are internally specialized based on
how they are used, so calling a method with a given parameter list will
cause the compiler to generate a version of the method which accepts
those parameters, working on a version of the class with the members
using the inferred type. the compiler may still reject code for which
"impossible" situations arise in the type inference (an inferred
variable being used as inconsistent types).


> And the other thing that also obviously needs to go is multiple
> inheritance. Because, you know, MI is evil and scary, and an invention
> of the devil. But since multiple inheritance is so fundamental to
> OO programming, they can't remove it completely, so they provide a
> crippled version of it, where only one base class can have method
> implementations and member variables. (Which, of course, means that
> if an "interface" would benefit from a default implementation, you
> are out of luck; you'll have to resort to code repetition.) This
> half-ass implementation is, obviously, "better" than MI. Because
> reasons. (And again, because it's not named "multiple inheritance",
> even though it really is. That name is evil, so if we avoid it,
> then we are A-ok.)
>

I personally suspect this is more about simplifying the implementation
than about simplifying use of the language.

MI (as it works in C++) is a fair bit more complicated at the
implementation level than SI+interfaces. class layouts are append-only
with SI, and a lot of hairy edge cases and funky semantics in the
inheritance tree are simply not allowed.

never mind that gluing together a mostly no-op base class with a whole
bunch of interfaces providing default methods almost may as well be MI.

> Of course in our "better" C++ objects can only be allocated dynamically.
> Because that allows garbage collection and stuff. That's nice. Except
> for the fact that memory is not the only resource that a class could
> allocate (other common examples are file handles and sockets.) Thus
> we end up with a C-like language where you have to manually release
> those other resources or they may be leaked, just like memory in C.
> (Then of course they will provide limited automation eg. in the form
> of automatic destruction at the end of a code block. Which you usually
> have to remember to write explicitly anyway, and it can only be used
> within one single code block and doesn't work with shared objects
> which may be destroyed somewhere else entirely. But it's still better
> than C++! That's the most important part.) Then of course you have some
> other more minor issues with refence-only objects (such as it being
> more difficult to swap the contents of the objects), but those are not
> important. And we don't mind that dynamically allocated objects tend
> to consume more memory. RAM is cheap, just buy more of it.
>

yeah, this is one area where a lot of languages have fallen on their
face.

some languages have fudged it; for example (mostly in embedded land),
there are implementations of Java which remove the GC and sort of
kludge on manual memory management.


my own languages have basically just ended going in similar directions
to C++ on this front (using a mix of manual memory management and RAII
like patterns). but, they aren't really meant to replace C++ (rather
more for use in a niche use-case).


> And our "better" C++ compiling ten times faster than C++ is always
> something to brag about.
>

I suspect C and C++ really need some sort of good and standardized
precompiled header mechanism here. usually compilers cripple it by
either making it stupid ("stdafx.h" in MS land), or by having arbitrary
limitations in the name of making it behave exactly as the non-PCH case
(it only works provided each compilation using it has exactly the same
sequence of #include directives, or similar).

IMO, "better" would be allowing it to be partially undefined how things
like preprocessor defines work across PCH boundaries (use of the PCH
mechanism in headers would be explicit and would essentially opt out of
being sensitive to preceding preprocessor definitions). the header
would then contain some magic to tell the compiler "hey, it is safe to
use me as a self-contained PCH".


> Yet, somehow C++ persists. There have been probably at least two dozens
> of "better" C++'s during the last 20 years. A couple of them have been
> moderately successful, the vast majority of them have been forgotten.
> But they are still better than C++, dammit!
>

this is because they either seriously screw something up, or aim their
claims too high (promoting their language as the new next best thing,
rather than as a language intended for a specific set of application
domains).


say, I have a scripting language which is aimed mostly at real-time
embedded on "medium end" 32-bit targets (MB of RAM and Flash and maybe
100s of MHz), while mostly overlooking lower-end 32-bit targets (kB of
RAM and Flash), or 8/16-bit targets (such as the MSP430 or AVR8).
meanwhile, it also loses out to its older "sibling" if doing basically
similar stuff on a PC (this older sibling basically doing things more
useful for use-cases on a PC).

it then exists mostly as a way to gloss over asynchronous execution on
top of an event-driven cooperative scheduler with lots of latency
constraints, which can otherwise get reasonably hairy in C and C++.


for something like an MSP430, there is really nothing a script-language
can offer over C, and ROM and RAM are enough of a premium that it
doesn't really make much sense to use a more complex runtime or an
interpreter. it is fairly cramped even with fairly trivial C programs,
like 500 lines and you have basically already used up the 2kB of ROM
space (in the MSP430G2232 or similar).


on a PC (or a higher-end ARM target), one generally doesn't need to deal
with tight latency constraints, and the RAM and address space aren't
particularly cramped, so one can afford using a bunch of RAM and a JIT
compiler.


but, saying that it mostly only makes sense on a range of ARM-based
controllers or similar is a lot less impressive-sounding than claiming
it is the new language which is good for everything and will overthrow
whatever came before.

other languages aim for business or desktop applications, with pretty
much the same sort of issues.


Chris M. Thomasson

unread,
Aug 20, 2015, 12:27:59 AM
to
> "Ian Collins" wrote in message news:d3k30t...@mid.individual.net...
lol!

:^D

jacobnavia

Aug 20, 2015, 2:40:00 AM
On 20/08/2015 01:08, Chris Vine wrote:
> On Wed, 19 Aug 2015 23:30:13 +0200
> jacobnavia <ja...@jacob.remcomp.fr> wrote:
> [snip]
>> I wrote a clone of the STL in C. A generic clone, using C macros.
>> Yes, everybody told me that doing that is impossible but actually...
>> C macros can take you quite far. I added vectors, lists, hash tables
>> (C++ added that shortly afterwards), implemented the visitor pattern,
>> you have iterators, etc. All in C.
>>
>> This is an interesting fact. I repeat:
>>
>> Is this complexity justified?
>>
>> What is nice about C is that the risk of ambiguity is much reduced. C
>> is much easier to debug and very stable.
>>
>> And please, there is no free lunch... C has its drawbacks too.
>
> It most certainly does. The main problem with using C macros in the
> way you appear to have done is that they are completely unhygienic.
> They both inject names into the user code, and accept names from the
> scope in which they execute.

Yes. They inject names like "ITERATOR" or "DATA_TYPE" ( as macro names)
and they produce composed names like Iterator_MyStructure. The DATA_TYPE
macro doesn't inject any names since it will be replaced by the type
name you are using for the vector.

> You have to resort to trying to write
> renaming conventions for macros by hand, but which in C is impossible
> to do in an automated way because (to the best of my knowledge) the C
> preprocessor offers no facilities to do so.

?

I do not follow that. What could happen with the example I gave:

Chris Vine

Aug 20, 2015, 5:42:48 AM
This is not the point. The issue is with the macros you have written
for constructing, manipulating and iterating over your vectors (you
said that you wrote a generic clone of the STL, using C macros). The
fact that C macros are unhygienic is both unarguable and notorious.

A similar issue potentially arises with (the much more capable) lisp-2
macros, but there you can 'gensym' your way out of name injection (a
rebinding to temporary symbols, although that still doesn't deal with
input names from the surrounding scope shadowing names used in the
macro, which lisp-2 deals with by putting function names and variable
names in separate namespaces and by its module/packaging systems).

Chris

BGB

Aug 20, 2015, 8:23:46 AM
Hmm:
one could add a preprocessor extension, say __gensym__, to try to
address this. Each time the preprocessor sees it, it generates a new
unique symbol name.


A person can then hack their own extended preprocessor into the build
process (in place of, or in addition to, the one supplied by the
compiler), but admittedly I haven't usually found this worthwhile (I
have only done it rarely; it is actually less common than running custom
tools which spit out C code/headers).

Some other extensions had included:
block macros;
the ability for block macros to emit subsequent directives;
computed expressions;
the ability to use #defines as preprocessor variables;
rudimentary scoping (local variables for macros);
...

In effect, looping and recursion were also possible in this
preprocessor, just a bit hairy looking.

Some of this was more inspired by macro systems from assemblers, though.


> Chris
>

Alf P. Steinbach

Aug 20, 2015, 7:10:16 PM
On 19.08.2015 10:46, Juha Nieminen wrote:
>
> [snip]
> And our "better" C++ compiling ten times faster than C++ is always
> something to brag about.
>
> Yet, somehow C++ persists. There have been probably at least two dozens
> of "better" C++'s during the last 20 years. A couple of them have been
> moderately successful, the vast majority of them have been forgotten.
> But they are still better than C++, dammit!

Well there is D and Rust.

But really the problem with compilation time is that today's compilers
use the 1960s batch build model.

Consider the extreme measure that some people take: including all their
source code files in a single file, which they compile. There are loads
of valid C++ source code sets that won't compile that way, mainly due to
name collisions (no compiler firewall here!), but still, since this
avoids the same header being processed in each and every translation unit, it
reportedly cuts so much off build times that people find it worth doing.

Consider now when the COMPILER does what that very fragile and dirty
technique is meant to achieve, namely compiling each header file only
once, like it does it with a precompiled header, for a specified set of
translation units. Since this does away with the possibility of headers
generating different code due to different preprocessor symbols or
options for different translation units, this is something the
programmer needs to OK either by opting in or (most naturally, since you
won't be fiddling with per-unit symbols when you specify a bunch of
files to compile) opting out, perhaps needing to explicitly suppress a
warning.

Consider further, that this compiler also skips trying to keep track of
recovery information, but just gives up on first detected error.

Well, there you have it, decent compilation times also for C++, I believe.

All it requires is ... new compilers. He he. :)


Cheers & hth.,

- Alf


Ian Collins

Aug 20, 2015, 7:44:38 PM
Alf P. Steinbach wrote:
> On 19.08.2015 10:46, Juha Nieminen wrote:
>>
>> [snip]
>> And our "better" C++ compiling ten times faster than C++ is always
>> something to brag about.
>>
>> Yet, somehow C++ persists. There have been probably at least two dozens
>> of "better" C++'s during the last 20 years. A couple of them have been
>> moderately successful, the vast majority of them have been forgotten.
>> But they are still better than C++, dammit!
>
> Well there is D and Rust.
>
> But really the problem with compilation time is that today's compilers
> use the 1960s batch build model.
>
> Consider the extreme measure that some people take: including all their
> source code files in a single file, which they compile. There are loads
> of valid C++ source code sets that won't compile that way, mainly due to
> name collisions (no compiler firewall here!), but still, since this
> avoids the same header being processed in each and every translation unit, it
> reportedly cuts so much off build times that people find it worth doing.

It sounds to me like the best way to blow out compile times and system
requirements. It may have been a valid hack a decade or so back, but
with decent tools on multi-core systems it's a really dumb idea.

> Consider now when the COMPILER does what that very fragile and dirty
> technique is meant to achieve, namely compiling each header file only
> once, like it does it with a precompiled header, for a specified set of
> translation units. Since this does away with the possibility of headers
> generating different code due to different preprocessor symbols or
> options for different translation units, this is something the
> programmer needs to OK either by opting in or (most naturally, since you
> won't be fiddling with per-unit symbols when you specify a bunch of
> files to compile) opting out, perhaps needing to explicitly suppress a
> warning.
>
> Consider further, that this compiler also skips trying to keep track of
> recovery information, but just gives up on first detected error.
>
> Well, there you have it, decent compilation times also for C++, I believe.
>
> All it requires is ... new compilers. He he. :)

:) Or faster computers!

--
Ian Collins

David Brown

Aug 21, 2015, 5:41:05 AM
On 21/08/15 01:10, Alf P. Steinbach wrote:
> On 19.08.2015 10:46, Juha Nieminen wrote:
>>
>> [snip]
>> And our "better" C++ compiling ten times faster than C++ is always
>> something to brag about.
>>
>> Yet, somehow C++ persists. There have been probably at least two dozens
>> of "better" C++'s during the last 20 years. A couple of them have been
>> moderately successful, the vast majority of them have been forgotten.
>> But they are still better than C++, dammit!
>
> Well there is D and Rust.
>
> But really the problem with compilation time is that today's compilers
> use the 1960s batch build model.
>
> Consider the extreme measure that some people take: including all their
> source code files in a single file, which they compile. There are loads
> of valid C++ source code sets that won't compile that way, mainly due to
> name collisions (no compiler firewall here!), but still, since this
> avoids the same header being processed in each and every translation unit, it
> reportedly cuts so much off build times that people find it worth doing.
>

As Ian says, this will not take much advantage of a multi-core system.

One reason I have seen for people joining their C++ files into one is to
get control of the initialisation order of file-scope objects. The
order is determined (by placement in the file) within a compilation
unit, but not across compilation units. It would be nice if there were
a good way to handle this without the mess of initialise-on-first-use.
gcc has attributes to handle this, but they are not perfect - and of
course they are not portable.

Another reason is for optimisation - either code space (because older or
more limited compilers can generate duplicate code for templates used in
multiple files), or for speed optimisation (allowing cross-module
optimisations). Link-time optimisation is a better way to handle that
now, at least in many cases.


> Consider now when the COMPILER does what that very fragile and dirty
> technique is meant to achieve, namely compiling each header file only
> once, like it does it with a precompiled header, for a specified set of
> translation units. Since this does away with the possibility of headers
> generating different code due to different preprocessor symbols or
> options for different translation units, this is something the
> programmer needs to OK either by opting in or (most naturally, since you
> won't be fiddling with per-unit symbols when you specify a bunch of
> files to compile) opting out, perhaps needing to explicitly suppress a
> warning.

I think the best way forward here is to make modules part of the C++
language. clang/llvm has a "module" extension - I haven't studied it
much, and don't know whether it is the best solution, but it is the
right sort of idea. Headers and/or source files should be compilable
into a re-usable block that can be accessed from other headers and
source files without re-compiling everything from scratch. This will
place restrictions on what can be in such headers - but that would avoid
the complications that precompiled headers have, such as the influence
the pre-processor can have on the code.

Scott Lurndal

Aug 21, 2015, 9:40:48 AM
David Brown <david...@hesbynett.no> writes:

>
>I think the best way forward here is to make modules part of the C++
>language. clang/llvm has a "module" extension - I haven't studied it
>much, and don't know whether it is the best solution, but it is the
>right sort of idea. Headers and/or source files should be compilable
>into a re-usable block that can be accessed from other headers and
>source files without re-compiling everything from scratch. This will
>place restrictions on what can be in such headers - but that would avoid
>the complications that precompiled headers have, such as the influence
>the pre-processor can have on the code.

You're describing the paradigm we used at Burroughs to build
the operating system (MCP) for a line of mainframes. The language
was called SPRITE (with algol-like and modula-like features) and
the central component was the Module Interface Description (MID)
which included all the types, constants, structure definitions,
global (and module local (i.e. file scope)) variables, module
definitions and function signatures (a la C++ classes).

The MID, once compiled, was used by the SPRITE compiler to resolve
all references when a module was subsequently compiled.

Centralization in this case had many advantages, and some disadvantages
(serialized access to the MID and full recompiles after MID changes;
and in those days a full build took overnight, even on our fastest
mainframe). This was before modern SCCS tools, so updates to the
MID needed to be carefully coordinated (done via patch decks) which
could be a significant bottleneck during the early development period
when the interfaces between modules were being designed.

Given modern memory sizes and the use of available memory as a file
cache by modern operating systems, for any project of size, the majority
of header files included more than once are cached in memory for the
duration of the compile. It takes a very large project (million+ LOC)
to have a substantial compile time on modern multicore desktop CPUs
using a make utility that supports parallelism (e.g. pmake, gmake).

With good dependency tools (e.g. makedepend), incremental compiles
are normal and full compiles rare today.

red floyd

Aug 21, 2015, 1:59:42 PM
On 8/21/2015 6:40 AM, Scott Lurndal wrote:
>
> You're describing the paradigm we used at Burroughs to build
> the operating system (MCP) for a line of mainframes. The language
> was called SPRITE (with algol-like and modula-like features) and
> the central component was the Module Interface Description (MID)
> which included all the types, constants, structure definitions,
> global (and module local (i.e. file scope)) variables, module
> definitions and function signatures (a la C++ classes).
>
> The MID, once compiled, was used by the SPRITE compiler to resolve
> all references when a module was subsequently compiled.
>

Back in '84, when I was interviewing (out of college), I interviewed
at Burroughs (in Burbank, CA). They were using SPRITE. By chance,
my portfolio had a fairly large Modula-2 component!

Wound up not getting the job (forget why).





Scott Lurndal

Aug 21, 2015, 3:58:08 PM
Hm.. I might have actually interviewed you then, depending on
which position you had applied for (I was in the MCP group).

We were in Pasadena, however, not Burbank. SPRITE was pretty much
used only in Pasadena (the MCP was written in SPRITE as were the
SPRITE compiler, SPRASM assembler and the LINKER).

scott

red floyd

Aug 21, 2015, 6:36:03 PM
May have been Pasadena, was a hell of a long time ago. I was
interviewing with the compiler group.

Juha Nieminen

Aug 24, 2015, 8:07:19 AM
Stefan Ram <r...@zedat.fu-berlin.de> wrote:
> And even today, C++ is slower than C in the »programming
> language shootout« (the last time I looked it up).

It's impossible for C++ to be slower than C, given that (almost) any
valid C program is also a valid C++ program. (Those "almost" exceptions
are not things that affect speed, mainly only syntax.)

The only reason compiling a C program as C++ would make it slower is
if the compiler does something stupid with it, but that's hardly the
language's problem. It's the compiler's problem.

--- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---

Juha Nieminen

Aug 24, 2015, 8:13:32 AM
jacobnavia <ja...@jacob.remcomp.fr> wrote:
> That is a big step in the wrong direction. That means that a new
> "concepts" hierarchy is created that complexifies the compiler yet a bit
> more.

Well, you have two choices:

1) Clear template syntax errors at the cost of slightly longer
compile times.

2) Slightly faster compile times at the cost of unclear template
syntax errors.

Which one do you prefer?

Juha Nieminen

Aug 24, 2015, 8:27:50 AM
jacobnavia <ja...@jacob.remcomp.fr> wrote:
> I wrote a clone of the STL in C. A generic clone, using C macros. Yes,
> everybody told me that doing that is impossible but actually... C macros
> can take you quite far. I added vectors, lists, hash tables (C++ added
> that shortly afterwards), implemented the visitor pattern, you have
> iterators, etc. All in C.

The biggest problem is that without RAII, using such data containers
becomes unwieldy and complex.

Because there is no RAII, you have to free the data containers explicitly.
This requirement is "inherited" by anything that uses such data containers.
For example, if you put such a data container in a struct, the struct
automatically "inherits" the need for manual construction and destruction.

With complex data structures, it can become really complicated to track
which structs require manual construction and destruction, and which don't.

And we all know how error-prone (and tedious) non-RAII manual destruction
can be. It's one of the biggest sources of bugs in C programs out there.

And of course you also have the problem of having data structures as
members of data structures (for example you have a struct containing
some dynamic data structure, and you need to create another dynamic
data structure containing elements of that struct type.) The data
structure needs to know how to destroy those elements appropriately.

The classical way of doing that is to add "member functions" to structs
for construction and destruction... as function pointers, which increase
the size of the struct, thus adding overhead. (In C++, member functions
incur no penalty on the size of the class or struct if they are not
virtual. Even if they are virtual, the penalty is a single pointer
regardless of the number of virtual member functions.)

And debugging macro-based dynamic data containers must be a joy.

David Brown

Aug 24, 2015, 8:27:58 AM
On 24/08/15 14:06, Juha Nieminen wrote:
> Stefan Ram <r...@zedat.fu-berlin.de> wrote:
>> And even today, C++ is slower than C in the »programming
>> language shootout« (the last time I looked it up).
>
> It's impossible for C++ to be slower than C, given that (almost) any
> valid C program is also a valid C++ program. (Those "almost" exceptions
> are not things that affect speed, mainly only syntax.)
>
> The only reason compiling a C program as C++ would make it slower is
> if the compiler does something stupid with it, but that's hardly the
> language's problem. It's the compiler's problem.
>

That is not entirely true. There are a few nuances that make a
small difference. For example, if a function bar() contains a call to
an external function foo(), the code generated for bar() must take into
account the possibility that foo() throws an exception. This can mean
different requirements for stack layout or restrictions to optimisations
and re-organisations that would not apply in the case of plain C.
Compilers have got very good at optimising in the face of exceptions,
and minimising their impact - but there can still be a non-zero overhead.

This is, I think, at least one reason why benchmark comparisons
regularly show C++ to be a percent or two slower than C for the same
code, typically compiled with the same compiler. Usually those couple
of percent are not important, of course, and the use of C++ gives many
other benefits - and such benchmarks say nothing about how the code
could have been written better or faster using C++ rather than C.

(I haven't tried look at this in detail myself - maybe I will do so if I
get the time.)


BGB

Aug 24, 2015, 10:59:43 AM
FWIW, in C, if you have a single method which may be shared among a
number of structs, it is typical to have another struct as a "vtable".

Then you would access it as:
obj->vt->Whatever(obj, ...);

Another variant is to have the object hold a double pointer to the
vtable, in which case it is:
(*obj)->Whatever(obj, ...);

The main reason to put the function pointers directly in the struct
would be if you want to be able to configure which functions are called
on a per-instance basis. All this depends mostly on what one is doing
with it.

Bo Persson

Aug 24, 2015, 11:01:16 AM
On 2015-08-24 14:27, Stefan Ram wrote:
> Juha Nieminen <nos...@thanks.invalid> writes:
>> Stefan Ram <r...@zedat.fu-berlin.de> wrote:
>>> And even today, C++ is slower than C in the »programming
>>> language shootout« (the last time I looked it up).
>> It's impossible for C++ to be slower than C, given that (almost) any
>> valid C program is also a valid C++ program. (Those "almost" exceptions
>> are not things that affect speed, mainly only syntax.)
>
> You are thinking of technical aspects in isolation.
>
> But programming also has psychological and social
> aspects.
>
> Assume that a group of C programmers and a group of
> C++ programmers is created. Assume that the judges
> try to be fair and to give both groups the same number
> of programmers with the same amount of experience as
> far as this is possible.
>
> Now, it /is/ possible that the solutions turned in
> by the C++ group run slower than the solutions turned
> in by the C group (for whatever reason).
>
> And this seemed to be what happened in the case of
> the programming language shootout the last time I
> looked at it IIRC.
>

From what I could see, the benchmarks were made up by taking a set of C
programs and converting them to C++ in such a way that they ran slower.
Not by malice it seems, just by incompetence.

For example, take a fixed size static array and replace it with a
std::vector. Then fill it with push_back. Now C++ is slower than C. :-)


Bo Persson

Wouter van Ooijen

Aug 24, 2015, 11:25:43 AM
On 24-Aug-15 at 5:10 PM, Stefan Ram wrote:
> Bo Persson <b...@gmb.dk> writes:
>> From what I could see, the benchmarks were made up by taking a set of C
>> programs and converting them to C++ in such a way that they ran slower.
>> Not by malice it seems, just by incompetence.
>> For example, take a fixed size static array and replace it with a
>> std::vector. Then fill it with push_back. Now C++ is slower than C. :-)
>
> Agreed, but this is possibly hinting at the possibility that
> it might be more difficult to become so competent in C++ that one
> can write code as fast as C code than to become competent in C.

But every C programmer starts out that competent just by *not* knowing
any C++!

So if you are performance-wise in a tight spot, and you are
- sure that the C solution will be good enough
- unsure about whether your C++ attempt might be better or not

the solution is simple and clear.

This of course sweeps aside that a reasonable C++ solution might be
- easier to write
- faster

Wouter van Ooijen

Bo Persson

Aug 24, 2015, 1:31:58 PM
On 2015-08-24 17:10, Stefan Ram wrote:
> Bo Persson <b...@gmb.dk> writes:
>> From what I could see, the benchmarks were made up by taking a set of C
>> programs and converting them to C++ in such a way that they ran slower.
>> Not by malice it seems, just by incompetence.
>> For example, take a fixed size static array and replace it with a
>> std::vector. Then fill it with push_back. Now C++ is slower than C. :-)
>
> Agreed, but this is possibly hinting at the possibility that
> it might be more difficult to become so competent in C++ that one
> can write code as fast as C code than to become competent in C.
>

It doesn't have to be. It could also mean that the benchmarks are
selected to contain things that are easy to do in C. What if they are hard?

This is an example from one of Bjarne's papers:

#include <vector>
#include <fstream>
#include <iterator>
#include <algorithm>
#include <string>

using namespace std;

int main(int argc, char* argv[])
{
    char* file = argv[1];   // input file name
    char* ofile = argv[2];  // output file name

    vector<string> buf;

    fstream fin(file, ios::in);
    string d;
    while (getline(fin, d)) buf.push_back(d); // add line to buf

    sort(buf.begin(), buf.end());

    fstream fout(ofile, ios::out);
    copy(buf.begin(), buf.end(), ostream_iterator<string>(fout, "\n"));
}


Now the challenge is:

"You have a file with an unknown number of text lines of variable
lengths. Read the lines and write them out sorted in another file.

Please do this in less than 10x the number of lines of C code, and make
it run faster."


Why do we never see anything like this?



You can't trust a benchmark you haven't rigged yourself. :-)


Bo Persson

Christopher Pisz

Aug 24, 2015, 3:23:29 PM
On 8/24/2015 1:28 PM, Stefan Ram wrote:
> Bo Persson <b...@gmb.dk> writes:
>> You have a file with an unknown number of text lines of variable
>> lengths. Read the lines and write them out sorted in another file.
>
> One needs a single helper function in C to do the hard work
> that is so general that I believe some library should
> already contain it:
>
> void * append
> ( void * array,
> size_t * pos,
> size_t * capacity,
> size_t compsize,
> void * component )
>
> for example:
>
> { size_t capacity = 10;
> int * a = malloc( capacity * sizeof *a );
> if( !a )abort();
> size_t position = 0;
> int value = 65;
>
> /* append value to a */
> a = append( a, &position, &capacity, sizeof *a, &value );
> if( !a )abort();
>
> ... }
>
> This is intended to append a value to the buffer a,
> reallocating if necessary.
>
> Using the preprocessor, we could get some more efficient
> »generic« implementations of »append« for specific component
> types.
>
> DEFINE_APPEND(append_int,int);
> DEFINE_APPEND(append_double,double);
>
> . The above call would then become
>
> a = append_int( a, &position, &capacity, value );
>
> I do not claim that this will be faster than the C++
> implementation. I am just thinking about how it will
> become possible at all in C.
>
> Another approach (that will not be efficient, however)
> would be to read the line sizes and number of
> lines on a first pass and write them into temporary (tmpnam)
> helper files. Then allocate the buffers from the sizes given
> in those helper files and do the rest of the work in a
> second pass. (You wrote »file« - not »stream«.)
>
> BTW: The original implementation of the famous UNIX
> »sort« utility should have been written in C, so one can
> have a look at this for a realistic implementation.
>


Can we use C++ in the C++ newsgroup?
malloc? really?
void * array?
void * return type?
void * component?
Macros?
abort?

How many bad habits can you teach someone with something as easy as
reading a file?

Scott Lurndal

Aug 24, 2015, 3:50:10 PM
#include <cstdlib>

int main(int argc, const char **argv)
{
    system("sort -d inputfile > outputfile");
}

:=)

Christopher Pisz

Aug 24, 2015, 3:57:22 PM
I realize you're intentionally writing craptastic C code for the sake of
yet another silly C++ vs C argument, but you cut the context out and now
noobs everywhere will be getting this post in their search for "C++ read
file"

jacobnavia

Aug 24, 2015, 7:15:12 PM
Using the CCL (C containers library) we have: (from memory)

#include <containers.h>

int main(void)
{
    strCollection *myData =
        strCollection.CreateFromFile("mydata.txt");
    strCollection.Sort(myData);
    strCollection.WriteToFile(myData, "mySortedData.txt");
    return 0;
}

The C containers library is written in C and features nice stuff. Fully
open source (no license) available at:

https://code.google.com/p/ccl/

Have fun!


Öö Tiib

Aug 24, 2015, 8:51:13 PM
On Monday, 24 August 2015 15:27:58 UTC+3, David Brown wrote:
> On 24/08/15 14:06, Juha Nieminen wrote:
> > Stefan Ram <r...@zedat.fu-berlin.de> wrote:
> >> And even today, C++ is slower than C in the »programming
> >> language shootout« (the last time I looked it up).
> >
> > It's impossible for C++ to be slower than C, given that (almost) any
> > valid C program is also a valid C++ program. (Those "almost" exceptions
> > are not things that affect speed, mainly only syntax.)
> >
> > The only reason compiling a C program as C++ would make it slower is
> > if the compiler does something stupid with it, but that's hardly the
> > language's problem. It's the compiler's problem.
> >
>
> That is not quite entirely true. There are a few nuances that make a
> small difference. For example, if a function bar() contains a call to
> an external function foo(), the code generated for bar() must take into
> account the possibility that foo() throws an exception.

Exceptions are expensive only when used wrongly, to signal quite
common situations. If foo() throws exceptions, but rarely (let's say
once in 20000 calls), then an exception is actually measurably cheaper
(and also results in cleaner code) than checking return values and
propagating them up the stack. That is because an exception is cheaper
than return-value checking in the good-case situation.

One issue with exceptions is when stack unwinding takes unacceptably
long but we need both to succeed and to fail fast. It is a rare
case, some critical real-time processing for example. For fixing that
we can still use return values in C++, as we do anyway with
non-exceptional edge cases.

> This can mean
> different requirements for stack layout or restrictions to optimisations
> and re-organisations that would not apply in the case of plain C.

That only applies when we are sure that there can't be any exceptional
failures. If foo() cannot throw exceptions, then we have to declare
foo() 'noexcept' in C++ to gain the same benefit.

> Compilers have got very good at optimising in the face of exceptions,
> and minimising their impact - but there can still be a non-zero overhead.
>
> This is, I think, at least one reason why benchmark comparisons
> regularly show C++ to be a percent or two slower than C for the same
> code, typically compiled with the same compiler. Usually those couple
> of percent are not important, of course, and the use of C++ gives many
> other benefits - and such benchmarks say nothing about how the code
> could have been written better or faster using C++ rather than C.
>
> (I haven't tried look at this in detail myself - maybe I will do so if I
> get the time.)

For whatever braindead reason, even destructors and extern "C"
functions are not made 'noexcept' by default. One could indicate with
'noexcept(false)' when there really is a throwing destructor or extern "C"
function, but the standard is backward there. IOW we have to use 'noexcept'
massively if we want to really get rid of that one percent as well.


Richard Damon

Aug 24, 2015, 10:19:25 PM
On 8/24/15 8:50 PM, Öö Tiib wrote:

> For whatever braindead reasons even the destructors and extern "C"
> functions are not made 'noexcept' by default. One could indicate with
> 'noexcept(false)' if there really is throwing destructor or extern "C"
> function but standard is backward there. IOW we have to use 'noexcept'
> massively if we want to get really rid of that one percent as well.
>
>

The issue is that there is nothing in the language to say that
destructors or extern "C" functions naturally can't throw.

We normally try not to have destructors throw, as a destructor throwing
in the midst of processing another exception can cause problems.

extern "C" functions may well be written in C++ or call C++ functions
which might throw, so we can't presume them not to throw.

The biggest issue is that the first version of C++ defaulted to letting
anything throw (and I think it would be a mistake to require all functions
that might throw to indicate this), and backwards compatibility makes it
hard to change that default.

Öö Tiib

Aug 25, 2015, 3:08:05 AM
On Tuesday, 25 August 2015 05:19:25 UTC+3, Richard Damon wrote:
> On 8/24/15 8:50 PM, Öö Tiib wrote:
>
> > For whatever braindead reasons even the destructors and extern "C"
> > functions are not made 'noexcept' by default. One could indicate with
> > 'noexcept(false)' if there really is throwing destructor or extern "C"
> > function but standard is backward there. IOW we have to use 'noexcept'
> > massively if we want to get really rid of that one percent as well.
> >
> >
>
> The issue is that there is nothing in the language to say that
> destructors or extern "C" functions naturally can't throw.

Nothing there is inherently natural. Consider that members and bases
of a class (but not a struct) are private by default. Why not
protected? Because the standard says private; nothing inherently
natural about it. The authors thought it was more convenient that way.
So the only question is: which default is more convenient,
'noexcept(false)' or 'noexcept(true)'? I would say that for destructors
and extern "C" functions the more natural default is 'noexcept(true)',
and for all the rest 'noexcept(false)'.
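
To make the analogy concrete, here is a minimal C++ sketch of the
class/struct access-default difference being referenced (the names are
hypothetical):

```cpp
class C { int x; };   // x is private by default: the standard's choice
struct S { int x; };  // same construct, different default: x is public

int demo() {
    S s{7};
    return s.x;       // fine; accessing C::x from here would not compile
}
```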

> We normally try not to have destructors throw, as a destructor throwing
> in the midst of processing another exception can cause problems.

So the normal and usual (IOW the default) is 'noexcept(true)'?

> extern "C" functions may well be written in C++ or call C++ functions
> which might throw, so we can't presume them not to throw.

However, the normal and usual (IOW the default) is 'noexcept(true)'?

> The biggest issue is that the first version of C++ defaulted to assuming
> that anything could throw (and I think it would be a mistake to require
> all functions that might throw to indicate this); backwards
> compatibility makes it hard to change that default.

In such obvious cases it is the fault of the backwards-compatibility
camp if they miss a unit test checking the case where their throwing
destructor or extern "C" function actually throws. Throwing C++
exceptions across a C interface or out of a destructor typically causes
nuisance, so if it really is intended then it should be explicitly
marked that way as well, IMHO.

Juha Nieminen

unread,
Aug 25, 2015, 4:16:31 AM8/25/15
to
BGB <cr8...@hotmail.com> wrote:
> FWIW, in C, if you have a single method which may be shared among a
> number of structs, it is typical to have another struct as a "vtable".
>
> then you would access it as:
> obj->vt->Whatever(obj, ...);

And then they say that C is "faster" than C++...

(Sure, that resembles virtual functions in C++, but a) in C++ you are
not forced to have every member function being virtual, which means
that the compiler elides the indirection, and may even be able to
inline the function, and b) even with virtual functions the compiler
may be able to perform optimizations that it can't do with an explicit
"virtual table" like that.)

jacobnavia

unread,
Aug 25, 2015, 4:28:28 AM8/25/15
to
On 24/08/2015 14:27, Juha Nieminen wrote:
> jacobnavia <ja...@jacob.remcomp.fr> wrote:
>> I wrote a clone of the STL in C. A generic clone, using C macros. Yes,
>> everybody told me that doing that is impossible but actually... C macros
>> can take you quite far. I added vectors, lists, hash tables (C++ added
>> that shortly afterwards), implemented the visitor pattern, you have
>> iterators, etc. All in C.
>
> The biggest problem is that without RAII, using such data containers
> becomes unyieldingly complex.
>
> Because there is no RAII, you have to free the data containers explicitly.
> This requirement is "inherited" by anything that uses such data containers.
> For example, if you put such a data container in a struct, the struct
> automatically "inherits" the need for manual construction and destruction.
>

Yes, you have to call the destructors explicitly. It is done using

container.Finalize // release all memory and the container itself
container.Clear // release all memory and reset the container


> With complex data structures, it can become really complicated to track
> which structs require manual construction and destruction, and which don't.
>

It is very simple. When you are finished with a container you should
call its Finalize method.

> And we all know how error-prone (and tedious) non-RAII manual destruction
> can be. It's one of the biggest sources of bugs in C programs out there.
>

Yes, but this is the price to pay to avoid C++'s complexity, which is
also a source of many bugs.

> And of course you also have the problem of having data structures as
> members of data structures (for example you have a struct containing
> some dynamic data structure, and you need to create another dynamic
> data structure containing elements of that struct type.) The data
> structure needs to know how to destroy those elements appropriately.
>

No. Data is always copied into the container, and the container manages
its own data space, so you just have to call its Finalize/Clear method.
There is NO data shared between containers.

> The classical way of doing that is to add "member functions" to structs
> for construction and destruction... as function pointers. Which increase
> the size of the struct, thus adding overhead.

No. The way it is done in the CCL is that you have a SINGLE function
pointer table (the *interface* object for each container) and you call
the functions in there. Note that you can at any moment substitute your
own functions for the given ones.

> (In C++ member functions
> incur no penalty on the size of the class or struct, especially if
> they are not virtual. Even if they are virtual, the penalty is that
> of one single pointer regardless of the number of virtual member
> functions.)
>

In the CCL too. No size penalty for your structures that do NOT store
any pointers at all.


> And debugging macro-based dynamic data containers must be a joy.
>

It is. The source code is available and easy to debug. No templates, no
problems debugging template code. It is funny how you ignore the
difficulty of debugging any STL container, or even JUST UNDERSTANDING
the error messages when necessary.

For instance here is an example of the C macros used:

static int Sort(VECTOR_TYPE *AL)
{
    CompareInfo ci;

    if (AL == NULL) {
        return NullPtrError("Sort");
    }
    ci.ContainerLeft = AL;
    ci.ExtraArgs = NULL;
    qsortEx(AL->contents, AL->count, AL->ElementSize,
            AL->CompareFn, &ci);
    return 1;
}

The only generic argument is the type name of the elements stored in the
vector: "VECTOR_TYPE".

Is this too much for you?

jacobnavia

unread,
Aug 25, 2015, 4:31:21 AM8/25/15
to
On 25/08/2015 10:16, Juha Nieminen wrote:
> BGB <cr8...@hotmail.com> wrote:
>> FWIW, in C, if you have a single method which may be shared among a
>> number of structs, it is typical to have another struct as a "vtable".
>>
>> then you would access it as:
>> obj->vt->Whatever(obj, ...);
>
> And then they say that C is "faster" than C++...
>

It is.

> (Sure, that resembles virtual functions in C++, but a) in C++ you are
> not forced to have every member function being virtual, which means
> that the compiler elides the indirection, and may even be able to
> inline the function,

Of course the C compilers could do exactly the same; nothing prevents a
C compiler from doing that optimization. They do not do it now because
there is not much use for it in C.

The first C++ compilers did not have that optimization either.

> and b) even with virtual functions the compiler
> may be able to perform optimizations that it can't do with an explicit
> "virtual table" like that.)
>

The C compilers could do that too.

Ian Collins

unread,
Aug 25, 2015, 4:36:19 AM8/25/15
to
jacobnavia wrote:
>
> It is. The source code is available and easy to debug. No templates, no
> problems debugging template code. It is funny how you ignore the
> difficulty of debugging any STL container, or even JUST UNDERSTANDING
> the error messages when necessary.

There is no problem debugging template code, it's just code. If you
want to avoid cryptic error messages, use a compiler other than gcc :)

--
Ian Collins

David Brown

unread,
Aug 25, 2015, 5:09:29 AM8/25/15
to
On 25/08/15 02:50, Öö Tiib wrote:
> On Monday, 24 August 2015 15:27:58 UTC+3, David Brown wrote:
>> On 24/08/15 14:06, Juha Nieminen wrote:
>>> Stefan Ram <r...@zedat.fu-berlin.de> wrote:
>>>> And even today, C++ is slower than C in the »programming
>>>> language shootout« (the last time I looked it up).
>>>
>>> It's impossible for C++ to be slower than C, given that (almost) any
>>> valid C program is also a valid C++ program. (Those "almost" exceptions
>>> are not things that affect speed, mainly only syntax.)
>>>
>>> The only reason compiling a C program as C++ would make it slower is
>>> if the compiler does something stupid with it, but that's hardly the
>>> language's problem. It's the compiler's problem.
>>>
>>
>> That is not quite entirely true. There are a few nuances that make a
>> small difference. For example, if a function bar() contains a call to
>> an external function foo(), the code generated for bar() must take into
>> account the possibility that foo() throws an exception.
>
> Exceptions are expensive only when used wrongly to signal quite
> common situations. If foo() throws exceptions, but rarely (let's say
> once in 20000 calls), then an exception is actually measurably cheaper
> (and also results in cleaner code) than checking return values and
> propagating those up the stack. That is because an exception is
> cheaper than return-value checking in the good-case situation.

That's all true. You can think of the exception and catching as being a
bit like checking for error return values, but heavily optimised for the
"no error" case. This makes exceptions faster than traditional C-style
error checking.
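
A sketch of the two styles under comparison (function names are
hypothetical): with return codes every caller pays a branch even on the
success path, while with exceptions the success path carries no check
and the cost moves to the rare throw.

```cpp
#include <stdexcept>

// Return-code style: the caller must branch even when nothing went wrong.
int parse_rc(const char* s, int* out) {
    if (!s) return -1;   // error path
    *out = 7;            // pretend to parse something
    return 0;
}

// Exception style: the happy path is unconditional; failure is rare.
int parse_ex(const char* s) {
    if (!s) throw std::invalid_argument("null input");
    return 7;
}
```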

My point here is in comparison to /no/ error checking - because you know
there won't be an error, or because you don't care about what happens if
there is an error. In this case, the code generated for C++ that runs
when there is no exception may not be as optimal as if the compiler knew
that no exception could possibly happen.

>
> One issue with exception is when stack unwinding takes unacceptably
> long time but we we need both to succeed and to fail fast. It is rare
> case, some critical real time processing for example. For fixing that
> we can still use return values in C++ like we anyway do with
> non-exceptional edge cases.
>
>> This can mean
>> different requirements for stack layout or restrictions to optimisations
>> and re-organisations that would not apply in the case of plain C.
>
> That only applies when we are sure that there can't be any exceptional
> failures. If foo() can not throw exceptions then we have to declare
> foo() 'noexcept' in C++ to gain same benefit.
>

Yes, using "noexcept" should (I think) give these benefits. But you
would have to use it extensively, and AFAIUI it can cause overhead in
functions that are declared "noexcept" (since they have to check if
there are exceptions in functions that they call, and then call
std::terminate()).

>> Compilers have got very good at optimising in the face of exceptions,
>> and minimising their impact - but there can still be a non-zero overhead.
>>
>> This is, I think, at least one reason why benchmark comparisons
>> regularly show C++ to be a percent or two slower than C for the same
>> code, typically compiled with the same compiler. Usually those couple
>> of percent are not important, of course, and the use of C++ gives many
>> other benefits - and such benchmarks say nothing about how the code
>> could have been written better or faster using C++ rather than C.
>>
>> (I haven't tried look at this in detail myself - maybe I will do so if I
>> get the time.)
>
> For whatever braindead reasons, even destructors and extern "C"
> functions are not made 'noexcept' by default. One could indicate with
> 'noexcept(false)' if a destructor or extern "C" function really does
> throw, but the standard is backwards there. IOW, we have to use
> 'noexcept' massively if we want to really get rid of that one percent
> as well.
>
>

I have read that destructors are implicitly "noexcept" in C++11, but I
can't say I follow all the details:

<https://akrzemi1.wordpress.com/2013/08/20/noexcept-destructors/>


The trouble with extern "C" functions being default "noexcept" is that
C++ code can call a C function which calls a C++ function which throws -
the exception should make its way back. For an extern "C" function to
be "noexcept", you have to be sure that any unhandled exceptions result
in an std::terminate() - and that will not be the case for all extern
"C" functions.

Thomas Richter

unread,
Aug 25, 2015, 10:08:02 AM8/25/15
to
On 25.08.2015 15:50, Stefan Ram wrote:
> Öö Tiib <oot...@hot.ee> writes:
>> function really does throw, but the standard is backwards there. IOW, we
>> have to use 'noexcept' massively if we want to really get rid of that one
>> percent as well.
>
> I have little experience with this.
>
> When can I use the »noexcept« specification and not
> be lying to the implementation?

Whenever it is acceptable that a thrown exception terminates the program.

> In my C++ course, at the beginning, I use
>
> - arithmetic operators, can they throw?

If these are arithmetic operators on the built-in types, then no.

>
> - »::std::cout << ...«, can this throw?

Well, here we already have something of a corner case. No, they don't
throw on I/O errors, because iostreams predate exceptions, so a
different mechanism is used to indicate I/O errors.

*However* it does not sound too implausible that these functions are
library implementations that, as part of the implementation, use the
global new operator in one place or another, and *this* operator can throw.

I wish C++ would be more clear which library functions can throw which
exception, *including* errors from memory allocation.

I'm also unclear whether a C++ library is required to mark functions as
noexcept if it can guarantee that they are noexcept. Probably not.

> . Another approach would be to mark /every/ function
> »noexcept« in some programs, /even if/ it can throw.

This is usually a bad idea, as it makes any exception processing impossible.

> If I read 15.4p10 correctly, the worst thing that can
> happen is that »::std::unexpected()« or »::std::terminate()«

Actually, the latter, not the former. The former happens if an exception
tries to propagate outside of a function marked as throw() (which is
deprecated).

> is called, and for certain programs this might be ok.
> But if the exception in this case still /is/ "handled"
> in this sense, will it really be faster with the
> »noexcept« specification?

That I cannot answer. One of the reasons for deprecating throw() was
that it was essentially equivalent to

try {
    ... function body ...
} catch (...) {
    std::unexpected();
}

so the compiler had to generate *more* code to process throw() instead
of less, so it is usually of no advantage (speed-wise) to mark a
function as throw(). And what can you actually do in std::unexpected()
except print an error and terminate the program anyhow?

Why that changes if std::unexpected() is replaced by std::terminate(),
as it happens if throw() is replaced by noexcept, is another question.
Probably because it does not require the compiler to unwind the stack,
i.e. the program may terminate right away. Whether that is any better I
do not know.

Actually, I believe it would probably have been better just to say that
noexcept causes UB if an exception tries to propagate out of such a
function, with behavior ranging from "works and just propagates the
exception" and "does not destroy objects on the stack" to "just
crashes". That would be closer to the "you do not pay for what you do
not use" approach C++ typically takes in such cases.
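
For illustration, a tiny sketch of the noexcept-plus-terminate behavior
discussed above: an exception reaching the boundary of a noexcept
function triggers std::terminate(), with no guaranteed unwinding past
that point.

```cpp
#include <exception>

// An exception escaping a noexcept function never unwinds past it;
// the runtime calls std::terminate() instead.
void boom(bool fail) noexcept {
    if (fail)
        throw 1;   // if reached: std::terminate(), no guaranteed unwinding
}

// The noexcept operator reports the declared guarantee, not the body:
static_assert(noexcept(boom(true)), "boom() is declared noexcept");
```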

Greetings,
Thomas

BGB

unread,
Aug 25, 2015, 10:19:31 AM8/25/15
to
the non-virtual case is:
Foo_Whatever(obj, ...);

however, one may note that if an indirect call from a given location
tends to always land on the same target, the CPU's branch predictor
catches this case and makes it a bit faster.

otherwise, one is still faced with a similar cost for pretty much every
call on targets which use ELF (pretty much every non-inline call tends
to be implemented as an indirect call through the GOT).

Öö Tiib

unread,
Aug 25, 2015, 2:49:40 PM8/25/15
to
On Tuesday, 25 August 2015 12:09:29 UTC+3, David Brown wrote:
> On 25/08/15 02:50, Öö Tiib wrote:
> > On Monday, 24 August 2015 15:27:58 UTC+3, David Brown wrote:
> >> This can mean
> >> different requirements for stack layout or restrictions to optimisations
> >> and re-organisations that would not apply in the case of plain C.
> >
> > That only applies when we are sure that there can't be any exceptional
> > failures. If foo() can not throw exceptions then we have to declare
> > foo() 'noexcept' in C++ to gain same benefit.
> >
>
> Yes, using "noexcept" should (I think) give these benefits. But you
> would have to use it extensively, and AFAIUI it can cause overhead in
> functions that are declared "noexcept" (since they have to check if
> there are exceptions in functions that they call, and then call
> std::terminate()).

That check is most likely made by the implementation of 'throw', not
by the functions. We assume that 'throw' can't happen. If it does
happen, the unwinder finds a noexcept function on the stack and says
"Phew, lucky me! I do not need to unwind anything today. :) Terminate!".

>
> >> Compilers have got very good at optimising in the face of exceptions,
> >> and minimising their impact - but there can still be a non-zero overhead.
> >>
> >> This is, I think, at least one reason why benchmark comparisons
> >> regularly show C++ to be a percent or two slower than C for the same
> >> code, typically compiled with the same compiler. Usually those couple
> >> of percent are not important, of course, and the use of C++ gives many
> >> other benefits - and such benchmarks say nothing about how the code
> >> could have been written better or faster using C++ rather than C.
> >>
> >> (I haven't tried look at this in detail myself - maybe I will do so if I
> >> get the time.)
> >
> > For whatever braindead reasons, even destructors and extern "C"
> > functions are not made 'noexcept' by default. One could indicate with
> > 'noexcept(false)' if a destructor or extern "C" function really does
> > throw, but the standard is backwards there. IOW, we have to use
> > 'noexcept' massively if we want to really get rid of that one percent
> > as well.
> >
> >
>
> I have read that destructors are implicitly "noexcept" in C++11, but I
> can't say I follow all the details:
>
> <https://akrzemi1.wordpress.com/2013/08/20/noexcept-destructors/>

You are correct! I was wrong. The opposite behavior is a bug in the old
version of gcc that I had to use recently, and it has been fixed since
gcc 4.8. So on a conforming compiler destructors are already
'noexcept(true)', and I can take back that part of my complaint.
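
On a conforming C++11 compiler this can be checked directly:

```cpp
#include <type_traits>

struct Plain {
    ~Plain() {}                    // implicitly noexcept(true) in C++11
};

struct OptOut {
    ~OptOut() noexcept(false) {}   // must opt out explicitly to throw
};

static_assert(std::is_nothrow_destructible<Plain>::value,
              "destructors are noexcept by default since C++11");
static_assert(!std::is_nothrow_destructible<OptOut>::value,
              "noexcept(false) opts a destructor back out");
```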

>
> The trouble with extern "C" functions being default "noexcept" is that
> C++ code can call a C function which calls a C++ function which throws -
> the exception should make its way back. For an extern "C" function to
> be "noexcept", you have to be sure that any unhandled exceptions result
> in an std::terminate() - and that will not be the case for all extern
> "C" functions.

I have heard that example before, but it does not make sense to me.
A C function should be capable of calling only extern "C" functions.
If those are 'noexcept(true)', then how can they throw into C?

If C code somehow calls a 'noexcept(false)' function and that throws,
then the C++ exception crosses language boundaries, and that is
undefined behavior AFAIK. The C function has no way to release (or
close, or unlock) any resources it acquired, since it knows nothing of
C++ exceptions. If the exception somehow passes through it, and is even
caught by the C++ code that called the C function, then we have live
nasal demons instead of 'std::terminate'.


David Brown

unread,
Aug 25, 2015, 3:32:23 PM8/25/15
to
This is getting /way/ outside my level of C++ (I don't actually do much
C++ at the moment, and when I do it is mostly on embedded systems - and
exceptions are disabled entirely). But my understanding is that an
extern "C" function is not treated any differently with regard to
exceptions: the function can throw, and it can pass thrown exceptions
on up the chain. The extern "C" merely means there is no name mangling
(thus no overloading or default parameters), and may conceivably alter
calling conventions.

But as said, I am far from sure on this - hopefully someone more
knowledgeable will make things clear.

Öö Tiib

unread,
Aug 26, 2015, 2:37:24 AM8/26/15
to
On Tuesday, 25 August 2015 22:32:23 UTC+3, David Brown wrote:
> On 25/08/15 20:49, Öö Tiib wrote:
> > On Tuesday, 25 August 2015 12:09:29 UTC+3, David Brown wrote:
>
> >> The trouble with extern "C" functions being default "noexcept" is that
> >> C++ code can call a C function which calls a C++ function which throws -
> >> the exception should make its way back. For an extern "C" function to
> >> be "noexcept", you have to be sure that any unhandled exceptions result
> >> in an std::terminate() - and that will not be the case for all extern
> >> "C" functions.
> >
> > I have heard that example before, but it does not make sense to me.
> > A C function should be capable of calling only extern "C" functions.
> > If those are 'noexcept(true)', then how can they throw into C?
> >
> > If C code somehow calls a 'noexcept(false)' function and that throws,
> > then the C++ exception crosses language boundaries, and that is
> > undefined behavior AFAIK. The C function has no way to release (or
> > close, or unlock) any resources it acquired, since it knows nothing of
> > C++ exceptions. If the exception somehow passes through it, and is even
> > caught by the C++ code that called the C function, then we have live
> > nasal demons instead of 'std::terminate'.
> >
> >
>
> This is getting /way/ outside my level of C++ (I don't actually do much
> C++ at the moment, and when I do it is mostly on embedded systems - and
> exceptions are disabled entirely).

I have also used C++ mostly for embedded development for more than a
year now in my latest new projects (some sort of peak of embedded
development?). Exceptions being disabled is often the case, but not
always, and disabling them is pointless for PC or mobile device
applications.

> But my understanding is that an
> extern "C" function is not treated any differently with regard to
> exceptions - the function can throw, and it can pass on thrown
> exceptions down the chain. The extern "C" merely means there is no name
> mangling (thus no overloading or default parameters), and may
> conceivably alter calling conventions.

Yes, that is all so to my knowledge as well. I just hold the
position/opinion that it is not good, since extern "C" is mainly meant
for interfacing with modules not written in C++. How can such a module
handle (or even expect) C++ exceptions thrown from a supposedly C
interface? So the most intuitive behavior would be for functions
defined extern "C" to be 'noexcept(true)' implicitly.

>
> But as said, I am far from sure on this - hopefully someone more
> knowledgeable will make things clear.

In practice Visual Studio has command-line options to set it one way
or the other, gcc and clang are awfully vague about it, and my coding
standard is that throwing from an extern "C" function is a defect
regardless of what the standard says.

Juha Nieminen

unread,
Aug 26, 2015, 4:12:00 AM8/26/15
to
BGB <cr8...@hotmail.com> wrote:
> the non-virtual case is:
> Foo_Whatever(obj, ...);

The point is that a "generic" data container in C can't make that kind of
call (because there's no function overloading in C.)
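
In C++, templates are what make that direct call possible inside a
generic container: the element type is known at compile time, so no
function-pointer table is needed. A minimal sketch (the container is
hypothetical):

```cpp
#include <cstddef>

// Generic container: operations on T are resolved at compile time,
// so calls are direct (and inlinable), with no indirection.
template <typename T, std::size_t N>
struct SmallVec {
    T data[N];
    std::size_t count = 0;
    void push(const T& v) { data[count++] = v; }  // direct copy of T
};

int sum_demo() {
    SmallVec<int, 4> v;
    v.push(1);
    v.push(2);
    int s = 0;
    for (std::size_t i = 0; i < v.count; ++i) s += v.data[i];
    return s;
}
```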

Wouter van Ooijen

unread,
Aug 26, 2015, 4:13:29 AM8/26/15
to
> Yes that all is so to my knowledge as well. I just have position/opinion
> that it is not good since extern "C" is mainly meant for interfacing
> with modules not written in C++. How can such handle (or even expect)
> C++ exceptions thrown from a supposedly C interface? So most intuitive
> would be the functions defined extern "C" to be 'noexcept(true)'
> implicitly.

That extern "C" function can easily call a C++ function (by name when
the C++ function is declared extern "C", or by being passed a function
pointer to it), and that C++ function can raise an exception.

Wouter van Ooijen

Chris Vine

unread,
Aug 26, 2015, 5:21:52 AM8/26/15
to
Function pointers also have language linkage. It affects things like
the calling convention. If the interface in question expects to be
given function pointers to C functions then the function pointers should
be declared extern "C" if passed by a C++ program.

You cannot as a rule propagate exceptions across language boundaries.
A C library will know nothing about C++ exceptions.

There is however limited support by particular compilers for passing
C++ exceptions through functions with C language linkage in C++
programs. gcc allows it if the function with C language linkage is
compiled with the -fexceptions compiler option (this allows C++
exceptions to traverse the C stack frames). I believe that MSVC will
do something similar, and also enables C code to catch C++ exceptions
in a limited fashion.

Chris

Juha Nieminen

unread,
Aug 26, 2015, 6:27:30 AM8/26/15
to
On Tuesday, August 25, 2015 at 11:28:28 AM UTC+3, jacobnavia wrote:
> > With complex data structures, it can become really complicated to track
> > which structs require manual construction and destruction, and which don't.
>
> It is very simple. When you are finished with a container you should
> call its Finalize method.

You don't understand.

If you have something like "Container cont;" you have to destroy it manually.

If you have something like "struct A { Container cont; };" you now have to
make sure that every time you instantiate that struct, you construct and
destroy its member appropriately (and of course manually.)

If you have something like "struct B { struct A a1, a2; };" you now have to
also make sure to construct and destroy those members appropriately whenever
you use B (which isn't at all apparent from that declaration alone, as there is no indication that you need to do that.)

Moreover, and more damningly, if struct A did *not* have the generic data
container as a member originally, and there is a lot of code using B, and
then later you add the generic data container to A, now you would need to
go through all the code that uses B and A, and make sure that they are
always constructed and destroyed appropriately (something that's extremely tedious, time-consuming and error-prone.) Imagine millions of lines of code
using struct B, all of which you would now need to refactor, just because
you wanted to add a dynamic data container into struct A. Much of that code
using struct B might be in a form that's very laborious to refactor into
a form that makes sure that the containers are destroyed appropriately.

And these are extremely simplistic examples. In actual programs the data
structure hierarchies can become a lot more complicated.

Needless to say, none of this is necessary in C++. If you add some
generic data container to struct A, none of the code using A or B needs
to be changed accordingly.
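
The C++ side of the argument, for contrast: adding a container member
imposes nothing on users of the struct, because construction and
destruction compose automatically.

```cpp
#include <vector>

struct A { std::vector<int> cont; };  // member added later: users unchanged
struct B { A a1, a2; };

int use_b() {
    B b;                       // everything constructed automatically
    b.a1.cont.push_back(1);
    b.a2.cont.push_back(2);
    return (int)(b.a1.cont.size() + b.a2.cont.size());
}                              // all nested containers destroyed automatically
```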

> > And we all know how error-prone (and tedious) non-RAII manual destruction
> > can be. It's one of the biggest sources of bugs in C programs out there.
> >
>
> Yes but this is the price to pay to avoid C++ complexity that is also a
> source of many bugs.

So you are replacing complexity with complexity.

Thanks, but no thanks.

> > And debugging macro-based dynamic data containers must be a joy.
>
> It is. The source code is available and easy to debug.

You don't even understand what I'm saying, do you?

We have a serious case of dogma here, it seems.

Johannes Bauer

unread,
Aug 26, 2015, 6:27:34 AM8/26/15
to
On 24.08.2015 14:06, Juha Nieminen wrote:

> It's impossible for C++ to be slower than C, given that (almost) any
> valid C program is also a valid C++ program. (Those "almost" exceptions
> are not things that affect speed, mainly only syntax.)

That is a *really* cheap cop-out. Basically what you're saying is: "C++
is as fast as C if we leave out everything C++ish and just rely on C."

But that's not the point. Then you're writing C and compiling it with a
C++ compiler.

The point would be to have better language constructs perform the same
task in a more maintainable/readable/object-oriented fashion while STILL
being as fast as C.

Cheers,
Johannes

--
>> Where exactly had you predicted the quake again?
> At least not publicly!
Ah, the newest and to this day most ingenious trick of our great
cosmologists: the secret prediction.
- Karl Kaos on Rüdiger Thomas in dsa <hidbv3$om2$1...@speranza.aioe.org>

Juha Nieminen

unread,
Aug 26, 2015, 6:33:38 AM8/26/15
to
On Wednesday, August 26, 2015 at 1:27:34 PM UTC+3, Johannes Bauer wrote:
> The point would be to have better language constructs perform the same
> task in a more maintainable/readable/object-oriented fashion while STILL
> being as fast as C.

In my experience C++ achieves that beautifully. Classes generally do
not incur any performance penalty over equivalent C code. (In fact,
even virtual functions do not usually incur any measurable penalty.
Not that you would need to use virtual functions very often anyway.)

Many things are actually more efficient in C++ than in C, due to the
nature of C++. For example std::sort() is faster than qsort(), and one
major reason for that is that the compiler can perform more
optimizations based on the element type being sorted.
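
The std::sort()/qsort() comparison can be sketched directly: qsort()
calls its comparator through a function pointer on every comparison,
while std::sort() sees the element type and can inline the comparison
(the wrapper functions here are illustrative, not from any library).

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// qsort comparator: always an indirect call through a function pointer.
static int cmp_int(const void* a, const void* b) {
    int x = *(const int*)a, y = *(const int*)b;
    return (x > y) - (x < y);
}

std::vector<int> sort_c(std::vector<int> v) {
    std::qsort(v.data(), v.size(), sizeof(int), cmp_int);
    return v;
}

std::vector<int> sort_cpp(std::vector<int> v) {
    std::sort(v.begin(), v.end());  // comparison known at compile time
    return v;
}
```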

jacobnavia

unread,
Aug 26, 2015, 7:05:45 AM8/26/15
to
On 25/08/2015 01:14, jacobnavia wrote:
>
> Using the CCL (C containers library) we have: (from memory)
>
> #include <containers.h>
>
> int main(void)
> {
>     strCollection *myData =
>         strCollection.CreateFromFile("mydata.txt");
>     strCollection.Sort(myData);
>     strCollection.WriteToFile(myData, "mySortedData.txt");
>     return 0;
> }
>
> The C containers library is written in C and features nice stuff. Fully
> open source (no license) available at:
>
> https://code.google.com/p/ccl/
>
> Have fun!
>
>

Interesting that none of the C++ heads object to this :-)

jacobnavia

unread,
Aug 26, 2015, 8:11:29 AM8/26/15
to
On 26/08/2015 12:27, Juha Nieminen wrote:
> If you have something like "struct A { Container cont; };" you now have to
> make sure that every time you instantiate that struct, you construct and
> destroy its member appropriately (and of course manually.)

No. My C compiler system features garbage collection. Using Boehm's
software, you do not need to do anything "manually".

Just forget about reclaiming memory and the CCL will work almost as C++
does. You do not need to reclaim anything. It will be reclaimed
automatically.

jacobnavia

unread,
Aug 26, 2015, 8:13:20 AM8/26/15
to
On 26/08/2015 12:27, Juha Nieminen wrote:
>>> And debugging macro-based dynamic data containers must be a joy.
>> >
>> >It is. The source code is available and easy to debug.
> You don't even understand what I'm saying, do you?
>

You didn't even answer the concrete example I showed.

Here it is again, an example of the C macros used:

static int Sort(VECTOR_TYPE *AL)
{
    CompareInfo ci;

    if (AL == NULL) {
        return NullPtrError("Sort");
    }
    ci.ContainerLeft = AL;
    ci.ExtraArgs = NULL;
    qsortEx(AL->contents, AL->count, AL->ElementSize,
            AL->CompareFn, &ci);
    return 1;
}

The only generic argument is the type name of the elements stored in the
vector: "VECTOR_TYPE".

Is this too much for you?


Can you tell me what is so difficult to debug here?

> We have a serious case of dogma here, it seems.

Where is the dogma?

David Brown

unread,
Aug 26, 2015, 8:22:24 AM8/26/15
to
So you are saying that the CCL can do a similar job to the C++ container
library, and avoids error-prone C-style memory management, but only if
you are using your particular non-standard C compiler? And for normal C
compilers, the CCL works but all memory management must be handled
manually (in the way other posters here have described)?

That's okay, of course - you are free to make your own "extended C"
compiler and libraries, and recommend their usage. But when discussing
C and how it compares to C++, you need to either stick to standard C, or
at least make it clear when you rely on features from a particular
non-standard compiler.

(The same would apply if someone were to suggest the gcc "cleanup"
attribute as a way of managing automatic resource cleanup.)

BGB

unread,
Aug 26, 2015, 8:56:22 AM8/26/15
to
On 8/26/2015 3:11 AM, Juha Nieminen wrote:
> BGB <cr8...@hotmail.com> wrote:
>> the non-virtual case is:
>> Foo_Whatever(obj, ...);
>
> The point is that a "generic" data container in C can't make that kind of
> call (because there's no function overloading in C.)
>

possible, unless one gets extra nasty with a lot of macros.
but, yeah, using function-pointers and plugging things together to make
logic often gets reasonably passable performance.


as-noted, I don't really use any "generic containers" personally, mostly
throwing together code specific to the task as-needed.

jacobnavia

unread,
Aug 26, 2015, 9:28:47 AM8/26/15
to
On 26/08/2015 14:22, David Brown wrote:
> So you are saying that the CCL can do a similar job to the C++ container
> library, and avoids error-prone C-style memory management, but only if
> you are using your particular non-standard C compiler?

No, I am not saying that, since it is not true.

Boehm's GC works with all compilers around, gcc, MSVC, llvm, WHATEVER.

Besides, my compiler is not "not standard". It follows the C99 standard.

David Brown

unread,
Aug 26, 2015, 10:04:49 AM8/26/15
to
On 26/08/15 15:28, jacobnavia wrote:
> On 26/08/2015 14:22, David Brown wrote:
>> So you are saying that the CCL can do a similar job to the C++ container
>> library, and avoids error-prone C-style memory management, but only if
>> you are using your particular non-standard C compiler?
>
> No, I am not saying that. Since that is not true.
>
> Boehm's GC works with all compilers around, gcc, MSVC, llvm, WHATEVER.
>

Fair enough - but it would be very useful to explain that, since it is
(apparently) critical to the practical use of CCL, and the great
majority of C programming is done without a garbage collector.

> Besides, my compiler is not "not standard". It follows the C99 standard.

I was under the impression that the garbage collector was an extension
that your compiler included (since you wrote "My C compiler system
features garbage collection"), and I know it provides a number of
other extensions.

But my wording was bad - if your compiler supports C99 standards (at
least to a reasonable level), then it is indeed "standard" even though
it also has extensions. I merely meant that if the CCL works best with
a compiler with extensions that are not part of standard C, then you
should say so here.

Öö Tiib

unread,
Aug 26, 2015, 1:40:44 PM8/26/15
to
We can use C++ fully within the code of a function defined extern "C".
My point was about letting exceptions fly out of such a
function. That is wrong, since C++ exceptions are not part of a C interface.
Function pointers to ordinary C++ and extern "C" functions are not
compatible. It is a common extension that they are compatible,
but C++ does not guarantee it.

Johannes Bauer

unread,
Aug 27, 2015, 5:56:54 AM8/27/15
to
Now you're shifting gears again, saying exactly what C++ people often
say: Yes, we can have much cleaner code while having even BETTER
performance.

Then, when pressed against a real benchmark where C++ is weaker than C,
you start crying and say "well we could have written a C variant that
TECHNICALLY still is C++".

You can't have it both ways.

In my experience, working heavily with STL and containers or working
with streams incurs a HEAVY performance penalty even when C++ advocates
claim the opposite.

Ian Collins

unread,
Aug 27, 2015, 6:00:46 AM8/27/15
to
Johannes Bauer wrote:
>
> In my experience, working heavily with STL and containers or working
> with streams incurs a HEAVY performance penalty even when C++ advocates
> claim the opposite.

Streams, yes, containers, not unless you pick the wrong one!

--
Ian Collins

Wouter van Ooijen

unread,
Aug 27, 2015, 6:12:09 AM8/27/15
to
On 27-Aug-15 at 11:56 AM, Johannes Bauer wrote:
> Now you're shifting gears again, saying exactly what C++ people often
> say: Yes, we can have much cleaner code while having even BETTER
> performance.
>
> Then, when pressed against a real benchmark where C++ is weaker than C,
> you start crying and say "well we could have written a C variant that
> TECHNICALLY still is C++".
>
> You can't have it both ways.

Yes we can have it both ways. If you are good you will use C++ to its
best, and be at least as fast as C. If you are less confident, you can
just use "C in C++" and still be as fast as C.

The only way you (or we, or I) lose is when C++ features are used
inappropriately.

Wouter van Ooijen

Johannes Bauer

unread,
Aug 27, 2015, 7:05:27 AM8/27/15
to
On 27.08.2015 12:12, Wouter van Ooijen wrote:

>> Then, when pressed against a real benchmark where C++ is weaker than C,
>> you start crying and say "well we could have written a C variant that
>> TECHNICALLY still is C++".
>>
>> You can't have it both ways.
>
> Yes we can have it both ways. If you are good you will use C++ to its
> best, and be at least as fast as C. If you are less confident, you can
> just use "C in C++" and still be as fast as C.

Imagine my programming language Q. In Q, you can write fantastically
abstract, object-oriented code. The only Q compilers suck really bad and
the language is very difficult to get optimized. Q, when used in its
OO-style, regularly finishes last in all benchmarks.

But Q has one redeeming feature: in Q, one can include machine code by
the language standard. Therefore it is possible to create a Q program
that is at least as fast as ANY other implementation in any language.

Therefore, Q has both the benefit of being the FASTEST while still being
a super cool object-oriented language. Would you object to crowning Q
the best of all programming languages? Of course you would.

Because you *can't* have it both ways. Either you choose one set of
features of the language (nice, abstract notation) or the other (performance).

> The only way you (or we, or I) lose is when C++ features are used
> inappropriately.

This claim is made often: that those benchmarks are just
written by complete idiots who don't know how to use C++ well enough.

If nothing else, this is a testament to the outrageous difficulty of
getting C++ right (i.e. good performance while still being good OO). I
wonder why none of the people who make those claims gives it a shot and
tries to produce a benchmark that clearly illustrates how abstract C++
code can be written while still being as fast as C. Surely this would be
a good demonstration of the language's abilities?

Johannes Bauer

unread,
Aug 27, 2015, 7:07:56 AM8/27/15
to
Or, apparently, if you *use* the container wrong.

Somewhere earlier in this thread someone was making fun of people
creating a vector and doing lots of push_back() operations. Because it
is COMPLETELY obvious that that's a stupid idea, right? And, apparently,
makes the performance go down the drain.

Melzzzzz

unread,
Aug 27, 2015, 7:26:17 AM8/27/15
to
I don't know why, but when using g++ one has to call
sync_with_stdio(false) in order to get good performance with streams.

Martin Shobe

unread,
Aug 27, 2015, 7:43:25 AM8/27/15
to
On 8/27/2015 6:05 AM, Johannes Bauer wrote:
> On 27.08.2015 12:12, Wouter van Ooijen wrote:
>> Yes we can have it both ways. If you are good you will use C++ to its
>> best, and be at least as fast as C. If you are less confident, you can
>> just use "C in C++" and still be as fast as C.
>
> Imagine my programming language Q. In Q, you can write fantastically
> abstract, object-oriented code. The only Q compilers suck really bad and
> the language is very difficult to get optimized. Q, when used in its
> OO-style, regularly finishes last in all benchmarks.
>
> But Q has one redeeming feature: in Q, one can include machine code by
> the language standard. Therefore it is possible to create a Q program
> that is at least as fast as ANY other implementation in any language.
>
> Therefore, Q has both the benefit of being the FASTEST while still being
> a super cool object-oriented language. Would you object to crowning Q
> the best of all programming languages? Of course you would.
>
> Because you *can't* have it both ways. Either you choose one set of
> features of the language (nice, abstract notation) or the other (performance).

False dichotomy. Another option is you get the nice, abstract notation
when you don't need performance and performance when you do.

Martin Shobe

Wouter van Ooijen

unread,
Aug 27, 2015, 8:06:08 AM8/27/15
to
On 27-Aug-15 at 1:05 PM, Johannes Bauer wrote:
> Imagine my programming language Q. In Q, you can write fantastically
> abstract, object-oriented code. The only Q compilers suck really bad and
> the language is very difficult to get optimized. Q, when used in its
> OO-style, regularly finishes last in all benchmarks.
>
> But Q has one redeeming feature: in Q, one can include machine code by
> the language standard. Therefore it is possible to create a Q program
> that is at least as fast as ANY other implementation in any language.
>
> Therefore, Q has both the benefit of being the FASTEST while still being
> a super cool object-oriented language. Would you object to crowning Q
> the best of all programming languages? Of course you would.

I don't recall anyone calling C++ the best of all (possible) programming
languages? But on your 'gedankenexperiment': when that language is
compared to machine code, it sure wins! (Which is exactly the C / C++
situation).

> If nothing else then this is a testament to the outrageous difficulty of
> getting C++ right (i.e. good performance while still being good OO). I
> wonder why nobody of the people who make those claims give it a shot and
> try to produce a benchmark that greatly illustrates how abstract C++
> code can be written while still being as fast as C?

My attempt: "objects no thanks" https://www.youtube.com/watch?v=k8sRQMx2qUw

Wouter van Ooijen

Johannes Bauer

unread,
Aug 27, 2015, 9:19:32 AM8/27/15
to
On 27.08.2015 14:05, Wouter van Ooijen wrote:

>> Therefore, Q has both the benefit of being the FASTEST while still being
>> a super cool object-oriented language. Would you object to crowning Q
>> the best of all programming languages? Of course you would.
>
> I don't recall anyone calling C++ the best of all (possible) programming
> languages?

No, because it isn't. I never claimed it to be. Q clearly is.

> But on your 'gedankenexperiment': when that language is
> compared to machine code, it sure wins! (Which is exactly the C / C++
> situation).

You're missing the point: I compare Q to ALL other programming languages
and it beats them ALL in terms of performance. Also, it can be used to
do OO, which is nice.

>> If nothing else then this is a testament to the outrageous difficulty of
>> getting C++ right (i.e. good performance while still being good OO). I
>> wonder why nobody of the people who make those claims give it a shot and
>> try to produce a benchmark that greatly illustrates how abstract C++
>> code can be written while still being as fast as C?
>
> My attempt: "objects no thanks" https://www.youtube.com/watch?v=k8sRQMx2qUw

For MCUs you focus on memory and flash space consumption, which you show
in that talk. However, the amount of work you have to do as a programmer,
and the unintuitive, highly templated code that results (e.g. at 28:30),
are not really convincing. You worked *hard* to get results
that you get for free in C++, and I find the gain (in your examples) a
bit questionable. Using C++ just for the sake of using C++ isn't
convincing reasoning.

Johannes Bauer

unread,
Aug 27, 2015, 9:21:59 AM8/27/15
to
On 27.08.2015 13:43, Martin Shobe wrote:

>> Because you *can't* have it both ways. Either you choose one set of
>> features of the language (nice, abstract notation) or the other
>> (performance).
>
> False dichotomy. Another option is you get the nice, abstract notation
> when you don't need performance and performance when you do.

Fair point! This is pretty much what I was trying to get at. You have a
language that can take on two different roles, but in many cases it isn't
possible to fulfill both at once.

In C++ advocacy that honesty is often missing, and people claim that
both are possible if only you're good enough at C++ programming.

Wouter van Ooijen

unread,
Aug 27, 2015, 9:38:56 AM8/27/15
to
On 27-Aug-15 at 3:19 PM, Johannes Bauer wrote:
> On 27.08.2015 14:05, Wouter van Ooijen wrote:
>
>>> Therefore, Q has both the benefit of being the FASTEST while still being
>>> a super cool object-oriented language. Would you object to crowning Q
>>> the best of all programming languages? Of course you would.
>>
>> I don't recall anyone calling C++ the best of all (possible) programming
>> languages?
>
> No, because it isn't. I never claimed it to be. Q clearly is.
>
>> But on your 'gedankenexperiment': when that language is
>> compared to machine code, it sure wins! (Which is exactly the C / C++
>> situation).
>
> You're missing the point: I compare Q to ALL other programming languages
> and it beats them ALL in terms of performance.

True, but not relevant. By that reasoning plain binary (or better yet:
nand gates and solder!) beats everything.

>> My attempt: "objects no thanks" https://www.youtube.com/watch?v=k8sRQMx2qUw

Kudos for actually looking at it!!

> For MCUs you focus on memory and flash space consumption,

space, but also execution time

> which you show
> in that talk. However the amount of work you have to do as a programmer
> and the unintuitive highly templated code (e.g. 28:30) results is not
> really something that is convincing. You worked *hard* to get results
> that you get for free in C++ and I find the gain (in your examples) a

I think you mean C

> bit questionable. Using C++ just for the sake of using C++ isn't
> convincing reasoning.

That talk was for a C++ audience, so I concentrated on how C++ can be
used effectively on a small embedded system, which is quite unlike the
'mainstream' use of C++. The advantages of my approach are in composable
library elements, which is very difficult to (effectively) do in C, and
easy but inefficient in 'mainstream' C++.

For instance, to blink a LED that is on an I2C extender chip, which is
itself connected to two pins of another extender chip, I write (with a
slightly different version of my library):

#include "targets/lpc1114fn28.hpp"
typedef hwcpp::lpc1114fn28< 48 * hwcpp::MHz > target;
typedef target::timing timing;

typedef hwcpp::pcf8574a<
   hwcpp::i2c_bus_master_bb_scl_sda<
      target::scl,
      target::sda,
      timing::kHz< 100 >
   >
> chip1;

typedef hwcpp::pcf8574a<
   hwcpp::i2c_bus_master_bb_scl_sda<
      chip1::p6,
      chip1::p7,
      timing::kHz< 100 >
   >
> chip2;

int main(){
   hwcpp::blinking<
      chip2::p4,
      timing
   >::blink();
}

I use the same library elements (hwcpp::pcf8574a and
hwcpp::i2c_bus_master_bb_scl_sda) twice, with different parameters.

There is a catch of course: the library elements must be configured at
compile time. But I am working on that :)

Wouter

Bo Persson

unread,
Aug 27, 2015, 11:39:53 AM8/27/15
to
On 2015-08-27 13:07, Johannes Bauer wrote:
> On 27.08.2015 12:00, Ian Collins wrote:
>> Johannes Bauer wrote:
>>>
>>> In my experience, working heavily with STL and containers or working
>>> with streams incurs a HEAVY performance penalty even when C++ advocates
>>> claim the opposite.
>>
>> Streams, yes, containers, not unless you pick the wrong one!
>
> Or, apparently, if you *use* the container wrong.
>
> Somewhere earlier in this thread someone was making fun of people
> creating a vector and doing lots of push_back() operations. Because it
> is COMPLETELY obvious that that's a stupid idea, right? And, apparently,
> makes the performance go down the drain.
>

Yes, that was me.

The silly thing was to convert a statically initialized, fixed-size
array in C to a dynamically resized array in C++, and blame that on the
language.

Bad code runs slowly in any language.


Bo Persson

Bo Persson

unread,
Aug 27, 2015, 1:33:57 PM8/27/15
to
On 2015-08-27 18:06, Stefan Ram wrote:
> Bo Persson <b...@gmb.dk> writes:
>> The silly thing was to convert a statically initialized, fixed size,
>> array in C to a dynamically resized array in C++, and blame that on the
>> language.
>
> »Bjarne Stroustrup:
>
> ...
>
> I think a better way of approaching C++ is to use some
> of the standard library facilities. For example, use a
> vector rather than an array.
>
> ...
>
> vectors really are as fast as arrays.«
>
>
> A Conversation with Bjarne Stroustrup, Part I
> by Bill Venners
> October 13, 2003
>

They are if you use them the same way. They are not if you do:

const int Array[1000000] = {0};

and

std::vector<int> V;
for (int i = 0; i != 1000000; ++i)
    V.push_back(i);


So why do that in a benchmark?


Bjarne's favorite is

std::sort(V.begin(), V.end());

vs

qsort(Array, count, sizeof(Array[0]), compare);


where std::sort is not only easier to use, but also A LOT faster because
it is easily inlined and tuned by the compiler.



Bo Persson

Bo Persson

unread,
Aug 27, 2015, 1:35:22 PM8/27/15
to
On 2015-08-27 19:33, Bo Persson wrote:
> On 2015-08-27 18:06, Stefan Ram wrote:
>> Bo Persson <b...@gmb.dk> writes:
>>> The silly thing was to convert a statically initialized, fixed size,
>>> array in C to a dynamically resized array in C++, and blame that on the
>>> language.
>>
>> »Bjarne Stroustrup:
>>
>> ...
>>
>> I think a better way of approaching C++ is to use some
>> of the standard library facilities. For example, use a
>> vector rather than an array.
>>
>> ...
>>
>> vectors really are as fast as arrays.«
>>
>>
>> A Conversation with Bjarne Stroustrup, Part I
>> by Bill Venners
>> October 13, 2003
>>
>
> They are if you use them the same way. They are not if you do:
>
> const int Array[1000000] = {0};
>
> and
>
> std::vector<int> V;
> for (i = 0; i != 1000000; ++i)
> V.push_back(i);
^^^
or V.push_back(0);
as it were

Chris Vine

unread,
Aug 27, 2015, 1:43:48 PM8/27/15
to
On Thu, 27 Aug 2015 13:07:46 +0200
Johannes Bauer <dfnson...@gmx.de> wrote:
> Somewhere earlier in this thread someone was making fun of people
> creating a vector and doing lots of push_back() operations. Because it
> is COMPLETELY obvious that that's a stupid idea, right? And,
> apparently, makes the performance go down the drain.

I agree with your views about streams, but if this is your best point
about containers, then you have it wrong.

The example to which you refer was not deprecating using push_back() on
a vector (it is fine if you have reserved enough space in advance where
the size is known at run time, and just as efficient as the C
equivalent if you don't know the size in advance and therefore have to
reallocate at intervals). In fact using push_back() or emplace_back()
with reserve() is often the best way to do it because it avoids default
constructing the elements of the container when constructing the
container - you can do a single construction of each element 'in situ'.

The thing that was being deprecated in the post to which you
refer was comparing the performance of a statically sized C array with a
C++ vector subject to dynamic resizing. There is nothing wrong with
using a normal statically sized C array in C++ (I use them all the
time) but if you are allergic to them there is also std::array. The
advantage of std::array is that it suppresses pointer decay. C++ also
allows you to do object construction in uninitialized memory using
std::uninitialized_copy(), if you really need to.

Chris


Richard Damon

unread,
Aug 27, 2015, 10:36:36 PM8/27/15
to
The issue is that the standard doesn't guarantee it, but neither does it
prohibit it. There is nothing inherent about a C interface that says
that exceptions can't propagate over the interface.

The other big issue is that extern "C" is just a special case of a
whole family of declarations (yes, the only one with a specific
definition, but others are allowed as implementation-defined behavior).
If you define that extern "C" functions have a different default
than normal functions, then by the same reasoning that should extend to
other linkages; after all, if C doesn't understand exceptions, then
probably extern "fortran" shouldn't either. Then you run into the
problem: what if you have a language that DOES naturally handle
exceptions, maybe even an extern "C++"?

Paavo Helde

unread,
Aug 28, 2015, 1:02:54 AM8/28/15
to
Richard Damon <Ric...@Damon-Family.org> wrote in
news:AOPDx.558$2R4...@fx24.iad:

> The other big issue would be that extern "C" is just a special case of
> a whole family of declaration (yes, the only one with a specific
> definition, but others are allowed as implementation defined
> behavior), and if you define that extern "C" functions have a
> different default than normal functions, then by the same manner
> should that extend to other linkages, after all, if C doesn't
> understand exceptions, then probably extern "fortran" shouldn't
> either. Then you run into the problem, what if you have a language
> that DOES naturally handle exceptions, maybe even an extern "C++".

extern "abi" has been proposed as a way to define a portable C++ ABI
(http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4028.pdf). This
should most definitely support exception pass-through.


BGB

unread,
Aug 28, 2015, 11:55:11 AM8/28/15
to
On 8/27/2015 8:38 AM, Wouter van Ooijen wrote:
> Op 27-Aug-15 om 3:19 PM schreef Johannes Bauer:
>> On 27.08.2015 14:05, Wouter van Ooijen wrote:
>>
>>>> Therefore, Q has both the benefit of being the FASTEST while still
>>>> being
>>>> a super cool object-oriented language. Would you object to crowning Q
>>>> the best of all programming languages? Of course you would.
>>>
>>> I don't recall anyone calling C++ the best of all (possible) programming
>>> languages?
>>
>> No, because it isn't. I never claimed it to be. Q clearly is.
>>
>>> But on your 'gedankenexperiment': when that language is
>>> compared to machine code, it sure wins! (Which is exactly the C / C++
>>> situation).
>>
>> You're missing the point: I compare Q to ALL other programming languages
>> and it beats them ALL in terms of performance.
>
> True, but not relevant. By that reasoning plain binary (or better yet:
> nand gates and solder!) beats everything.
>

I will partly disagree, on the point that logic-gate ICs tend to work out
as more expensive than using cheap MCUs for non-trivial logic, though the
gate chips may still support higher internal clock speeds.

while a few gate chips may be a little cheaper, more are needed, vs the
MCU which can typically run pretty much all the logic (then one might
hook it up with external analog or power-components).

there is a gray area of transistor "logic" where generally a lot of
2N3904's or similar are used, such as to convert a small digital signal
into something that can signal some bigger transistors.


>>> My attempt: "objects no thanks"
>>> https://www.youtube.com/watch?v=k8sRQMx2qUw
>
> Kudos for actually looking at it!!
>
>> For MCUs you focus on memory and flash space consumption,
>
> space, but also execution time
>
> > which you show
>> in that talk. However the amount of work you have to do as a programmer
>> and the unintuitive highly templated copde (e.g. 28:30) results is not
>> really something that is convincing. You worked *hard* to get results
>> that you get for free in C++ and I find the gain (in your examples) a
>
> I think you mean C
>

I mostly still use C, and am using MCUs roughly between the two examples
of small MCUs given (epic 2kB of ROM and 256B of RAM).

I went the small one-off program route.
because, you know, a few hundred lines, who cares.



but, for example:
can't really copy-paste in the driver program I had used on a bigger
controller (which supported more motor types and could use sense
feedback to control motor frequency), as there is basically no way it
would fit on this MCU.

here is one (slightly outdated now):
http://pastebin.com/ftg01XWG

(I have since made changes, such as adding support for a VF curve rather
than using a constant duty-cycle).


though, my logic tends to run the things full speed and full-power, as
how I am using them I don't generally care if they use a little more
power, since the power they use is pretty much tiny vs the electronics
they are driving.

granted, a way to shave power-use off of some AC 3Ph induction motors
would be a little more useful. these things use a bit more power than an
MCU (even the transistor driver, where the bridge circuits tend to use a
lot more power just in leakage current).

like, 0.4mA for the MCU going at full power, is a lot less than 60mA
being leaked by the motor driver, is a lot less than 10 or 30 amps to
run the motor... (more so when one considers the significant voltage
differences between the MCU's 3.3V rail, and a 48VDC - 55VDC rail used
to power the motor).


a lot of this stuff is at fairly low voltages, vs the 208VAC or 440VAC
used by "real" 3-phase induction motors, but this stuff requires high
voltage components (600V or 1400V), as well as posing a lot more safety
risks.


>> bit questionable. Using C++ just for the sake of using C++ isn't
>> convincing reasoning.
>
> That talk was for a C++ audience, so I concentrated on how C++ can be
> used effectively on a small embedded system, which is quite unlike the
> 'mainstream' use of C++. The advantages of my approach are in composable
> library elements, which is very difficult to (effectively) do in C, and
> easy but inefficient in 'mainstream' C++.
>

yeah.

doing much embedded or real-time programming does somewhat point out how
these are not really the same thing as writing code for a PC (or phone).

annoying sometimes is needing to explain to people that, no, their
smartphone is not representative of a typical embedded microcontroller,
and in-fact even an old crappy feature-phone generally may as well be a
super-computer in comparison...

Wouter van Ooijen

unread,
Aug 28, 2015, 1:28:55 PM8/28/15
to
> annoying sometimes is needing to explain to people that, no, their
> smartphone is not representative of a typical embedded microcontroller,
> and in-fact even an old crappy feature-phone generally may as well be a
> super-computer in comparison...

Agreed. At the school where I teach (lecture?), PC and smartphone
applications belong to the Informatics department, because in most cases
you can program them without paying much attention to their
limitations. Small systems, and the really critical parts of 'big'
systems (device drivers, kernels, etc.) belong to the Technical
Informatics department.

But that was last year. Now they have decided that we want to be a broad
department, so the distinction will start to blur :(

Wouter

BGB

unread,
Aug 29, 2015, 1:29:47 AM8/29/15
to
getting into this mostly as a hobbyist, I noticed a few things.

moving from generic coding to performance-sensitive PC coding, one
starts to realize just how many clock-cycles are squirreled away in
non-productive "internal busywork".


in "generic" code, most of the cycles are basically moving from place to
place and convoluted logic trees to eventually get to where the "actual
work" gets done (IOW: actual work being the subset of statements that
are actually relevant to the end result).

this is things like invoking 3rd party libraries to do simple
operations, often doing overly low-level operations that each only
really accomplish very little work.

a lot of "typical" programmers take this sort of thing for granted, and
then end up thinking data-entry dealing with several hundred clients is
a "heavy workload" because it bogs down their Core i7 or Xeon based
database server...


meanwhile, someone who actually writes more efficient code, can
essentially do on the order of thousands to millions of times as much
work on the same basic hardware. how? by writing the code that does
whatever more directly, rather than going through mountains of 3rd party
code in the process (as strange of a concept as this is to some people...).

like, say, real-time rigid body physics in a game, involves just a
little bit more computational work than doing data-entry.

and things like real-time video processing are in-turn more demanding,
and no, it is not about "the GPU being a bazillion times faster than the
CPU".

things like video encoding and chroma-key and so on can be done in
real-time at HD resolutions with little more than plain C or
C++ code, without even necessarily resorting to SIMD intrinsics, much
less needing hand-crafted ASM. maybe one uses integer and fixed point
rather than floating point for anything which may conceivably have
a decimal point in it...


moving from performance-oriented code in soft-real-time to firm or hard
real-time similarly mostly involves suddenly needing to deal with the
matter of much smaller timescales, as well as latencies and deadlines.
suddenly code just randomly going off into a rabbit hole for a few
milliseconds every so often is no longer quite so acceptable.

and going off to microcontrollers has the matter of needing to deal with
keeping code sizes and memory use small, otherwise it isn't going to fit.


but, it is worth noting that it isn't *that* drastic.

like, it is more about working with the limitations and being
conservative with resources, rather than thinking of everything in terms
of trying to find the biggest possible hammer to hit each nail.


or such...

Ian Collins

unread,
Aug 29, 2015, 1:49:19 AM8/29/15
to
BGB wrote:
>
> meanwhile, someone who actually writes more efficient code, can
> essentially do on the order of thousands to millions of times as much
> work on the same basic hardware. how? by writing the code that does
> whatever more directly, rather than going through mountains of 3rd party
> code in the process (as strange of a concept as this is to some people...).

It's second nature to Java programmers!

--
Ian Collins

Öö Tiib

unread,
Aug 29, 2015, 8:46:52 AM8/29/15
to
On Friday, 28 August 2015 05:36:36 UTC+3, Richard Damon wrote:
> On 8/26/15 1:40 PM, Öö Tiib wrote:
> > On Wednesday, 26 August 2015 11:13:29 UTC+3, Wouter van Ooijen wrote:
> >>> Yes that all is so to my knowledge as well. I just have position/opinion
> >>> that it is not good since extern "C" is mainly meant for interfacing
> >>> with modules not written in C++. How can such handle (or even expect)
> >>> C++ exceptions thrown from a supposedly C interface? So most intuitive
> >>> would be the functions defined extern "C" to be 'noexcept(true)'
> >>> implicitly.
> >>
> >> That extern C function can easily call a C++ function (by name when it
> >> is declared extern "C", or by passing it a function pointer), which can
> >> raise an exception.
> >
> > We can use C++ fully within code of function defined extern "C".
> > My point was about letting exceptions to fly out from such
> > function. That is wrong since C++ exceptions are not C interface.
> > Function pointers of usual C++ and extern "C" functions are not
> > compatible. It is common extension that these are compatible
> > but C++ does not guarantee it.
> >
>
> The issue is that the standard doesn't guarantee it, but neither does it
> prohibit it. There is nothing inherent about a C interface that says
> that exceptions can't propagate over the interface.

By the C standard, an exception is undefined behavior. All I ask for is
that C++ take a clear position that such undefined behavior is not
implicitly allowed (as the default) to be passed to the caller of an
extern "C" function over a supposedly C interface, since there is
'noexcept' now in the language.

> The other big issue would be that extern "C" is just a special case of a
> whole family of declaration (yes, the only one with a specific
> definition, but others are allowed as implementation defined behavior),
> and if you define that extern "C" functions have a different default
> than normal functions, then by the same manner should that extend to
> other linkages, after all, if C doesn't understand exceptions, then
> probably extern "fortran" shouldn't either.

That is simpler, since extern "fortran" is implementation-defined, so
the implementation has to define what it does. I would only like it
to be *required* that implementations not "forget" that exceptions
are part of the interface.

> Then you run into the
> problem, what if you have a language that DOES naturally handle
> exceptions, maybe even an extern "C++".

Exceptions are part of the interface for me, on the same level as
calling conventions. Exceptions can differ just like calling
conventions differ. So if a C++ implementation declares that it can
provide an extern "Ada" interface, then I would expect either only
Ada exceptions from it, or no exceptions at all.

I trust an ABI designer can and does create differences there, even
maliciously. For example, on MSVC we have to '__except' SEH exceptions
and 'catch' C++ exceptions. In C#, however, we can 'catch' all three
as a .NET 'Exception': a SEH exception is 'SEHException', a C++
exception is 'Win32Exception', and the rest are Visual Basic
exceptions.

To me such screw-ups leave the impression that MS wants C++
developers to choose other platforms, like Linux, OS X, Android, iOS,
PlayStation or Wii, for launching their products first, and perhaps,
when there are enough resources, to port to MS platforms later. C++,
however, isn't the property of MS, so it is not required to be as
screwed up as MS is.


BGB

unread,
Aug 29, 2015, 10:21:12 AM8/29/15
to
IME, Java programmers often tend to be more the sort to answer any sort
of coding challenge with "there is a library for that!" and/or proceed
to copy-paste some code fragments off the internet to do it. they tend
to be very averse to actually writing any code themselves (and then
look down on and belittle anyone who does write their own code).

at least the general thinking in most other language communities is that
you use the language to actually write code in it.


Robert Wessel

unread,
Aug 29, 2015, 11:18:10 AM8/29/15
to
On Sat, 29 Aug 2015 05:46:31 -0700 (PDT), Öö Tiib <oot...@hot.ee>
wrote:

>On Friday, 28 August 2015 05:36:36 UTC+3, Richard Damon wrote:
I'm not sure I understand - C++ exception handlers are not supposed to
catch OS-type exceptions (such as an access violation). MS provides a
different method for that (SEH), *nix provides signals instead (which
don't quite do what SEH does, of course, but that's the mechanism).

Öö Tiib

unread,
Aug 29, 2015, 12:36:57 PM8/29/15
to
I'll try to elaborate a bit more, then. It is true (but irrelevant and
orthogonal) that the operating system raises certain SEH exceptions
when it detects issues like integer division by zero or an access
violation.

Why is it irrelevant? Because a SEH exception is not bound to
represent only OS-level issues and faults. It is just an exception
mechanism alien to the C++ language. A SEH exception can be raised
with the operating system API function 'RaiseException':
https://msdn.microsoft.com/en-us/library/windows/desktop/ms680552%28v=vs.85%29.aspx

IMHO such alien exceptions should not fly in C++ by default, and C++
exceptions should not fly in other languages by default. The C++
language specification should be clear about it. Perhaps it can be
made part of the language how to adjust the translation mechanisms
to/from alien exceptions at the interface border, but I would hate
'__try<"Ada">' garbage keywords all over the code, or the like. That
is just pure ugliness to me ... YMMV.

Bo Persson

unread,
Aug 29, 2015, 12:50:49 PM8/29/15
to
On 2015-08-29 14:46, Öö Tiib wrote:
>
> To me such screw-ups leave impression that MS wants C++ developers
> to choose other platforms like Linux, OS-X, Android, iOS,
> Playstation or Wii for launching their products first and perhaps
> when there are enough resources ... then also port it to MS
> platforms later.

No, their intention was that we should all code for Windows first, and
then discover that the code isn't portable. That way they would get a
monopoly.

Isn't working out very well though.


Bo Persson

Robert Wessel

unread,
Aug 30, 2015, 1:01:07 AM8/30/15
to
On Sat, 29 Aug 2015 09:36:35 -0700 (PDT), Öö Tiib <oot...@hot.ee>
wrote:

>On Saturday, 29 August 2015 18:18:10 UTC+3, robert...@yahoo.com wrote:
>> On Sat, 29 Aug 2015 05:46:31 -0700 (PDT), Öö Tiib <oot...@hot.ee>
>> wrote:
>>
>> >On Friday, 28 August 2015 05:36:36 UTC+3, Richard Damon wrote:
Until recently, threads were "alien" to C and C++. Perhaps those
should not have been exposed to C or C++ programs on platforms that
support threads?

Richard Damon

unread,
Aug 30, 2015, 9:01:06 AM8/30/15
to
The problem with this is that extern "C" functions are NOT controlled
by the C standard, but by the C++ standard. It is the C++ standard
that defines the construct extern "xxx" and its meaning. It gives as
its purpose interfacing to code using an "ABI" possibly different
from the one used by the C++ implementation. The standard places few
requirements on these ABIs other than that the implementation must
document which ABIs it supports (and thus, indirectly, the ABIs
themselves, although that documentation may just be a pointer to the
ABI documentation) and that extern "C" must be one of the
syntactically valid options.

The only code that is required to be callable via extern "C" is a
function compiled in C++ with the extern "C" calling convention.
There is no requirement that an actual C implementation exists with
that ABI. Normally there is, as most C++ compilers can also be put
into a "C" mode (since C is largely a subset of C++, especially if
you are only supporting older C standards).

It is quite possible for the "C" ABI (or any other "XXX" ABI) to be
defined in a way that is compatible with passing C++ exceptions.

This says that exceptions through C code aren't just "undefined
behavior": it is implementation defined whether passing exceptions is
defined behavior. It would be presumptuous of the C++ standard to say
that it is "improper" to allow exceptions over the implementation
defined ABI called "C".

We also have the specter of backwards compatibility. Standards
committees take a hard look at changes that break backwards
compatibility, especially quietly. Prior to the addition of noexcept
to the language, it was quite possible to have extern "C" functions
that did throw exceptions. Silently changing this, so that such a
function becomes undiagnosed undefined behavior (undiagnosed because
there is fundamentally no way to know whether a given extern "C"
function will throw, one of the reasons noexcept was added to the
language), would be a quiet breaking change, something the committee
tries hard not to make.

This is basically the same reason noexcept isn't the default on C++
functions too.

>
>> The other big issue would be that extern "C" is just a special case of a
>> whole family of declaration (yes, the only one with a specific
>> definition, but others are allowed as implementation defined behavior),
>> and if you define that extern "C" functions have a different default
>> than normal functions, then by the same manner should that extend to
>> other linkages, after all, if C doesn't understand exceptions, then
>> probably extern "fortran" shouldn't either.
>
> That is simpler since extern "fortran" is implementation-defined. So
> implementation has to define what it does. I would only like that it
> was *required* from implementations not to "forget" that exceptions
> are part of interface.

extern "C" is just as implementation defined; the only difference is
that extern "C" is mandated to exist, as opposed to its existence also
being implementation defined.
>
>> Then you run into the
>> problem, what if you have a language that DOES naturally handle
>> exceptions, maybe even an extern "C++".
>
> Exceptions are part of interface for me on same level as calling
> conventions. Exceptions can differ like calling conventions do
> differ. So if C++ implemention declares that it can provide
> extern "Ada" interface then I would expect either only Ada
> exceptions from it or no exceptions at all.
>

If C++ can call Ada, then Ada can call C++ by defining a function as
extern "Ada", and thus the implementation needs to define how
exceptions cross the boundary (or that they can't). It may well be
that some form of mapping will occur so that each can understand the
other's exceptions (or it doesn't make much sense to allow it).
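Such a mapping at the boundary can be sketched in a few lines: a shim
catches the "foreign" error representation and re-raises it as a C++
exception the caller understands. The ForeignError type and all names
here are hypothetical stand-ins, not any real Ada binding:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical stand-in for another language's exception occurrence.
struct ForeignError { std::string message; };

// Translation shim at the language boundary: catch the foreign error
// and re-raise it as an ordinary C++ exception.
template <class F>
auto call_foreign(F f) {
    try {
        return f();
    } catch (const ForeignError& e) {
        throw std::runtime_error("foreign: " + e.message);
    }
}
```

A caller then sees only std::runtime_error; the foreign type never
crosses into ordinary C++ code.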

Öö Tiib

unread,
Aug 30, 2015, 10:52:55 AM8/30/15
to
Perhaps you did not read what I wrote (sorry if it was too long for
you) and instead formed some sort of strawman of my position in your
head. I am puzzled as to what threads have to do with alien kinds of
exceptions.

Threads are the same for all languages and are not part of the
interface between subroutines, so until recently they were simply
regulated by other standards.

Exceptions differ between languages and are part of the interface
between subroutines. So how they are dealt with is the responsibility
of the inter-language interface, which in C++ has been
'extern "OtherLanguage"' since before its very standardization.

What is the connection here?

Öö Tiib

unread,
Aug 30, 2015, 12:33:11 PM8/30/15
to
Indeed, and my position is that it does so weakly for extern "C".
That is bad, because the majority of the pain in C++ comes from it
containing a duplicated set of features in order to be "backward
compatible" with C, a C with which C++ can't interface decently. The
rest of the extern "xxx" linkages are implementation defined anyway.

> It gives as a
> purpose the interfacing to code using an "ABI" possibly different than
> the one used by the C++ implementation. The standard provides little
> requirements on these ABIs other than that the implementation must
> document what ABIs it supports (and thus indirectly the ABIs themselves,
> although that documentation may just be a pointer to the ABI
> documentation) and a requirement that extern "C" be one of the
> syntactically valid options. The only code that is required to be able
> to call via extern "C" is a function compiled in C++ with the extern "C"
> calling convention. There is no requirement that an actually C
> implementation exists with that ABI. Normally there is, as most C++
> compilers can also be put into a "C" mode (since C is largely a subset
> of C++, especially if you are only supporting older C standards).

Yes, and exceptions are exactly a part of the subroutine call
interface, IOW that ABI. So how exceptions behave must be defined by
whoever defines the requirements for it. Should the C++ standard be
made by software analysts and architects (who try to be clear and
explicit) or by language lawyers (who try to be verbose and vague)?

>
> It is quite possible for the "C" abi (or any other "XXX" abi) to be
> defined in a way that is compatible with passing C++ exceptions.

Where did I claim that interface specification is an impossible task?
Exceptions are no more special than any other part of an interface.
C++ has chosen to raise arbitrary C++ objects as exceptions. An
extern "C" function has limitations on its parameters but none on its
exceptions. Isn't that just work not done?

>
> This says that exceptions through C code aren't just "undefined
> behavior" but it is implement defined whether passing exceptions is
> defined behavior. It would be presumptive of the C++ standard to say
> that it is "improper" to allow exceptions over the implementation
> defined ABI called "C".
>
> We also have the specter of backwards compatibility. Standards
> Committees take a hard look at changing things that break backwards
> compatibility, especially quietly. Since prior to adding noexcept to the
> language, it was quite possible to have extern "C" functions that did
> throw exceptions, to silently change this so that such a function
> becomes undiagnosed undefined behavior (undiagnosed as there is
> fundamentally no way to know if a given extern "C" will throw or not,
> one of the reasons to add noexcept to the language) would be a quiet
> breaking change, something the committee tries hard not to do.

It is perhaps correct about C++ that exceptions have never been
thought out properly. What we need is still a tool with which things
can possibly be made nicely in the future. If it is too risky, then
add extern "C ABI" or whatever and deprecate extern "C" (with a
diagnostic required); that suits me.

> This is basically the same reason noexcept isn't the default on C++
> functions too.

The latter is still OK, since a lot of people use the standard
library in the majority of their code, and it does throw
occasionally. We may need to avoid the standard library in an
embedded system, but there we usually turn exceptions off anyway.
Exactly. Either no exceptions are allowed, or some sort of
translation is in place, with possible constraints. What we have now
is that exceptions just maybe fly, or maybe don't fly, and there are
no restrictions on their binary nature.

Also, I don't like what "implementation defined" means in C++. If
something is "implementation defined", then please make standardized
traits available, so our code can also find out at compile time how
the implementation defined it.