
"Nearly a quarter-century later, why is C++ still so popular?"


Lynn McGuire

Jul 27, 2021, 3:59:56 PM
"Nearly a quarter-century later, why is C++ still so popular?"

https://sdtimes.com/softwaredev/nearly-a-quarter-century-later-why-is-c-still-so-popular/

"Despite C++’s downward trend on the TIOBE Programming Community index
since 2001, the language’s fall from the coveted top two slots in 2020,
vociferous and persistent claims that C++ is “dead like COBOL,” and the
inroads Rust is making in developer circles – C++ is still as
viable, vital and relevant as ever."

Because it just works ?

And I am not impressed with Rust whatsoever.

Lynn

Siri Cruise

Jul 27, 2021, 4:48:34 PM
In article <sdpojb$aq7$2...@dont-email.me>,
Lynn McGuire <lynnmc...@gmail.com> wrote:

> Because it just works ?

Because too many people still think garbage collection is
inefficient, and they can do better. See also Apple's dependence
on reference counting and thus the inability to do generic graph
data structures.
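
A minimal sketch of the cycle problem, with shared_ptr standing in for
the reference counting (illustrative, not from the original post):

#include <memory>

struct Node {
    std::shared_ptr<Node> next;  // owning, counted reference to a neighbour
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->next = a;  // cycle: each node holds the other's count above zero
}   // a and b go out of scope, yet neither count reaches zero: both leak

A tracing (mark/sweep) collector would reclaim both nodes here;
reference counting alone never will.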

Object oriented programming and closures without mark/sweep
collection is a bloody pain in the nethers. Hence C++ and its
eternally broken finalisers.

--
:-<> Siri Seal of Disavowal #000-001. Disavowed. Denied. Deleted. @
'I desire mercy, not sacrifice.' /|\
Discordia: not just a religion but also a parody. This post / \
I am an Andrea Doria sockpuppet. insults Islam. Mohammed

Vir Campestris

Jul 27, 2021, 5:05:50 PM
On 27/07/2021 21:48, Siri Cruise wrote:
> In article <sdpojb$aq7$2...@dont-email.me>,
> Lynn McGuire <lynnmc...@gmail.com> wrote:
>
>> Because it just works ?
>
> Because too many people still think garbage collection is
> inefficient, and they can do better. See also Apple's dependence
> on reference counting and thus the inability to do generic graph
> data structures.
>
> Object oriented programming and closures without mark/sweep
> collection is a bloody pain in the nethers. Hence C++ and its
> eternally broken finalisers.
>
But C++ doesn't have finalisers. That's a feature of GC languages.

Did you mean C#?

Andy

Keith Thompson

Jul 27, 2021, 5:23:26 PM
C++ has destructors. I suspect that's what Siri Cruise was referring to.

Saying that garbage collection is a substitute for destructors suggests
that memory is the only resource that needs to be managed.
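
For instance (a rough sketch, not from the post), a destructor can
release a non-memory resource at a deterministic point:

#include <cstdio>

// RAII wrapper: the destructor closes a FILE handle at a known point.
// A garbage collector would eventually reclaim the wrapper's memory,
// but gives no guarantee about *when* the file is closed.
class File {
    std::FILE* f_;
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {}
    ~File() { if (f_) std::fclose(f_); }  // deterministic release
    File(const File&) = delete;
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
};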

--
Keith Thompson (The_Other_Keith) Keith.S.T...@gmail.com
Working, but not speaking, for Philips
void Void(void) { Void(); } /* The recursive call of the void */

Cholo Lennon

Jul 27, 2021, 8:02:10 PM
On 7/27/21 5:48 PM, Siri Cruise wrote:
> In article <sdpojb$aq7$2...@dont-email.me>,
> Lynn McGuire <lynnmc...@gmail.com> wrote:
>
>> Because it just works ?
>
> Because too many people still think garbage collection is
> inefficient, and they can do better.

It's not just what people think; there are a lot of scenarios where garbage
collection is not a problem. I worked many years in the telecom industry
using C++ and Java... when the hardware got cheap and powerful, we
started to move a lot of services, protocol stacks, applications and
code base to Java. Why? Because development and maintenance across
several platforms (Windows 32/64-bit, Solaris Intel/Sparc, Linux
32/64, RHEL 4-8) was way simpler. How about the performance? Not a
problem: our application servers were still able to deal with thousands
of transactions per second without burning the CPUs. Of course, C++ was
still used, for legacy applications, or for low-level layers in some,
not all, protocol stacks.

Regards

--
Cholo Lennon
Bs.As.
ARG

Bonita Montero

Jul 27, 2021, 10:05:23 PM
Because it's orders of magnitude more productive and maintainable than C,
with the same performance.

Bo Persson

Jul 28, 2021, 2:20:06 AM
Here "dead like COBOL" could mean "used a lot, but after 25+ years the
developers don't need to ask lots of questions on the internet".

So it doesn't show up on TIOBE.

Juha Nieminen

Jul 28, 2021, 4:16:53 AM
Siri Cruise <chine...@yahoo.com> wrote:
> In article <sdpojb$aq7$2...@dont-email.me>,
> Lynn McGuire <lynnmc...@gmail.com> wrote:
>
>> Because it just works ?
>
> Because too many people still think garbage collection is
> inefficient

Because it is.

Or, more precisely, perhaps GC *in itself* is not inefficient, but the
paradigm whose problems it solves has turned out to be.

You see, garbage collection arose primarily from a practical problem in
object-oriented programming, and that problem is: When you dynamically
allocate tons and tons of individual objects, and they refer to each
other in a complicated mesh of references, how do you keep track of all
of this so that each object is properly destroyed once nothing refers to
it, and do it in a manner that's as easy for the programmer as possible,
as efficient as possible, and allows for circular dependencies without
causing leaks?

Thus, an incredible amount of academic and practical research work was
put into garbage collection algorithms and implementations that were as
efficient as possible.

Problem is, it's an efficient solution for a fundamentally inefficient
programming paradigm. It's a solution to something you actually shouldn't
be doing in the first place, in a modern computer system. What is this
thing you shouldn't be doing, you might ask?

Dynamically allocating tons and tons of individual objects. That's what.

In the 1980's and largely the 1990's, when OOP was the absolute king,
dynamically allocating tons of individual objects wasn't really such a
huge problem. CPUs didn't care how you were accessing memory, or how
the execution flow of the program jumped around. Each memory access was
equally slow regardless of which memory address you were using, and
every conditional jump was equally slow regardless of anything.

No longer.

CPUs started introducing memory caches, long instruction pipelines,
predictive branching, and a bunch of other optimizations which
suddenly started caring about how you access memory and how you do
your conditional jumps.

It has turned out that one of the greatest programming paradigms that has
ever existed, object-oriented programming, is also a performance killer
in modern CPUs. This is because OOP, at least the traditional approach
to it, is extremely cache-unfriendly, predictive-branching-unfriendly,
and pipeline-unfriendly, and thus tends to produce inefficient
executables.

For this reason, for quite some time now, many of the major projects out
there that require extreme performance, such as game engines, have
started moving away from an OOP design to a more data-oriented design
which optimizes for modern CPU architectures much better than OOP does.

And the thing about data-oriented design is that it neither benefits
from nor relies much on a garbage collector, because you
are trying to avoid random dynamic memory allocations in the first
place (and prefer all data to be neatly in large arrays).

So yes, perhaps garbage collection *in itself* is not inefficient, but it's
fundamentally tied to a programming paradigm that is, and moving away from
that paradigm also lessens the need for GC.
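
To make the contrast concrete, a toy sketch of the same loop in the two
styles (illustrative only, not from the original post):

#include <memory>
#include <vector>

struct Particle { float x, y, vx, vy; };

// Allocation-heavy style: each particle lives in its own heap
// allocation; traversal chases pointers scattered across the heap.
void update_scattered(std::vector<std::unique_ptr<Particle>>& ps, float dt) {
    for (auto& p : ps) {          // typically one cache miss per particle
        p->x += p->vx * dt;
        p->y += p->vy * dt;
    }
}

// Data-oriented style: particles stored by value, contiguously.
void update_contiguous(std::vector<Particle>& ps, float dt) {
    for (auto& p : ps) {          // linear, prefetch-friendly traversal
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}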

MrSpud_o...@khlkg.info

Jul 28, 2021, 5:30:46 AM
That depends on who wrote it. Unfortunately a lot of the C++ I see seems to be
a load of syntax circle-jerking, with pointless features thrown into the code
that bring nothing to the table and obfuscate the code path, simply
because the dev wanted to try them out. E.g. recently I saw coroutines used
completely inappropriately, when standard class methods and private variables
would have done the job far better and simpler.

Alf P. Steinbach

Jul 28, 2021, 10:22:30 AM
Very clarifying. I agree with you. I just didn't think of it that way.

- Alf

Siri Cruise

Jul 28, 2021, 10:53:23 AM
> On 28 Jul 2021 10:16, Juha Nieminen wrote:

> > In the 1980's and largely the 1990's, when OOP was the absolute king,
> > dynamically allocating tons of individual objects wasn't really such a
> > huge problem. CPUs didn't care how you were accessing memory, or how
> > the execution flow of the program jumped around. Each memory access was
> > equally slow regardless of which memory address you were using, and
> > every conditional jump was equally slow regardless of anything.

So C++ but no OOP?

Alf P. Steinbach

Jul 28, 2021, 3:24:00 PM
On 28 Jul 2021 16:53, Siri Cruise wrote:
>> On 28 Jul 2021 10:16, Juha Nieminen wrote:
>
>>> In the 1980's and largely the 1990's, when OOP was the absolute king,
>>> dynamically allocating tons of individual objects wasn't really such a
>>> huge problem. CPUs didn't care how you were accessing memory, or how
>>> the execution flow of the program jumped around. Each memory access was
>>> equally slow regardless of which memory address you were using, and
>>> every conditional jump was equally slow regardless of anything.
>
> So C++ but no OOP?
>


Multi-paradigm. :)

- Alf (Oh, "I like your car"! :-o )

Siri Cruise

Jul 28, 2021, 5:17:03 PM
In article <sdsaru$emf$1...@dont-email.me>,
My SQL interface returns effectively arrays of unions. I also
have no deallocation interface. I do have explicit begin/end
because SQL tables really need to have transactions ended or
aborted in a clear and definite manner.

Juha Nieminen

Jul 29, 2021, 2:54:13 AM
Siri Cruise <chine...@yahoo.com> wrote:
>> On 28 Jul 2021 10:16, Juha Nieminen wrote:
>
>> > In the 1980's and largely the 1990's, when OOP was the absolute king,
>> > dynamically allocating tons of individual objects wasn't really such a
>> > huge problem. CPUs didn't care how you were accessing memory, or how
>> > the execution flow of the program jumped around. Each memory access was
>> > equally slow regardless of which memory address you were using, and
>> > every conditional jump was equally slow regardless of anything.
>
> So C++ but no OOP?

I suppose it depends a bit on your definition of "object-oriented
programming".

In the most traditional sense OOP means that your program has been
structured into classes and sub-classes, in other words, inheritance
hierarchies, and most often this involves dynamic binding (ie. virtual
functions). The abstraction that such an inheritance hierarchy
introduces implies that much of the code handles base class type
references/pointers, which point to dynamically allocated objects,
and whose member functions are implemented as virtual functions
overridden in the derived classes.

Most GUI libraries implement very traditional OOP inheritance
hierarchies (I often like to say that it's almost as if OOP was
created precisely for GUI programming, because it's the most
prominent and perfect example.)

Of course, as it turns out, this paradigm is inefficient in modern CPUs
(because of caches, pipelines, predictive branching, etc.) The CPU
doesn't really like objects that are randomly placed in memory, thus
with data being accessed from random memory locations. It also doesn't
like branches it cannot predict. Or branching at all, especially
indirect branching. (Also, compilers have a hard time optimizing
code full of actual (non-inlined) function calls, because they can't see
what the function is doing, and that hinders things like autovectorization.)

Of course in C++ in particular there's a slightly better alternative
to this, which is to handle objects by value rather than by pointer.
This allows you, for example, to put objects into arrays (ie. the arrays
don't just contain pointers to objects, but the objects themselves).
This isn't yet completely optimal, but it's a step in the right
direction.

Of course if you want to handle objects by value, you need to restrict
your object-oriented design. The pointer-to-base-object-handles-derived-
object becomes limited, and the use of virtual functions becomes more
limited.

You still can have inheritance, and a full inheritance hierarchy,
which makes it OOP-like, but there are some limitations.

(But, data-oriented design needs more than just being able to handle
objects by value. The problem with handling arrays of objects is that
the member variables you are interested in will be spaced out,
sometimes by more than the cache line size, which makes it almost
useless in terms of cache optimization.)
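
A toy sketch of the two styles (class names are illustrative, not from
the post):

#include <memory>
#include <vector>

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};
struct Circle : Shape {
    double r = 1;
    double area() const override { return 3.141592653589793 * r * r; }
};

// Traditional OOP: base-class pointers to individually allocated
// objects, virtual dispatch on every call.
double total_area(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double sum = 0;
    for (const auto& s : shapes) sum += s->area();  // indirect branch each time
    return sum;
}

// By-value alternative: a homogeneous array of concrete objects.
// No pointer chasing, calls can be inlined, but no mixed types in one array.
double total_area(const std::vector<Circle>& circles) {
    double sum = 0;
    for (const Circle& c : circles) sum += c.area();
    return sum;
}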

Chris Vine

Jul 29, 2021, 4:54:19 AM
On Wed, 28 Jul 2021 07:53:05 -0700
Siri Cruise <chine...@yahoo.com> wrote:
> > On 28 Jul 2021 10:16, Juha Nieminen wrote:
>
> > > In the 1980's and largely the 1990's, when OOP was the absolute king,
> > > dynamically allocating tons of individual objects wasn't really such a
> > > huge problem. CPUs didn't care how you were accessing memory, or how
> > > the execution flow of the program jumped around. Each memory access was
> > > equally slow regardless of which memory address you were using, and
> > > every conditional jump was equally slow regardless of anything.
>
> So C++ but no OOP?

It's not just OOP which is the problem. Move semantics are also an
issue, because the purpose of moving an rvalue, instead of copying it,
is to enable the internal implementation of the object to be transferred
into another object. This usually involves allocating the parts of the
object to be transferred on free store so that pointers can be copied.

In cases where this is relevant, there is a contest between the
approaches of (i) moving internals which have been allocated on free
store, and (ii) copying internals which have been allocated on the
stack. In any given case, profiling is one way of determining which
approach is better.
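
For example, a minimal sketch of the two approaches:

#include <array>
#include <string>

// (i) Heap-backed internals: moving steals the buffer pointer. O(1),
// but the data had to be allocated on free store in the first place.
std::string make_heap_backed() {
    std::string s(1000, 'x');
    return s;  // moved out (or elided): no copy of the 1000 bytes
}

// (ii) Inline (stack) internals: "moving" degenerates to a memberwise
// copy, but there was never a free-store allocation to begin with.
std::array<char, 1000> make_inline() {
    std::array<char, 1000> a{};
    return a;  // copied byte for byte (possibly elided)
}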

Öö Tiib

Jul 29, 2021, 12:31:52 PM
On Thursday, 29 July 2021 at 09:54:13 UTC+3, Juha Nieminen wrote:
>
> Of course, as it turns out, this paradigm is inefficient in modern CPUs
> (because of caches, pipelines, predictive branching, etc.) The CPU
> doesn't really like objects that are randomly placed in memory, thus
> with data being accessed from random memory locations. It also doesn't
> like branches it cannot predict. Or branching at all, especially
> indirect branching. (Also, compilers have a hard time optimizing
> code full of true function calls, because they can't see what the
> function is doing, and it hinders things like autovectorization.)

When it matters for performance, there are presumably
large containers of such objects? boost::base_collection can help
there: instead of allocating each object dynamically at random memory
locations, it keeps them in per-type subcontainers, by value.
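
A sketch of the idea, assuming Boost.PolyCollection's documented
interface (the shape types here are illustrative):

#include <boost/poly_collection/base_collection.hpp>
#include <iostream>

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};
struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159 * r * r; }
};
struct Square : Shape {
    double s;
    explicit Square(double s) : s(s) {}
    double area() const override { return s * s; }
};

int main() {
    boost::base_collection<Shape> shapes;
    shapes.insert(Circle{1.0});  // stored by value, segmented by dynamic type
    shapes.insert(Square{2.0});  // each type gets its own contiguous chunk
    double sum = 0;
    for (const Shape& s : shapes) sum += s.area();  // still a virtual call,
    std::cout << sum << '\n';                       // but cache-friendlier layout
}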

MrSpu...@slwgrcbmd1l7yi1.org

Jul 30, 2021, 7:21:19 AM
And where do these subcontainers allocate their memory from? The magic woo woo
heap in special moonbeam memory?

Manfred

Jul 30, 2021, 7:41:02 AM
Good points, agreed.

Manfred

Jul 30, 2021, 7:51:31 AM
On 7/28/2021 4:53 PM, Siri Cruise wrote:
>> On 28 Jul 2021 10:16, Juha Nieminen wrote:
>
>>> In the 1980's and largely the 1990's, when OOP was the absolute king,
>>> dynamically allocating tons of individual objects wasn't really such a
>>> huge problem. CPUs didn't care how you were accessing memory, or how
>>> the execution flow of the program jumped around. Each memory access was
>>> equally slow regardless of which memory address you were using, and
>>> every conditional jump was equally slow regardless of anything.
>
> So C++ but no OOP?
>

I wouldn't rule out OOP entirely, but it is true that OOP was
overestimated in its early years.
It is not only a matter of efficiency and being hardware-friendly; it is
also about the fact that OOP is well suited for some classes of
problems, but not for others - Juha gave GUI programming as a good
example, but there is an entire world outside that area.

For example, problems that are inherently procedural or
algorithm-oriented gain more trouble than benefit from an OOP model.

The strength of C++ is that it has lots to offer for several different
programming paradigms beyond OOP.

Juha Nieminen

Aug 1, 2021, 4:28:20 AM
As mentioned earlier, that's a step in the right direction, but it doesn't
achieve optimal performance (with the possible exception of very small
objects whose every data member is accessed during the same iterative
process).

The reason why putting objects by value into arrays is better but not
optimal is that most often, for a given operation that you want
to perform on each object, you are only interested in one (or a few)
of their member variables. Quite often the objects will also contain
other member variables, which will cause the ones you are interested
in to be spaced out in the array, with big gaps.

So, even if you are traversing the array linearly from beginning to
end, you will be making jumps of the size of these gaps, which is
not optimal for cache locality. The number of cache misses will be
larger than if the data you are interested in was stored exclusively,
without any gaps.

This is one of the unfortunate drawbacks of the otherwise brilliant idea
of modularity and object-oriented design: It groups related values into
the same class, which is nice designwise, but suboptimal for performance
because quite often you aren't handling *all* of these member values,
only certain ones, so you'll be jumping in larger steps than necessary.

Things like game engines and number-crunching applications want to
squeeze out every single clock cycle they can, so this design is not
good for them. They need a completely different approach from the
traditional class design.

Öö Tiib

Aug 1, 2021, 8:49:31 AM
On Sunday, 1 August 2021 at 11:28:20 UTC+3, Juha Nieminen wrote:
An example must always be concrete, as in the general case there are
no optimal solutions. "Optimal" is always a situational term, so nothing
can be optimal for everything.

Performance issues cannot even be shown to exist without
profiling real data in real situations. When we profile, only a small
fraction of code lines is executed for the vast majority of the run time,
and only a small subset of types instantiates the vast majority of
run-time objects.

For the vast majority of code, a nice modular and object-oriented design
has no meaningful drawbacks for overall product performance.
That is one thing that makes C++ powerful. We can use a nice
and flexible design most of the time without worrying.

> This is one the unfortunate drawbacks of the otherwise brilliant idea
> of modularity and object-oriented design: It groups related values into
> the same class, which is nice designwise, but suboptimal for performance
> because quite often you aren't handling *all* of these member values,
> only certain ones, so you'll be jumping in larger steps than necessary.

With that small performance-critical subset we can do in C++ whatever
performance-tuning tricks are needed for the concrete situation. That
is the other thing that makes C++ powerful. It is often possible to get several
times better overall performance by optimizing only a small subset of the code.

> Things like game engines and number-crunching applications want to
> squeeze out every single clock cycle they can, so this design is not
> good for them. They need a completely different approach from the
> traditional class design.

Indeed, but I have a hard time imagining how to even use that traditional
class design in a concrete number-crunching example. Can you
give an example? Linear algebra subroutines? Fourier transforms?
Sparse matrices? Graph analytics? Compression/decompression?
Number crunching is usually quite abstract, specific and
constrained, not a fit for processing our arbitrary anything-goes data in
our OOP hierarchies, and it is often run on a GPU. So we need a
translation layer that transforms data from our hierarchy for processing
and back anyway.

Vir Campestris

Aug 1, 2021, 4:40:57 PM
On 29/07/2021 07:53, Juha Nieminen wrote:
> Of course in C++ in particular there's a slightly better alternative
> to this, which is to handle objects by value rather than by pointer.
> This allows you, for example, to put objects into arrays (ie. the arrays
> don't just contain pointers to objects, but the objects themselves).
> This isn't yet completely optimal, but it's a step in the right
> direction.

My usual advice on collections is:

Use vector.
No, really use vector.
Are you sure you shouldn't use vector?

It's the right solution for most cases. One of the nice things is of
course that vector represents an array of objects. Allocated all in a
single block of contiguous memory.

Without needing pointers or references.

Yes, of course there are cases for using the other collections, but
they're not as common in my experience.

Andy

Juha Nieminen

Aug 2, 2021, 6:23:45 AM
Öö Tiib <oot...@hot.ee> wrote:
> Indeed, but I have a hard time imagining how to even use that traditional
> class design in a concrete number-crunching example.

Suppose you are handling large triangle meshes where each vertex has quite
a lot of data. Rather obviously you have the vertex 3D position, but you
may also have texture UV coordinates for that vertex, a normal vector at
that vertex, its color, its brightness, and any number of other values that
may be needed for rendering the triangle mesh.

It would be a nice design to create a vertex class (or perhaps struct)
where every one of those types of data related to a vertex has been
collected. So it could look something like:

class Vertex
{
    float position[3];
    float uv_coords[2];
    float environment_map_coords[2];
    float normal_vector[3];
    unsigned char color_rgba[4];

public:
    // ...
};

That's really nice designwise. Not so nice in terms of performance.
Most often you want to do some operation to all vertices, and this
operation only cares about one or two of those member variables and
doesn't need the rest.

For example, suppose you want to transform the mesh in some manner,
by modifying its position values. Even if all the Vertex objects are
in an array, and even if you traverse the array linearly, you will
be jumping in large steps from one Vertex to the next, most probably
having a cache miss every time.

If, instead, you had an array that only contains the position coordinates
of all the vertices and nothing more, it will be much more compact, and
traversing it from beginning to end will cause significantly less
cache misses.

Or suppose you wanted to do some operations to the color of each vertex.
Again, you would be jumping in large steps from one Vertex object to
the next. If all the colors were exclusively in an array, then they
would be packed as compactly as possible, and a linear traversal would
thus be much more efficient.

(This is the idea with Data Oriented Design.)
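
A sketch of the alternative layout (illustrative, not from the post):
positions stored exclusively, so a transform touches only the bytes it
needs.

#include <vector>

// Structure-of-arrays layout: each attribute lives in its own tightly
// packed array, instead of one array of full Vertex objects.
struct MeshData {
    std::vector<float> positions;         // x0,y0,z0, x1,y1,z1, ...
    std::vector<float> uv_coords;         // u0,v0, u1,v1, ...
    std::vector<unsigned char> colors_rgba;
};

// Translating the mesh reads and writes nothing but positions:
// a dense linear scan with minimal cache misses.
void translate(MeshData& mesh, float dx, float dy, float dz) {
    for (std::size_t i = 0; i + 2 < mesh.positions.size(); i += 3) {
        mesh.positions[i]     += dx;
        mesh.positions[i + 1] += dy;
        mesh.positions[i + 2] += dz;
    }
}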

MrSpud...@_h_.co.uk

Aug 2, 2021, 11:06:11 AM
On Mon, 2 Aug 2021 10:23:27 -0000 (UTC)
Juha Nieminen <nos...@thanks.invalid> wrote:
>Or suppose you wanted to do some operations to the color of each vertex.
>Again, you would be jumping in large steps from one Vertex object to
>the next. If all the colors were exclusively in an array, then they
>would be packed as compactly as possible, and a linear traversal would
>thus be much more efficient.
>
>(This is the idea with Data Oriented Design.)

Which is just a trendy name for the way code used to be written before
structured and then OO design came along.

Öö Tiib

Aug 3, 2021, 7:57:53 AM
On Monday, 2 August 2021 at 13:23:45 UTC+3, Juha Nieminen wrote:
> Öö Tiib <oot...@hot.ee> wrote:
> > Indeed, but I have a hard time imagining how to even use that traditional
> > class design in a concrete number-crunching example.
> Suppose you are handling large triangle meshes where each vertex has quite
> a lot of data. Rather obviously you have the vertex 3D position, but you
> may also have texture UV coordinates for that vertex, a normal vector at
> that vertex, its color, its brightness, and any other number of values that
> may be needed for rendering the triangle mesh.
>
> It would be a nice design to create a vertex class (or perhaps struct)
> where every one of those types of data related to a vertex has been
> collected. So it could look something like:
>
> class Vertex
> {
>     float position[3];
>     float uv_coords[2];
>     float environment_map_coords[2];
>     float normal_vector[3];
>     unsigned char color_rgba[4];
>
> public:
>     // ...
> };
>
> That's really nice designwise. Not so nice in terms of performance.
> Most often you want to do some operation to all vertices, and this
> operation only cares about one or two of those member variables and
> doesn't need the rest.

It is an example of what I wrote:
"Number crunching is usually quite abstract, specific and
constrained, not a fit for processing our arbitrary anything-goes data in
our OOP hierarchies, and it is often run on a GPU."

It feels rather unlikely that we want to do anything with such a Vertex
instance in isolation. Instead we want those to be processed (in massive
amounts) by some rendering engine (like OpenGL). So we either have
translation/generator functionality that produces those vertices in the form
the engine needs, or, for simple static models, can have the engine
primitives directly in the instance:

class StaticModel {
    std::vector<glm::vec3> vertices;
    std::vector<glm::vec2> uvs;
    std::vector<glm::vec3> normals;
    // etc...
public:
    // etc...
};

> For example, suppose you want to transform the mesh in some manner,
> by modifying its position values. Even if all the Vertex objects are
> in an array, and even if you traverse the array linearly, you will
> be jumping in large steps from one Vertex to the next, most probably
> having a cache miss every time.
>
> If, instead, you had an array that only contains the position coordinates
> of all the vertices and nothing more, it will be much more compact, and
> traversing it from beginning to end will cause significantly fewer
> cache misses.

The majority of such per-vertex mass transformations are done by the
rendering engine; we just call a function. We can't pass our "nice" vertices
to the engine anyway, so it is doubtful why we should have them. OTOH, when
our model consists of higher-level 3D primitives like moving cylinders,
cones and spheres, then we change the properties of those instead of
tinkering with each vertex. There OOP helps greatly.

Juha Nieminen

Aug 3, 2021, 8:17:10 AM
Öö Tiib <oot...@hot.ee> wrote:
> It feels rather unlikely that we want to do anything with such a Vertex
> instance in isolation. Instead we want those to be processed (in massive
> amounts) by some rendering engine (like OpenGL).

I don't really understand why you have such a hard time imagining a situation
where something like that Vertex class would feel like a very convenient and
good design choice.

Not all programs that handle, for example, triangle meshes are doing so to
merely feed the data to OpenGL (or any other API). There may be myriads of
reasons why a program may want to handle triangle meshes in some manner,
and some of these applications may be something that benefit from extreme
speed (because they may need to do a lot of heavy operations to gigantic
triangle meshes).

It's very natural to think that, since each vertex of the mesh has a lot
of data attached to it (such as its position, uv-coordinates, etc.), all
this data should be grouped into one class (or struct) for easy handling. After
all, if you need to, for example, copy a vertex, or remove a vertex,
or do many other types of operations to single vertex objects, it's
most convenient when the vertex is one single object.

Many operations become less convenient and more laborious, requiring
writing more code, when the data of the vertices has been split into
separate arrays.

But the thing is, if you want maximal efficiency, sometimes you need to
do some compromises regarding convenience and nice abstractions.

Öö Tiib

Aug 3, 2021, 9:01:50 AM
On Tuesday, 3 August 2021 at 15:17:10 UTC+3, Juha Nieminen wrote:
> Öö Tiib <oot...@hot.ee> wrote:
> > It feels rather unlikely that we want to do anything with such a Vertex
> > instance in isolation. Instead we want those to be processed (in massive
> > amounts) by some rendering engine (like OpenGL).
> I don't really understand why you have such a hard time imagining a situation
> where something like that Vertex class would feel like a very convenient and
> good design choice.
>
> Not all programs that handle, for example, triangle meshes are doing so to
> merely feed the data to OpenGL (or any other API). There may be myriads of
> reasons why a program may want to handle triangle meshes in some manner,
> and some of these applications may be something that benefit from extreme
> speed (because they may need to do a lot of heavy operations to gigantic
> triangle meshes).

As a rule we want to render those too, if for nothing else then for debugging.
Therefore, if we really process 3D triangles directly, it is more convenient
to keep the data layout suitable for the rendering API, even if we do part of
the processing outside of that API as well.

> It's very natural to think that, since each vertex of the mesh has a lot
> of data attached to it (such as its position, uv-coordinates, etc.), all
> this data should be grouped into one class (or struct) for easy handling. After
> all, if you need to, for example, copy a vertex, or remove a vertex,
> or do many other types of operations to single vertex objects, it's
> most convenient when the vertex is one single object.
>
> Many operations become less convenient and more laborious, requiring
> writing more code, when the data of the vertices has been split into
> separate arrays.
>
> But the thing is, if you want maximal efficiency, sometimes you need to
> do some compromises regarding convenience and nice abstractions.

Yes, sometimes it can be inconvenient. It does not matter that I don't see
a positive case with these vertices; I've seen it with other things. That all
is in conformance with what I wrote: "With that small performance-critical
subset we can do in C++ whatever performance-tuning tricks are needed
for the concrete situation. That is the other thing that makes C++ powerful.
It is often possible to get several times better overall performance by
optimizing only a small subset of the code." It does not mean that we need to
switch all our data into a format that is inconvenient to reason about.

Chris M. Thomasson

Aug 3, 2021, 4:10:38 PM
On 7/27/2021 12:59 PM, Lynn McGuire wrote:
> "Nearly a quarter-century later, why is C++ still so popular?"
>
> https://sdtimes.com/softwaredev/nearly-a-quarter-century-later-why-is-c-still-so-popular/
>
>
> "Despite C++’s downward trend on the TIOBE Programming Community index
> since 2001, the language’s fall from the coveted top two slots in 2020,
> vociferous and persistent claims that C++ is “dead like COBOL,” and the
> inroads Rust is making in developer circles – C++ is still as
> viable, vital and relevant as ever."
>
> Because it just works ?
>
> And I am not impressed with Rust whatsoever.
>

Never used Rust, humm. Perhaps I will give it a go, maybe start with
some online compilers:

https://play.rust-lang.org

Humm, it should be fun for me to try to port some of my existing
programs into Rust.

C++ is an excellent language that can be used to create many awesome
things. Want to write a low-level subsystem, or a runtime for your own
language? C++ is there. Want to create really fast low-level server
code? C++ is there. Are you looking for low-level, fairly fine-grained
access to std threads and atomics/membars? C++ is there!

C++ is there for a lot of things. Want to create an OS? C++ can come in
handy. Keep in mind that nobody has to use all of the "fancy" features.
Wrt the OS case, one can use "just" enough C++ to get the job done.

Lynn McGuire

Aug 3, 2021, 7:37:28 PM
The newest Firefox, version 90, is reputedly now majority Rust. I am
having problems with it using enormous amounts of RAM in its ten Win64
processes on my Windows 7 x64 Pro PC, up to 9 GB of RAM now. I have been
shutting Firefox down and restarting it twice a day as a workaround. The
Firefox developers are not interested.

Lynn

Richard

Aug 5, 2021, 3:49:17 PM
[Please do not mail me a copy of your followup]

Keith Thompson <Keith.S.T...@gmail.com> spake the secret code
<871r7ji...@nosuchdomain.example.com> thusly:

>Saying that garbage collection is a substitute for destructors suggests
>that memory is the only resource that needs to be managed.

I'd say that it also implies an incomplete/shallow understanding of
both destructors and garbage collection.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

Cholo Lennon

Aug 5, 2021, 5:17:22 PM
Well, the problem seems to be the developers, not the language. I
suffered the same memory issues with several versions of Firefox years
before Rust.


--
Cholo Lennon
Bs.As.
ARG

Ian Collins

Aug 7, 2021, 9:03:40 PM
On 04/08/2021 01:01, Öö Tiib wrote:
> On Tuesday, 3 August 2021 at 15:17:10 UTC+3, Juha Nieminen wrote:
>> Öö Tiib <oot...@hot.ee> wrote:
>>> It feels rather unlikely that we want to do anything with such a Vertex
>>> instance in isolation. Instead we want those to be processed (in massive
>>> amounts) by some rendering engine (like OpenGL).
>> I don't really understand why you have such a hard time imagining a situation
>> where something like that Vertex class would feel like a very convenient and
>> good design choice.
>>
>> Not all programs that handle, for example, triangle meshes are doing so to
>> merely feed the data to OpenGL (or any other API). There may be myriads of
>> reasons why a program may want to handle triangle meshes in some manner,
>> and some of these applications may be something that benefit from extreme
>> speed (because they may need to do a lot of heavy operations to gigantic
>> triangle meshes).
>
> As a rule we want to render those too, if for nothing else then for debugging.
> Therefore, if we really process 3D triangles directly, it is more convenient
> to keep the data layout suitable for the rendering API, even if we do part of
> the processing outside of that API as well.

Our application's positioning engine does not render the triangle meshes
we use for mapping. Its job is to work out where targets should be on a
surface and to generate guidance lines on the map. The rendering is
done using OpenGL on another device.

I'm sure this situation isn't uncommon.

--
Ian.

Chris M. Thomasson

Aug 8, 2021, 1:43:38 AM
Not uncommon at all. Not sure if the following scenario is "comparable"
or not, however... I have pure C++ that generates scenes for another
application to render, PovRay. In this case, I did not implement a
raytracer. Imvvho, PovRay is pretty fun to work with.

https://youtu.be/skGUAXAx6eg

Other times, I will implement a crude distance estimator for 3d work:

https://www.shadertoy.com/view/fdBSzm

Blue Hat

Aug 8, 2021, 2:34:19 PM
Lynn McGuire <lynnmc...@gmail.com> wrote in message:
> "Nearly a quarter-century later, why is C++ still so popular?" https://sdtimes.com/softwaredev/nearly-a-quarter-century-later-why-is-c-still-so-popular/ "Despite C++’s downward trend on the TIOBE Programming Community index since 2001, the language’s fall from the coveted top two slots in 2020, vociferous and persistent claims that C++ is “dead like COBOL,” and the inroads Rust is making in developer circles – C++ is still as viable, vital and relevant as ever." Because it just works ? And I am not impressed with Rust whatsoever. Lynn

Neither am I.
--


----Android NewsGroup Reader----
https://piaohong.s3-us-west-2.amazonaws.com/usenet/index.html

Jorgen Grahn

Aug 19, 2021, 4:08:57 PM
On Thu, 2021-08-05, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> Keith Thompson <Keith.S.T...@gmail.com> spake the secret code
> <871r7ji...@nosuchdomain.example.com> thusly:
>
>>Saying that garbage collection is a substitute for destructors suggests
>>that memory is the only resource that needs to be managed.
>
> I'd say that it also implies an incomplete/shallow understanding of
> both destructors and garbage collection.

I suspect the difference is higher up, on the design level. When I
design my C++ code I pay attention to ownership and lifetime, and I
rarely or never find a need for garbage collection or reference
counting/shared_ptr.

Perhaps others optimize their designs for other aspects, and end up
with a legitimate need for those things. And perhaps some libraries
and frameworks force such designs (for example, I don't do GUIs).

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Siri Cruise

Aug 19, 2021, 9:20:42 PM
In article <slrnshtei9.5...@frailea.sa.invalid>,
Jorgen Grahn <grahn...@snipabacken.se> wrote:

> I suspect the difference is higher up, on the design level. When I
> design my C++ code I pay attention to ownership and lifetime, and I
> rarely or never find a need for garbage collection or reference
> counting/shared_ptr.

Implement a directed graph with dynamic edges and nodes, with
no natural distinction between forward edges and back edges. Try to figure
out how to pay careful enough attention to avoid reference
tracing or counting. So do reference counting. Now when you
delete an edge to a node, you have to check whether it only receives
back edges, and if so, convert a back edge to an edge. This
conversion has nothing to do with the directed graph itself; it is
an artifact of reference counting.

Juha Nieminen

Aug 20, 2021, 2:13:42 AM
Siri Cruise <chine...@yahoo.com> wrote:
> Implement a directed graph with dynamic edges and nodes, with
> no natural distinction between forward edges and back edges. Try to figure
> out how to pay careful enough attention to avoid reference
> tracing or counting. So do reference counting. Now when you
> delete an edge to a node, you have to check whether it only receives
> back edges, and if so, convert a back edge to an edge. This
> conversion has nothing to do with the directed graph itself; it is
> an artifact of reference counting.

I can't think of a data structure where the elements require
reference counting, when those elements are solely used within
that data structure itself.

Usually GC or reference counting is needed when *something else*
may have a reference/pointer to the object that may last longer
than the data container.

Jorgen Grahn

Aug 20, 2021, 2:17:37 AM
On Fri, 2021-08-20, Siri Cruise wrote:
> In article <slrnshtei9.5...@frailea.sa.invalid>,
> Jorgen Grahn <grahn...@snipabacken.se> wrote:
>
>> I suspect the difference is higher up, on the design level. When I
>> design my C++ code I pay attention to ownership and lifetime, and I
>> rarely or never find a need for garbage collection or reference
>> counting/shared_ptr.
>
> Implement a directed graph with dynamic edges and nodes, with
> no natural distinction between forward edges and back edges.

No doubt there are scenarios where Java-style garbage collection is
the best solution -- it's just that I have never been in that
situation.

When I've done graphs, it has been enough to let a Graph object (or
just a std::set<Edge>) own everything.

Siri Cruise

Aug 20, 2021, 3:04:23 AM
In article <slrnshui7h.5...@frailea.sa.invalid>,
Jorgen Grahn <grahn...@snipabacken.se> wrote:

> On Fri, 2021-08-20, Siri Cruise wrote:
> > In article <slrnshtei9.5...@frailea.sa.invalid>,
> > Jorgen Grahn <grahn...@snipabacken.se> wrote:
> >
> >> I suspect the difference is higher up, on the design level. When I
> >> design my C++ code I pay attention to ownership and lifetime, and I
> >> rarely or never find a need for garbage collection or reference
> >> counting/shared_ptr.
> >
> > Implement a directed graph with dynamic edges and nodes, with
> > no natural distinction between forward edges and back edges.
>
> No doubt there are scenarios where Java-style garbage collection is
> the best solution -- it's just that I have never been in that
> situation.

Directed graphs are an ideal way of handling complex relations.
Encode the relation into a graph; use Tarjan's algorithm to find
strongly connected components; factor the graph by SCCs. The
resulting reduced graph is a DAG, which is the partial order with
each SCC being an equivalence class. Do topological sorting on
the DAG. You can then deal with the original relation equivalence
class by equivalence class, either from the sources or the sinks. From
the sinks means every edge in an SCC goes to the same SCC or to a
node of a previously processed equivalence class.

Many programming problems are about distributing information
around a relation of nodes. The usual method is to find fixed
points on bit vectors.
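
A sketch of that pipeline using Boost.Graph; the condensation step is
spelled out by hand and the details are illustrative:

#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/strong_components.hpp>
#include <boost/graph/topological_sort.hpp>
#include <iterator>
#include <set>
#include <utility>
#include <vector>

int main() {
    using Graph = boost::adjacency_list<boost::vecS, boost::vecS,
                                        boost::directedS>;
    Graph g(4);
    boost::add_edge(0, 1, g);
    boost::add_edge(1, 0, g);  // {0, 1} form one strongly connected component
    boost::add_edge(1, 2, g);
    boost::add_edge(2, 3, g);

    // 1. Tarjan: comp[v] = index of the equivalence class (SCC) of v.
    std::vector<int> comp(boost::num_vertices(g));
    int n = boost::strong_components(g,
        boost::make_iterator_property_map(comp.begin(),
                                          boost::get(boost::vertex_index, g)));

    // 2. Factor by SCCs: the condensation is a DAG over the classes.
    Graph dag(n);
    std::set<std::pair<int, int>> seen;
    for (auto [ei, eend] = boost::edges(g); ei != eend; ++ei) {
        int u = comp[boost::source(*ei, g)];
        int v = comp[boost::target(*ei, g)];
        if (u != v && seen.insert({u, v}).second) boost::add_edge(u, v, dag);
    }

    // 3. Topological sort of the DAG; process class by class.
    std::vector<Graph::vertex_descriptor> order;
    boost::topological_sort(dag, std::back_inserter(order));
    // 'order' comes out in reverse topological order, i.e. sinks first.
}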

> When I've done graphs, it has been enough to let a Graph object (or
> just a std::set<Edge>) own everything.

That just means someone else had to deal with back edges.

Juha Nieminen

Aug 20, 2021, 7:04:20 AM
Siri Cruise <chine...@yahoo.com> wrote:
> Directed graphs are an ideal way of handling complex relations.
> Encode the relation into a graph; use Tarjan's algorithm to find
> strongly connected components; factor the graph by SCCs. The
> resulting reduced graph is a DAG, which is the partial order with
> each SCC being an equivalence class. Do topological sorting on
> the DAG. You can then deal with the original relation equivalence
> class by equivalence class, either from the sources or the sinks. From
> the sinks means every edge in an SCC goes to the same SCC or to a
> node of a previously processed equivalence class.

If you need to be constantly adding and removing elements from that
graph during the runtime of the program, then sure, it's probably
best to allocate individual elements and handle them by reference
(ie. pointer). Or, if for some reason the elements can be of
different types.

However, if what you describe is "build the graph at startup and
then just read its contents during the execution of the program",
it may well be more efficient to put all the elements in one
single array, and then have connections between the elements
with pointers or indices. This requires no memory management
of any kind (other than, obviously, freeing the array when
the graph isn't needed anymore).

There might even be room for low-level optimization by
rearranging the elements in an optimal way in the array so
that they are accessed as linearly as possible in most cases.
(With individually allocated elements you cannot do this
kind of optimization even if you wanted to.)

Moreover, this approach allows for Data-Oriented Design.
If the graph nodes contain several member variables, and
usually you are only interested in particular ones, you could
do the DOD thing and split the elements into their own
arrays, further increasing cache locality.
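
A sketch of the array-based layout (illustrative, not from the post):

#include <vector>

// All nodes live in one contiguous array; edges are indices into it.
// No per-node allocation, no ownership tracking: freeing the vector
// frees the whole graph.
struct Node {
    int value = 0;
    std::vector<int> out;  // indices of successor nodes
};

struct Graph {
    std::vector<Node> nodes;  // built once at startup
    void add_edge(int from, int to) { nodes[from].out.push_back(to); }
};

// Traversal is index arithmetic over one block of memory.
int sum_values(const Graph& g) {
    int sum = 0;
    for (const Node& n : g.nodes) sum += n.value;
    return sum;
}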

Alf P. Steinbach

Aug 20, 2021, 8:26:36 AM
On 20 Aug 2021 03:20, Siri Cruise wrote:
> In article <slrnshtei9.5...@frailea.sa.invalid>,
> Jorgen Grahn <grahn...@snipabacken.se> wrote:
>
>> I suspect the difference is higher up, on the design level. When I
>> design my C++ code I pay attention to ownership and lifetime, and I
>> rarely or never find a need for garbage collection or reference
>> counting/shared_ptr.
>
> Implement a directed graph with dynamic edges and nodes, with
> no natural distinction between forward edges and back edges. Try to figure
> out how to pay careful enough attention to avoid reference
> tracing or counting.

I don't see why one would have to think or pay attention to avoid
reference counting for a directed graph.

Just use Boost Graph, or a DIY thing based on e.g. `std::vector`.

Is it that when you write “dynamic edges and nodes” you mean dynamically
allocated node objects, with the edges represented as pointers to nodes?

We did that at college, early 1980's, finding the shortest route between any
two cities in Norway. We had one DEC Rainbow workstation to do color
graphics on. Otherwise had to use HP monochrome graphics terminals :(

Anyway, if that's what you're thinking of then that is an inefficient
way to do things. E.g. I doubt that Boost Graph does that internally.
But even when that approach is adopted I still fail to see the alleged
practical need for reference counting. The pointers are internal in the
structure. They're not owning pointers. (In passing, for efficient
removal of nodes just let each node keep a list of all edges to it, not
just a list of all edges from it, but better: use Boost Graph).
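
For instance, a minimal Boost Graph sketch, with the graph object owning
everything and no owning pointers anywhere (illustrative):

#include <boost/graph/adjacency_list.hpp>

int main() {
    // bidirectionalS keeps in-edge lists too, so removing a node and all
    // edges touching it needs no reference counting.
    using Graph = boost::adjacency_list<boost::vecS, boost::vecS,
                                        boost::bidirectionalS>;
    Graph g(3);
    boost::add_edge(0, 1, g);
    boost::add_edge(1, 2, g);
    boost::add_edge(2, 1, g);

    boost::clear_vertex(1, g);   // drops every edge into or out of vertex 1
    boost::remove_vertex(1, g);  // then the node itself
}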


> So do reference counting. Now when you
> delete an edge to a node, you have to check whether it only receives
> back edges, and if so, convert a back edge to an edge.

Huh? (Forgive me if I'm dumb here, not yet on first coffee.)


> This
> conversion has nothing to do with the directed graph itself; it is
> an artifact of reference counting.

Someone at one time gave a slightly similar (if I understood you)
example that /actually/ required garbage collection, or else incurred
some heavy complexity.

It was about function representations with parts of functions being
referenced and reused with abandon.

The person who came up with it had a background in functional
programming and hence, I believe, mathematics, and the example was
convincing.


Cheers,

- Alf (BTDT)

mick...@downthefarm.com

Aug 20, 2021, 11:19:22 AM
On Fri, 20 Aug 2021 11:03:59 -0000 (UTC)
Juha Nieminen <nos...@thanks.invalid> wrote:
>Siri Cruise <chine...@yahoo.com> wrote:
>> Directed graphs are an ideal way of handling complex relations.
>> Encode the relation into a graph; use Tarjan's algorithm to find
>> strongly connected components; factor the graph by SCCs. The
>> resulting reduced graph is a DAG, which is the partial order with
>> each SCC being an equivalence class. Do topological sorting on
>> the DAG. You can then deal with the original relation equivalence
>> class by equivalence class, either from the sources or the sinks. From
>> the sinks means every edge in an SCC goes to the same SCC or to a
>> node of a previously processed equivalence class.
>
>If you need to be constantly adding and removing elements from that
>graph during the runtime of the program, then sure, it's probably
>best to allocate individual elements and handle them by reference
>(ie. pointer). Or, if for some reason the elements can be of
>different types.
>
>However, if what you describe is "build the graph at startup and
>then just read its contents during the execution of the program",
>it may well be more efficient to put all the elements in one
>single array, and then have connections between the elements
>with pointers or indices. This requires no memory management
>of any kind (other than, obviously, freeing the array when
>the graph isn't needed anymore).

If only there was a standard library that came with C++ that had containers
that could do something like that.... hmmm....

Öö Tiib

Aug 30, 2021, 2:41:01 AM
Typical graph-manipulation libraries (like that Boost.Graph) allow parts of
a graph to be simply disconnected from each other. There is no requirement
that when a connecting edge is missing (or was erased), the whole
disconnected part must also be removed. Perhaps Siri implied such a requirement?