
static vector (c style)


alessio211734

unread,
Jan 12, 2015, 9:30:26 AM1/12/15
to
Hello,

I would like to create a static member in my class as a c style vector.

class MyClass
{

...
static NearData nearCells[32];

};



I don't know how to declare the static vector in the .cpp file.
I would like that, when I have two instances of MyClass, m1 and m2, and
I do m1 = m2, the NearData array is not copied.

MyClass m1;
MyClass m2;

m1=m2;

Victor Bazarov

unread,
Jan 12, 2015, 10:13:00 AM1/12/15
to
On 1/12/2015 9:30 AM, alessio211734 wrote:
> I would like to create a static member in my class as a c style vector.

I think there is no such thing as "a c style vector". I believe the
term is "an array", and it's the same in both C and C++.

>
> class MyClass
> {
>
> ...
> static NearData nearCells[32];
>
> };
>
>
>
> I don't know how to declare the static vector in the .cpp file.

NearData MyClass::nearCells[32];

(assuming that 'NearData' is available to the compiler here).

> I would like that, when I have two instances of MyClass, m1 and m2, and
> I do m1 = m2, the NearData array is not copied.

Yes, any static data members stay put when instances are copied (either
by construction or assignment).
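
To make that concrete, here is a minimal, self-contained sketch (NearData is
reduced to a dummy struct here, since its real definition isn't shown in the
thread, and the member values are invented for illustration):

#include <iostream>

struct NearData { int cellIndex; };

class MyClass {
public:
    int id = 0;                      // ordinary (non-static) member
    static NearData nearCells[32];   // declaration inside the class
};

// The one and only definition goes in a single .cpp file:
NearData MyClass::nearCells[32];

int main() {
    MyClass m1, m2;
    m2.id = 7;
    m1 = m2;   // copies the non-static member 'id'; the shared static
               // array 'nearCells' is not part of either object and is
               // not copied at all
    std::cout << m1.id << '\n';   // prints 7
}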

>
> MyClass m1;
> MyClass m2;
>
> m1=m2;
>

V
--
I do not respond to top-posted replies, please don't ask

jak

unread,
Jan 12, 2015, 10:35:48 AM1/12/15
to
class myVect
{
public:
char vector[50];
myVect operator=(myVect v)
{
memcpy(vector, v.vector, sizeof(vector));
return *this;
}
};

void f()
{
myVect m1, m2;
strcpy_s(m2.vector, "Alessio");
m1 = m2;
cout << m1.vector << endl;
system("pause");
return;
}


Louis Krupp

unread,
Jan 12, 2015, 11:01:32 AM1/12/15
to
If I understand correctly, you've *declared* nearCells in your header
file, and you'll want to *define* it in the .cpp file that implements
MyClass:

NearData MyClass::nearCells[32];

This might help:

http://www.bogotobogo.com/cplusplus/statics.php

Louis

Richard

unread,
Jan 12, 2015, 12:09:53 PM1/12/15
to
[Please do not mail me a copy of your followup]

jak <ple...@nospam.tnx> spake the secret code
<m90pk2$1sg$1...@speranza.aioe.org> thusly:

> memcpy(vector, v.vector, sizeof(vector));
> strcpy_s(m2.vector, "Alessio");

Please stop writing C code and calling it C++.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

Richard

unread,
Jan 12, 2015, 12:10:41 PM1/12/15
to
[Please do not mail me a copy of your followup]

Louis Krupp <lkr...@nospam.pssw.com.invalid> spake the secret code
<djr7ba18ce38ekgie...@4ax.com> thusly:

>If I understand correctly, you've *declared* nearCells in your header
>file, and you'll want to *define* it in the .cpp file that implements
>MyClass:
>
>NearData MyClass::nearCells[32];
>
>This might help:
>
>http://www.bogotobogo.com/cplusplus/statics.php

This too: <http://en.cppreference.com/w/cpp/language/static>

Melzzzzz

unread,
Jan 12, 2015, 3:16:18 PM1/12/15
to
On Mon, 12 Jan 2015 17:09:43 +0000 (UTC)
legaliz...@mail.xmission.com (Richard) wrote:

> [Please do not mail me a copy of your followup]
>
> jak <ple...@nospam.tnx> spake the secret code
> <m90pk2$1sg$1...@speranza.aioe.org> thusly:
>
> > memcpy(vector, v.vector, sizeof(vector));
> > strcpy_s(m2.vector, "Alessio");
>
> Please stop writing C code and calling it C++.

This is not C++ code?

Vir Campestris

unread,
Jan 12, 2015, 4:42:05 PM1/12/15
to
It is. But... only really because C++ is largely a superset of C. It's
obsolete and dangerous (for example, because there's no check that
m2.vector is big enough to hold "Alessio").

Andy

Chris Vine

unread,
Jan 12, 2015, 7:48:34 PM1/12/15
to
Because the string is a literal and the size of the array is a
compile-time constant larger than the size of the literal, it must be
big enough. It is still unpleasant code, though.

Another objection might be that:

a) The "answer" doesn't actually address the OP's original question,
which was about static members;

b) The proposed assignment operator of myVect is completely redundant,
since a non-static array member of a class is by rule required to be
copied element by element by the default assignment operator (it does
not decay to a pointer copy as seems to have been assumed).
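
To illustrate point b), a small sketch (the member mirrors the earlier
example; note that no assignment operator is written by hand):

#include <iostream>

struct myVect {
    char vector[50];   // no user-declared assignment operator
};

int main() {
    myVect m2 = { "Alessio" };        // aggregate-initialise the array
    myVect m1 = {};
    m1 = m2;                          // implicit operator=: the array is
                                      // copied element by element
    std::cout << m1.vector << '\n';   // prints "Alessio"
}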

Chris

jak

unread,
Jan 13, 2015, 5:38:01 AM1/13/15
to
On 12/01/2015 18:09, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> jak <ple...@nospam.tnx> spake the secret code
> <m90pk2$1sg$1...@speranza.aioe.org> thusly:
>
>> memcpy(vector, v.vector, sizeof(vector));
>> strcpy_s(m2.vector, "Alessio");
>
> Please stop writing C code and calling it C++.
>

class myVect
{
public:
static char s_vector[50];
char us_vector[50];
static bool trunc;

myVect operator=(char s[])
{
trunc = (((string)s).length() + 1 > sizeof s_vector) ? false : true;

for (int i = 0; i < sizeof s_vector; i++)
s_vector[i] = us_vector[i] = s[i];

return *this;
}
};
char myVect::s_vector[50];
bool myVect::trunc;

void f(void)
{
static myVect m1, m2;
m1 = "Mickey Mouse";
m2 = "Donald Duck";
cout << "s: " << m1.s_vector << " us: " << m1.us_vector << endl;
m1 = m2;
cout << "s: " << m1.s_vector << " us: " << m1.us_vector << endl;
system("pause");
return;
}

jak

unread,
Jan 13, 2015, 6:24:34 AM1/13/15
to
A C compiler does not compile those lines of code while a C++ compiler
can. He tells lies. He probably wanted to say that he does not like that
way of writing code because it is too similar to the C code and if he
does not accept that the true power of C++ is C, then he can leave the
C++ and choose for him any other OOP language.

Paavo Helde

unread,
Jan 13, 2015, 11:55:52 AM1/13/15
to
jak <ple...@nospam.tnx> wrote in news:m92shj$b0k$1...@speranza.aioe.org:

> class myVect
> {
> public:
> static char s_vector[50];
> char us_vector[50];
> static bool trunc;
>
> myVect operator=(char s[])
> {
> trunc = (((string)s).length() + 1 > sizeof s_vector) ? false :
true;
>
> for (int i = 0; i < sizeof s_vector; i++)
> s_vector[i] = us_vector[i] = s[i];
>
> return *this;
> }
> };
> char myVect::s_vector[50];
> bool myVect::trunc;
>
> void f(void)
> {
> static myVect m1, m2;
> m1 = "Mickey Mouse";
> m2 = "Donald Duck";
> cout << "s: " << m1.s_vector << " us: " << m1.us_vector << endl;
> m1 = m2;
> cout << "s: " << m1.s_vector << " us: " << m1.us_vector << endl;
> system("pause");
> return;
> }

What's this supposed to be? An example of how NOT to write C++? Note that
this example illustrates very well the dangers of C-style programming:
the loop accesses the s[] array beyond the array end and causes UB.

In addition, operator= makes a copy of the object in the return
statement, which is not what it should do.

In C++ this would read (and note that you don't need to truncate
anything, so trunc member is not needed):

class myVect
{
private:
    static std::string s_vector;
    std::string us_vector;

public:

    myVect& operator=(char s[])
    {
        s_vector = us_vector = s;
        return *this;
    }

    // ...
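
One detail to add: like any static data member, s_vector above still needs
exactly one out-of-class definition in a single .cpp file, for example:

std::string myVect::s_vector;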

hth
Paavo




jak

unread,
Jan 13, 2015, 12:45:17 PM1/13/15
to
Hey... a little more respect. Are you here to give support or to show off
your cafonaggine (boorishness)?

> Note that
> this example illustrates very well the dangers of C-style programming:
> the loop accesses the s[] array beyond the array end and causes UB.
>
--
> In addition, operator= makes a copy of the object in the return
> statement, which is not what it should do.
>
Thanks. This is interesting.
--
> In C++ this would read (and note that you don't need to truncate
> anything, so trunc member is not needed):

The example is mine, and if I don't want to use string, I won't use it.

>
> class myVect
> {
> private:
> static std::string s_vector;
> std::string us_vector;
>
> public:
>
> myVect& operator=(char s[])
> {
> s_vector = us_vector = s;
> return *this;
> }
>
> // ...
>
> hth
> Paavo
>
Apart from what I found interesting, the rest is opinion and, as such,
carries little weight.


Message has been deleted

jak

unread,
Jan 13, 2015, 1:53:34 PM1/13/15
to
On 13/01/2015 19:01, Stefan Ram wrote:
> jak <ple...@nospam.tnx> writes:
>> Hey... a little more respect. Are you here to give support or to show off
>> your cafonaggine (boorishness)?
>
> The other readers of this group may not understand this!
> So, I would like to explain:
>
> cafonaggine: the quality of being a cafone
>
> cafone: rude, vulgar, in bad taste.
>

Thank you, but the translation was not necessary. I wrote in Italian
because I have lost my appreciation for this discussion group. By now
there are few people here who help and discuss, and too many people ready
to rap the knuckles of those who seek clarification about this language.
I will address my questions elsewhere.

Best regards.

Paavo Helde

unread,
Jan 13, 2015, 2:22:10 PM1/13/15
to
jak <ple...@nospam.tnx> wrote in news:m93lis$emo$1...@speranza.aioe.org:

> On 13/01/2015 17:55, Paavo Helde wrote:
>>
>> What's this supposed to be? An example of how NOT to write C++?
>
> Hey... a little more respect. Are you here to give support or to show off
> your cafonaggine (boorishness)?

Sorry, I did not realize you are the same alessio211734 who posted the
original question. I thought you were trying to propose some kind of
solution to him instead.

> The example is mine, and if I don't want to use string, I won't use it.

But you already did use it!

trunc = (((string)s).length() + 1 > sizeof s_vector) ? false :
^^^^^^

Cheers
Paavo

asetof...@gmail.com

unread,
Jan 14, 2015, 4:32:29 AM1/14/15
to
class myVect
{
public:
static char s_vector[50];
char us_vector[50];
static bool trunc;

myVect operator=(char* s)
{int i;

for(i=0; i<49 && s[i]!=0; i++)
s_vector[i]=us_vector[i]=s[i];
s_vector[i]=us_vector[i]=0;
trunc=(s[i]==0?0:1);
return *this;
}
};

asetof...@gmail.com

unread,
Jan 14, 2015, 4:57:41 AM1/14/15
to
myVect& operator=(char* s)

asetof...@gmail.com

unread,
Jan 14, 2015, 5:53:58 AM1/14/15
to
Paavo wrote:
class myVect
{
private:
static std::string s_vector;
std::string us_vector;
public:

myVect& operator=(char s[])
{
s_vector = us_vector = s;
return *this;
}

// ...

hth
Paavo
# static std::string s_vector;
# how can a C++ string,
# a resizing object,
# be static?
# For me, static means that I
# can identify, in the .exe resulting
# from compilation, one piece of that
# file containing the space for that
# string...

jak

unread,
Jan 14, 2015, 6:50:54 AM1/14/15
to
Yes, this is true, but I only used it to joke with Richard, who
accused me of writing like C. So, to avoid using the function
strlen, I thought of using that cast. :)
I do not speak good English, so I struggle to explain; please just
look at the class. I wrote the function only to test the class,
and I use strings for simplification only. In the program, they will
be replaced by an array of structures like this:

/* C style */
struct {
int id;
int pnumber;
unsigned long dev_num;
...
char name[64];
} myStructure[32];

So please try to be understanding with me. I have been writing programs in
the C language for a quarter of a century.


Best Regards.

asetof...@gmail.com

unread,
Jan 14, 2015, 7:38:50 AM1/14/15
to
Jak wrote:
class myVect
{
public:
static char s_vector[50];
char us_vector[50];
static bool trunc;

myVect operator=(char s[])
{
trunc = (((string)s).length() + 1 > sizeof s_vector) ? false : true;

for (int i = 0; i < sizeof s_vector; i++)
s_vector[i] = us_vector[i] = s[i];

return *this;
}
};
char myVect::s_vector[50];
bool myVect::trunc;

void f(void)
{
static myVect m1, m2;
m1 = "Mickey Mouse";
m2 = "Donald Duck";
cout << "s: " << m1.s_vector << " us: " << m1.us_vector << endl;
m1 = m2;
cout << "s: " << m1.s_vector << " us: " << m1.us_vector << endl;
system("pause");
return;
}
# This code here does not compile,
# and I think it should not compile:
# the string-copy algorithm is wrong,
# the statics are not used well, etc.
# [at least for the compiler I use]
# People question some macros I used;
# I question how people like you
# write code that makes difficult
# what is 100% plain and easy,
# no matter the language
# one writes in.

Louis Krupp

unread,
Jan 14, 2015, 10:36:10 AM1/14/15
to
On Wed, 14 Jan 2015 02:53:44 -0800 (PST), asetof...@gmail.com
wrote:
The keyword "static" is "overloaded."

This might help explain it better than I can:

http://www.cprogramming.com/tutorial/statickeyword.html
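
For the specific question above: for a class member, "static" just means one
object shared by the whole class, with static storage duration. The
std::string object itself lives in that one fixed place, but its character
buffer is still allocated dynamically and can grow at run time. A rough
sketch (the class and the names are invented for illustration):

#include <iostream>
#include <string>

class Logger {
public:
    static std::string prefix;   // declaration: one string for the class
};

// Definition (and optional initialisation) in exactly one .cpp file:
std::string Logger::prefix = "log: ";

int main() {
    Logger a, b;                 // neither instance carries its own copy
    Logger::prefix += "[grown at run time] ";
    std::cout << Logger::prefix << '\n';
}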

Louis

jak

unread,
Jan 14, 2015, 12:31:20 PM1/14/15
to
Perhaps it does not compile because I left the various include files out
of the example:

#include <stdio>
#include <string>
#include <iostream>

using namespace std;

I did not include them because I saw that nobody else does.

...or because the main function is absent; it is written simply as:

main() { f(); }


However, I can assure you that what I have written compiles and runs. I'm
using the Visual Studio Express 2013 compiler.

> #the copy string algo is wrong
> #the static is not good used etc

Forgive my English, but this part is not clear to me.

> #[at last for compiler I use]
> #people question on some macro
-----
> #I used I make question on
> #how people like you write code
> #difficult what it is 100% plain
> #easy it is not matter the language
> #one write

What can I answer you? In reality, what I would like to ask you is much
more complicated, so I try to cut the problem into smaller pieces and
understand something of each piece. :(
You could see with your own eyes how difficult it was for me to get answers
to simple questions.

regards.

Christopher Pisz

unread,
Jan 14, 2015, 1:39:10 PM1/14/15
to
The true downfall of C++ is C. Power my arse.
80% of the bugs I encounter in bug trackers stem from C-style code from
people with that mentality who refuse to move on.

Paavo Helde

unread,
Jan 14, 2015, 1:42:21 PM1/14/15
to
jak <ple...@nospam.tnx> wrote in news:m9694o$mgb$1...@speranza.aioe.org:

> Perhaps does not compile because the example I have not left the
> various files included:
>
> #include <stdio>
> #include <string>
> #include <iostream>
>
> using namespace std;
>
> I have not included them because I saw that nobody does this.
>
> ...or because the main function is absent but it is only
> written in this way:
>
> main() { f(); }

If you want more and better feedback it is a good idea to provide
compilable examples.

> you could see with your own eyes how difficult it was for me get
> answers to simple questions.

What do you mean? Your first questions were answered immediately by
Victor Bazarov, and your later posts mostly lacked any questions
whatsoever. Sorry, we are not mind readers here; if you don't ask any
questions, nobody can answer them!

Cheers
Paavo

Jorgen Grahn

unread,
Jan 14, 2015, 2:14:24 PM1/14/15
to
On Wed, 2015-01-14, Christopher Pisz wrote:
> On 1/13/2015 5:24 AM, jak wrote:
>> On 12/01/2015 21:16, Melzzzzz wrote:
>>> On Mon, 12 Jan 2015 17:09:43 +0000 (UTC)
>>> legaliz...@mail.xmission.com (Richard) wrote:
...
>>>> Please stop writing C code and calling it C++.
>>>
>>> This is not C++ code?
>>
>> A C compiler does not compile those lines of code while a C++ compiler
>> can. He tells lies. He probably wanted to say that he does not like that
>> way of writing code because it is too similar to the C code

Probably it's not the /similarity/ per se, but the waste ... why not
use the tools available to you? They're decades old, so not having
time to learn them is a poor excuse.

>> and if he
>> does not accept that the true power of C++ is C, then he can leave the
>> C++ and choose for him any other OOP language.

> The true downfall of C++ is C. Power my arse.
> 80% of the bugs I encounter in bug trackers stem from C-style code from
> people with that mentality whom refuse to move on.

If you want to go philosophical, the true power of C++ is to
understand C, accept its good and bad sides, and move on using
C++, while keeping the good aspects of C in mind.

I see C++ as a language which fixes the obvious flaws in C -- and I
see C as a crippled and, in 2015, unnecessary (but not useless)
dialect of C++.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

asetof...@gmail.com

unread,
Jan 14, 2015, 3:36:58 PM1/14/15
to

Yes, if I add the headers it all compiles
well... But for me the code is
too obscure... I'm not so expert,
so I post how I would write it,
in a way I understand some things...
#include <stdio.h>
#include <string.h>
#include <iostream.h>

class myVect
{
public:
static char s_vector[50];
char us_vector[50];
static bool trunc;

myVect operator=(char s[])
{int i;
trunc = (((string)s).length() + 1 > sizeof s_vector) ? false : true;

for (i = 0; i < sizeof s_vector; i++)
s_vector[i] = us_vector[i] = s[i];

return *this;
}
};

char myVect::s_vector[50];
bool myVect::trunc;

int main(void)
{
static myVect m1, m2;
m1 = "Mickey Mouse";
m2 = "Donald Duck";
cout << "s: " << m1.s_vector << " us: " << m1.us_vector << endl;
m1 = m2;
cout << "s: " << m1.s_vector << " us: " << m1.us_vector << endl;
system("pause");
return 0;
}

__________________________________

#include <stdio.h>
#include <iostream.h>

char s_vectors[50*50]; // 0..2499
int index=0; /* index 0..49 50 string static
of max 50 chars */

class myVect
{public:
char* s_vector;
int trunc;
char us_vector[50];

myVect()
{s_vector=(char*)s_vectors+(50*index);
if(index>=2449) index = 0;
else index+=50;
}

~myVect(){}

myVect& operator=(char* s)
{int i;

for(i=0; i<49 && s[i]!=0; i++)
{s_vector[i]=s[i]; us_vector[i]=s[i];}
s_vector[i]=0;
us_vector[i]=0;
trunc=(s[i]==0?0:1);
return *this;
}

myVect& operator=(myVect& m)
{int i;
for(i=0; i<50; ++i)
{ s_vector[i]= m.s_vector[i];
us_vector[i]=m.us_vector[i];
}
trunc=m.trunc;
return *this;
}

};

int main(void)
{myVect m1, m2;

m1 = "Mickey Mouse";
m2 = "Donald Duck";

cout<<"s: "<<m1.s_vector<<" us: "<< m1.us_vector<<endl;
m1 = m2;
cout<<"s: "<< m1.s_vector<<" us: "<< m1.us_vector<< endl;
return 0;
}

asetof...@gmail.com

unread,
Jan 14, 2015, 3:59:03 PM1/14/15
to
asetofsymbols wrote:
> s_vector=(char*)s_vectors+(50*index);

it should be:
s_vector=(char*)s_vectors+index;
Excuse me if the copy-paste put in
many blank lines; I had not seen them.

jak

unread,
Jan 14, 2015, 4:24:46 PM1/14/15
to
On 14/01/2015 19:42, Paavo Helde wrote:
> jak <ple...@nospam.tnx> wrote in news:m9694o$mgb$1...@speranza.aioe.org:
>
>> Perhaps does not compile because the example I have not left the
>> various files included:
>>
>> #include <stdio>
>> #include <string>
>> #include <iostream>
>>
>> using namespace std;
>>
>> I have not included them because I saw that nobody does this.
>>
>> ...or because the main function is absent but it is only
>> written in this way:
>>
>> main() { f(); }
>
> If you want more and better feedback it is a good idea to provide
> compilable examples.
>

OK. I will treasure your words for the next time. Thank you.

>> you could see with your own eyes how difficult it was for me get
>> answers to simple questions.
>
> What do you mean? Your first questions were answered immediately by
> Victor Bazarov, and your later posts mostly lacked any questions
> whatsoever. Sorry, we are not mentals here, if you don't ask any
> questions nobody can answer them!
>
> Cheers
> Paavo
>

In truth, I had trouble figuring out whom I had answered and who had
answered me. So I abandoned the browser and used a program that manages
newsgroups as trees. This is because here there is the strange habit
that when you talk to one person, the answer comes from another person.
The impression is ugly. You feel like prey in the middle of a pack of wolves.

jak

unread,
Jan 14, 2015, 4:38:50 PM1/14/15
to
I am really sorry to note that you can not believe that is true the
opposite. I also think that you were very unlucky to have to put your
hands in the code who have written clueless.

regards

jak

unread,
Jan 14, 2015, 4:42:20 PM1/14/15
to
Just now I was loading your code into an editor to compact it. :-)

jak

unread,
Jan 14, 2015, 5:09:25 PM1/14/15
to
Excuse me, but let me rewrite the phrase. What is written is not what I meant:

I'm really sorry that you cannot believe that the opposite is also true,
and I also think that you were very unlucky to have to put your hands into
code which was written by incapable people.

(I write English with the help of Google Translate, and sometimes the result
is different from what I want to write. Sorry.)

Christopher Pisz

unread,
Jan 14, 2015, 6:59:20 PM1/14/15
to
As an intelligent creature, I like to think that I learn from
experience, as all human beings should, while the political correctness
movement would have us believe such things are terrible (judgement!).

I would only believe it to be "unlucky" had it been one project, or two
projects, or one person, or two people, or even a handful of either. The
case is that in 20 years, I've moved around quite a bit, I've worked
with hundreds of people and on at least 20 significant projects. While I
am not trying to toot my horn, I am trying to say that I have a more
than adequate sample size to draw such a conclusion at least to the
point where it is worth voicing.

I am not pulling 80% out of my ass. I literally sat with the bug
trackers, read through the bug reports, the back and forth, and the
solutions, for thousands of tickets and in 80% of the cases it pointed
to C style code. I kept track out of the need to prove that firing said
C-style coder was worthwhile. In 4 of the 5 places I did this, there was
_one_ and only _one_ C style coder on a C++ team. Usually an old man
that had been there for years and years, which had made such a mess that
the company could not get rid of him for fear that no one could ever
figure out the secret lucky charms encoder rings he had littered the
code with.

These guys would always have arguments to the point spit was flying
through the air and the veins in foreheads looked like they were going
to burst. They would scream and yell "Efficiency!" I hate working with
those people. I truly do. I cannot fathom how many man hours were wasted
and how much money customers were charged, because some old stubborn
bastard would not learn new tricks. When I see any trace of that
mentality even in the newsgroup it enrages me and I must say something.

In summary, don't be sorry. Stop the c-style coding instead.



Öö Tiib

unread,
Jan 15, 2015, 1:55:01 AM1/15/15
to
On Thursday, 15 January 2015 01:59:20 UTC+2, Christopher Pisz wrote:
> As an intelligent creature, I like to think that I learn from
> experience, as all human beings should, while the political correctness
> movement would have us believe such things are terrible (judgement!).

In my experience, by far the worst piece of code that I have ever seen was
written in C#. On the other hand, I like the code of the Python interpreter,
for example, despite it being C. It is not the language's fault that a
programmer does not care about readability and robustness.

jacob navia

unread,
Jan 15, 2015, 2:01:04 AM1/15/15
to
On 15/01/2015 00:59, Christopher Pisz wrote:
> I literally sat with the bug trackers, read through the bug reports, the
> back and forth, and the solutions, for thousands of tickets and in 80%
> of the cases it pointed to C++ style code. I kept track out of the need to
> prove that firing said C++-style coder was worthwhile. In 4 of the 5
> places I did this, there was _one_ and only _one_ C++ style coder on a HyperBasic
> team. Usually an old man that had been there for years and years, which
> had made such a mess that the company could not get rid of him for fear
> that no one could ever figure out the secret lucky charms encoder rings
> he had littered the code with.

On 15/01/2035 00:59, we spoke about him in the cafeteria.

John: What was his name?

David: Who? The C++ coder? I think it was Pisz, Christopher Pisz. Why?

John: Ahh that old guy? I remember seeing him at the cafeteria, always
talking about C++ and why HyperBasic was wrong all along... I think he
couldn't adapt to the new coding standards the company has to support...

David: Yes, I remember, he would always have arguments to the point spit
was flying through the air and the veins in foreheads looked like they
were going to burst. He would scream and yell "Efficiency!" and other
irrelevant stuff. I mean, who cares about that now?

John: Well, he came from the time when HyperBasic wasn't even
conceivable, it would have taken a computer of that time a whole DAY to
print hello world in HyperBasic. Yes, he was completely outdated that
guy. It was a good decision to fire him.

David: Anyway, I prefer young people for programming. They never knew
about programming languages, and HyperBasic's restricted English input
comes naturally to their generation, which has grown up with computers
since they were born. All those old hands that needed a computer
language are now obsolete, thank goodness.

John: Yes, anyone can be a programmer now; HyperBasic will understand
any instructions in a few milliseconds using the new generation of quantum
machines. Programming is finished as a profession.

John and David: Yes, that's the way of the future!




... the future of youth is old age Christopher. I wish you a long life.


Jorgen Grahn

unread,
Jan 15, 2015, 2:04:11 AM1/15/15
to
On Wed, 2015-01-14, Christopher Pisz wrote:
> On 1/14/2015 3:38 PM, jak wrote:
...
>> I also think that you were very unlucky to have to put your
>> hands in the code who have written clueless.

I haven't had /exactly/ your experience, but I've seen enough to find
what you're saying believable. Sadly.

Jorgen Grahn

unread,
Jan 15, 2015, 2:17:26 AM1/15/15
to
I don't think anyone argues that you cannot use C well[0]: the problem
here was people using C++ as if it was (more or less) C.

/Jorgen

[0] Well, not in this part of this thread, anyway.

Jorgen Grahn

unread,
Jan 15, 2015, 2:25:18 AM1/15/15
to
On Thu, 2015-01-15, jacob navia wrote:
> On 15/01/2015 00:59, Christopher Pisz wrote:
>> I literally sat with the bug trackers, read through the bug reports, the
>> back and forth, and the solutions, for thousands of tickets and in 80%
>> of the cases it pointed to C++ style code. I kept track out of the need to
>> prove that firing said C++-style coder was worthwhile. In 4 of the 5
>> places I did this, there was _one_ and only _one_ C++ style coder on a HyperBasic
>> team. Usually an old man that had been there for years and years, which
>> had made such a mess that the company could not get rid of him for fear
>> that no one could ever figure out the secret lucky charms encoder rings
>> he had littered the code with.

You're falsifying what he wrote, which was:

> [...] I literally sat with the bug
> trackers, read through the bug reports, the back and forth, and the
> solutions, for thousands of tickets and in 80% of the cases it pointed
> to C style code. I kept track out of the need to prove that firing said
> C-style coder was worthwhile. In 4 of the 5 places I did this, there was
> _one_ and only _one_ C style coder on a C++ team. Usually an old man
> that had been there for years and years, which had made such a mess that
> the company could not get rid of him for fear that no one could ever
> figure out the secret lucky charms encoder rings he had littered the
> code with.

So *plonk*. For good, this time.

/Jorgen

David Brown

unread,
Jan 15, 2015, 3:23:12 AM1/15/15
to
C++ fixes /some/ of the obvious flaws in C, but not all - and like any
system that is big, complex and powerful, it has flaws of its own.
IMHO, the true power of C++ is the same as with any other programming
language - you aim to understand it and find a good balance of the
features that suit your needs and abilities for the work at hand.
Understanding the C subset and background is part of that, but so are
other C++ features. You should use C++ features where they improve on
the code compared to C-style features - but equally one should not avoid
a C-style array just because it happens to look like C code!

C is in no way a "crippled" language, and it is not unnecessary - but it
has remained largely static since C99. Tools have changed and improved,
but the language is mostly the same - and that is one of its key
advantages. If I write modern C++, I need to make sure I have a very
recent compiler that supports the features - and anyone working with the
code has to have learned the new features. But if I write good, clear,
modern C code, it will compile on anything, and any C programmer can
work with it.

C++ has been getting many more features over time - and that is an
advantage for C++. You can write neater, clearer, faster, and safer
code with C++11 than with C++03, and C++14 and C++17 continue to improve
the language.

So while I see that there is steadily more scope for the use of C++ in
traditional C domains (such as in an increasing proportion of embedded
development), C is far from "crippled" or "unnecessary".


jacob navia

unread,
Jan 15, 2015, 3:25:16 AM1/15/15
to
On 15/01/2015 08:25, Jorgen Grahn wrote:
> You're falsifying what he wrote

How could he have written about the HyperBasic team?
It is so obvious that nobody would have doubted that I wrote that citation.
I did not want to edit it much, though, just some MINIMAL changes and the
introduction of the HyperBasic team.

I am just setting his words in the context of 35 years in the future.

And my post is about AGE too, because he always says "OLD" C
programmers, as if he would stay young forever, which he apparently believes.

HYPERBASIC is coming, and with it the end of all need for computers to
have some CODE to understand the wishes of humans. That will be hard
to swallow for programmers, when everything they were doing becomes
completely obsolete.

Yes, my post is about our relationship to technical advances and how we
treat them, and above all how we treat people who apparently do not
want to change their ways of "coding".

How easy it is then for you to ignore everything completely, misunderstand
some stuff and then, of course,

*PLONK*

Because you just can't argue anything, do not want to understand
anything, and yes, you, like him, are in great fear of becoming obsolete,
and the only way of "avoiding obsolescence" is FIRING people that you do
not like or understand. That's why you do not want to discuss anything here.

What could you possibly say?


David Brown

unread,
Jan 15, 2015, 3:34:43 AM1/15/15
to
On 15/01/15 08:17, Jorgen Grahn wrote:
> On Thu, 2015-01-15, Öö Tiib wrote:
>> On Thursday, 15 January 2015 01:59:20 UTC+2, Christopher Pisz wrote:
>>> As an intelligent creature, I like to think that I learn from
>>> experience, as all human beings should, while the political correctness
>>> movement would have us believe such things are terrible (judgement!).
>>
>> In my experience far worst piece of code that I have ever seen was
>> written in C#. On the other hand the code of Python interpreter for
>> example I like, despite it is C. It is not language's fault that
>> programmer does not care about readability and robustness.
>
> I don't think anyone argues that you cannot use C well[0]: the problem
> here was people using C++ as if it was (more or less) C.
>

I have seen C++ used successfully as "ABC", or "A better C". But then
it was arguably using the C++ compiler to add features to C (in the same
way that "const" was copied from C++ to C), rather than writing C++ in a
C style. You can use namespaces, strong typing, const, function
overloads, default arguments, and various other C++ features as a way of
improving C - while avoiding C++ features such as classes,
constructors/destructors, new/delete, templates, and exceptions.

Is this a good idea? I don't know - but I have seen it used well. As
has been noted, you can write good and bad code in any language,
including a mixture of languages.
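
As a rough illustration of that style (my own toy example, not code from any
particular project): C-like code that borrows a few C++ features such as
namespaces, const references, overloading and default arguments, while
avoiding classes, new/delete, templates and exceptions:

#include <cstdio>

namespace geometry {

struct Point { double x; double y; };

// Function overloading instead of differently-named C functions:
double area(double width, double height) { return width * height; }
double area(const Point& a, const Point& b) {
    return area(b.x - a.x, b.y - a.y);
}

// Default argument, const reference parameter:
void print(const Point& p, const char* label = "point") {
    std::printf("%s: (%g, %g)\n", label, p.x, p.y);
}

} // namespace geometry

int main() {
    geometry::Point a = { 0.0, 0.0 };
    geometry::Point b = { 3.0, 4.0 };
    geometry::print(b, "corner");
    std::printf("area: %g\n", geometry::area(a, b));
    return 0;
}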


Melzzzzz

unread,
Jan 15, 2015, 6:54:07 AM1/15/15
to
On 14 Jan 2015 19:14:11 GMT
Hm, a lot of programs are written in C. According to TIOBE, C is the most
popular language, more so than C++. One thing that keeps me programming
in C++ is that it has great compatibility with C and therefore
interfaces very well with C libraries, while also allowing
higher-level code in fewer lines.
I use C++ mainly because I have to write fewer lines of code.
The power of C++ over C is in fewer lines of code to do the same thing, IMO ;)
I mean, it is easier to abstract things...

>
> /Jorgen
>


jak

unread,
Jan 15, 2015, 9:16:35 AM1/15/15
to
On 15/01/2015 00:59, Christopher Pisz wrote:
> On 1/14/2015 3:38 PM, jak wrote:
>> On 14/01/2015 19:38, Christopher Pisz wrote:
>>> On 1/13/2015 5:24 AM, jak wrote:
>>>> On 12/01/2015 21:16, Melzzzzz wrote:
>>>>> On Mon, 12 Jan 2015 17:09:43 +0000 (UTC)
>>>>> legaliz...@mail.xmission.com (Richard) wrote:
>>>>>
>>>>>> [Please do not mail me a copy of your followup]
>>>>>>
>>>>>> jak <ple...@nospam.tnx> spake the secret code
>>>>>> <m90pk2$1sg$1...@speranza.aioe.org> thusly:
...
First of all, I would like to make clear that I am Italian and
that I am trying to respond to you in English.
You talked in your post about your career path. Okay. I'd like you to
understand that if I am here with questions, the reason is that I do
not want to put limits on my knowledge of programming. I would like
to let you know that my programming experience with the C language is quite
vast. I was writing drivers and daemons for Linux when its name was
Xenix, when no one had yet thought to remove the cobwebs from its code
and give it a new name. I also know other OOP languages, such as C#, for
example. But now, unfortunately, people have asked me to fix a program
written in C++, so you can see that the real problem is not the
programming language but the head of those who use it. :-)

Melzzzzz

unread,
Jan 15, 2015, 9:51:47 AM1/15/15
to
On 1/15/15 3:16 PM, jak wrote:
> I was writing drivers and daemons for Linux when its name was
> Xenix.

Hahahahhahahha ;)

Ian Collins

unread,
Jan 15, 2015, 12:58:24 PM1/15/15
to
Christopher Pisz wrote:
>
> These guys would always have arguments to the point spit was flying
> through the air and the veins in foreheads looked like they were going
> to burst. They would scream and yell "Efficiency!" I hate working with
> those people. I truly do. I cannot fathom how many man hours were wasted
> and how much money customers were charged, because some old stubborn
> bastard would not learn new tricks. When I see any trace of that
> mentality even in the newsgroup it enrages me and I must say something.

Those dinosaurs would be (and possibly still are!) having the same
arguments on C projects. That mentality transcends language boundaries.

--
Ian Collins

Richard

unread,
Jan 15, 2015, 1:26:29 PM1/15/15
to
[Please do not mail me a copy of your followup]

Christopher Pisz <nos...@notanaddress.com> spake the secret code
<m96d36$cpa$1...@dont-email.me> thusly:

>The true downfall of C++ is C. Power my arse.
>80% of the bugs I encounter in bug trackers stem from C-style code from
>people with that mentality whom refuse to move on.

This is the point I was making. Listen to Stroustrup's advice (sorry,
haven't gotten permission from A-W to put it online yet). If you
listen to the guy who *created* the language he will tell you to stop
using C-isms (memset, C-style strings, <string.h>, strcpy, etc.) and
use the features that have been in C++ for over 15 years.

You ignore his advice at your own peril.
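
For instance (a before-and-after sketch of my own, not Stroustrup's code):
the strcpy/memset style used earlier in the thread maps directly onto
std::string and std::array:

#include <array>
#include <iostream>
#include <string>

int main() {
    // C-ism:  char name[50]; strcpy(name, "Alessio");
    std::string name = "Alessio";       // grows as needed, no overflow risk

    // C-ism:  int counts[32]; memset(counts, 0, sizeof counts);
    std::array<int, 32> counts = {};    // value-initialised, knows its size

    std::cout << name << ' ' << counts.size() << '\n';
    return 0;
}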
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

jak

unread,
Jan 15, 2015, 1:28:16 PM1/15/15
to
(rofl)

jak

unread,
Jan 15, 2015, 1:45:56 PM1/15/15
to
On 15/01/2015 15:51, Melzzzzz wrote:
I still keep a copy of it on 5 1/4" floppy disks, with their drives,
in a drawer. I am a romantic... :-D

asetof...@gmail.com

unread,
Jan 15, 2015, 4:27:09 PM1/15/15
to
Here, in http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html,
when one looks at the language graphs at the end of that page, the
C language over the years 2002-2014 is almost constant, whereas C++
over the same years seems to decline, as Java does... Or did I see
it wrong?

David Brown

unread,
Jan 15, 2015, 4:56:13 PM1/15/15
to
I have certainly seen the same thing from assembly programmers when
discussing C.

Richard

unread,
Jan 15, 2015, 6:17:00 PM1/15/15
to
[Please do not mail me a copy of your followup]

Ian Collins <ian-...@hotmail.com> spake the secret code
<chqddi...@mid.individual.net> thusly:
And the whole thing is silly unless you have performance measurements
to back up your assertions. Otherwise, it's all just wishful thinking.

I'm all in favor of doing things efficiently. I work in graphics and
efficiency matters for these kinds of systems. However, armchair
pontificating about efficiency or obsessing about single bytes or
function calls is not how you get the efficiency that matters. If you
care about efficiency and want to be real about it, then I suggest
this book:

"The Software Optimization Cookbook: High Performance Recipes for
IA-32 Platforms", 2nd Edition
by Richard Gerber, Aart J. C. Bik, Kevin Smith, Xinmin Tian
<http://amzn.to/1xuccG9>

It's from 2005, but the discussion is still relevant today because it
mostly boils down to proper care and feeding of the data and
instruction caches on a CPU. When game developers scream "no virtual
functions!" that is their simplistic takeway from the real advice of
"keep your cache hot". The latter is what you need to remember, not
simplistic bugaboos about particular language features.

Christopher Pisz

unread,
Jan 15, 2015, 7:57:36 PM1/15/15
to
I've got about 15 years left before I don't care anymore what happens
and what the technology is. I'm no spring chicken, but I am not clinging
to what I did in 1970 either.

I already have another generation wanting to talk to me about
reflection, NuGet, class extensions, and other things I haven't had the
pleasure of bringing into my bubble. The difference is that they are
.NET programmers and they program their .NET projects. I am not claiming
my C++ code is .NET, would work in .NET, is more efficient than .NET,
and they aren't claiming anything similar. There is no "C++ is valid
.NET code" argument to be had.

I don't go into their projects and litter them with bugs floundering
around without knowing their way of doing things. If I need to go and
edit something, I will gladly go get the 20 something year old developer
and tell him, "I am not sure if this is the right way of doing it or not
in your .NET project, so I wanted you to take a look."
I do not turn to them in arrogance and say things like, "Humph, this
whole reflection business can't be very efficient! I don't think we
should ever use this in our code. Let's instead embed class IDs in every
class we write and make up an encoding system so we can squeeze it into
5 bytes or less. We did that in 1995 and it worked great."

http://christopherpisz.ddns.net/Programming/CPlusPlu/WhatIsCSlashCplusPlus.aspx





David Brown

unread,
Jan 16, 2015, 4:38:52 AM1/16/15
to
<http://christopherpisz.ddns.net/Programming/CPlusPlus/WhatIsCSlashCplusPlus.aspx>

(I haven't read the page yet, but I will do so.)


Jorgen Grahn

unread,
Jan 17, 2015, 5:00:55 AM1/17/15
to
Surely almost anyone can get at least to the C++98 point today?
I've been on Linux for too long (a decade) but it seems to me the
market for non-Unix embedded environments is shrinking, and that gcc
(including the C++ compiler) is replacing proprietary C compilers
inside that market, too.

> - and anyone working with the
> code has to have learned the new features. But if I write good, clear,
> modern C code, it will compile on anything, and any C programmer can
> work with it.

You're right: /that/ last thing is a reason C isn't unnecessary.
There are plenty of good C programmers around, and you sometimes want
to cooperate with them. No, scratch that: you /frequently/ want to
cooperate with them.

At the same time, with those programmers, I cannot help sometimes
thinking "wow, she would get even more stuff done if she spent some
time learning C++!". C++ is, to me, a logical next step. If you're
good with C, you'll be even better with C++, and nothing you've
learned so far is wasted.

> C++ has been getting many more features over time - and that is an
> advantage for C++. You can write neater, clearer, faster, and safer
> code with C++11 than with C++03, and C++14 and C++17 continue to improve
> the language.
>
> So while I see that there is steadily more scope for the use of C++ in
> traditional C domains (such as in an increasing proportion of embedded
> development), C is far from "crippled" or "unnecessary".

I retract my statement about "unnecessary", but I still maintain the
"crippled" part. It's not derogatory. It's just how I see my own C
programming: I know how I would have written the code in C++, but I
have to translate it to less expressive C code.

C can, objectively, be seen as more or less a small subset of C++, and
C++ is readily available for free to almost everybody. In that sense C
is crippled.

Of course I have to make my C code more or less idiomatic C -- if I
tried to emulate C++ in C I would be as rude as the C programmers
writing C++ code as if it was C -- but I cannot ignore the lessons C++
taught me, about the merits of type safety and so on.

And I cannot pretend, when I'm writing yet another linked-list
implementation, that I'm not wasting my hours on this earth on
something that's strictly not necessary.

Martijn Lievaart

unread,
Jan 17, 2015, 7:15:17 AM1/17/15
to
On Thu, 15 Jan 2015 23:16:49 +0000, Richard wrote:

> [Please do not mail me a copy of your followup]
>
> Ian Collins <ian-...@hotmail.com> spake the secret code
> <chqddi...@mid.individual.net> thusly:
>
>>Christopher Pisz wrote:
>>>
>>> These guys would always have arguments to the point spit was flying
>>> through the air and the veins in foreheads looked like they were going
>>> to burst. They would scream and yell "Efficiency!" I hate working with
>>> those people. I truly do. I cannot fathom how many man hours were
>>> wasted and how much money customers were charged, because some old
>>> stubborn bastard would not learn new tricks. When I see any trace of
>>> that mentality even in the newsgroup it enrages me and I must say
>>> something.
>>
>>Those dinosaurs would be (and possible still are!) having the same
>>arguments on C projects. That mentality transcends language boundaries.
>
> And the whole thing is silly unless you have performance measurements to
> back up your assertions. Otherwise, it's all just wishful thinking.

Yes, the three laws of optimization: Profile, profile, profile.

I currently work with SO-DIMM sized embedded computers. The things are so
damned powerful they run Debian and I do work in Perl and shell. No need
to shave off another tenth of a millisecond if my application is fine with
response times around a second and I get millisecond response times
anyway.

If performance matters for your application you should know why and
hopefully are able to measure it.

There is a case where efficiency matters and you may not even be able to
measure it (meaningfully, easily). That is when writing general stuff:
OSes, general libraries and interpreted languages, for instance.

That sucks, but it's still possible to get some kind of measurements to
make some statements about efficiency. See for instance how the Perl and
Linux communities (and probably dozens of others) handle this.

In any case, anecdotal evidence is the worst kind of optimization
technique. It just does not work. It introduces all kinds of new,
hard-to-find bugs while probably only slowing things down and defeating
maintainability.

There is only one good way to do good optimization. Understand what you
are doing and measure to tell if you are correct and it really works. If
that is impossible, you may fall back to you don't know what you are
doing, but will gain insight through measurements.

[snip]

> It's from 2005, but the discussion is still relevant today because it
> mostly boils down to proper care and feeding of the data and instruction
> caches on a CPU. When game developers scream "no virtual functions!"
> that is their simplistic takeway from the real advice of "keep your
> cache hot". The latter is what you need to remember, not simplistic
> bugaboos about particular language features.

I'll take your word for it, as you seem to know what you are talking
about (and keep your caches hot is on the micro level the most important
optimization). However, in this particular instance, I remember that some
older (extinct) microprocessors had rather slow indirect function calls,
so that may also spark those 'no virtual functions' rants. In the end,
profile, profile, profile.

M4

Jorgen Grahn

unread,
Jan 17, 2015, 9:30:07 AM1/17/15
to
On Thu, 2015-01-15, jacob navia wrote:
> On 15/01/2015 08:25, Jorgen Grahn wrote:
>> You're falsifying what he wrote
>
> How could he have written about the HyperBasic team?
> That's so obvious nobody would have doubted that I wrote that citation.

If you quote someone and use the > convention, I'm assuming that what
you're quoting is exactly what he actually wrote and what he intended
to convey, modulo spacing, line breaks and the odd [...] abbreviation.
That's what I learned in school, too -- quoting is a serious business.

When reading that stuff, I was really confused for a while: "Does
Christopher P. really hate C++? That's not the impression I got
from his earlier postings." My newsreader has a "previous in
thread" feature which showed me that you were misquoting him -- and
it now seems you did it on purpose.

> I did not want to edit it much though, just some MINIMAL changes and the
> introduction of the HyperBasic team.
>
> I am just setting his words in the context of 35 years in the future.

And that's why I'm no longer interested in seeing messages from you.
I wish you all the luck, but I don't have anything to say to you from
now on.

David Brown

unread,
Jan 19, 2015, 3:37:36 AM1/19/15
to
By "modern C++", I meant "C++11" or above, rather than C++98/C++03.
There are a fair number of features introduced in C++11, and enhanced in
C++14 (and C++17), that greatly improve the language (in my humble and
not too experienced opinion).

> I've been on Linux for too long (a decade) but it seems to me the
> market for non-Unix embedded environments is shrinking, and that gcc
> (including the C++ compiler) is replacing proprietary C compilers
> inside that market, too.

gcc (and occasionally clang) is certainly taking more of the embedded C
and C++ space. In particular, it dominates the ARM development market,
and is also very popular for AVR and msp430, and for new architectures,
gcc and/or clang is almost always the starting point for the tools. But
there are lots of parts of the embedded world where gcc is rarely seen.
In safety-critical or automotive industries, companies are extremely
conservative - suppliers like Green Hills or IAR dominate even when gcc
gives a better compiler. In legal terms, if something goes wrong, "I
paid lots of money for an "industry standard" certified development
tool" is a better defence than "I picked the tool with fewest bugs".
And for smaller cpus, like the infamous 8051, there are no gcc ports.

One of the reasons for this is that C++ is getting increasingly
difficult to parse and implement. Proprietary vendors simply cannot
keep up with the development costs, and do not try.

And for targets which are not dominated by gcc, C++ support is generally
limited and expensive. It seldom goes beyond C++03 standards, and C++
support can cost many times the price of a C compiler. (For example,
Metrowerks for Freescale's chips is free for most targets up to a quite
usable code size limit, as long as you stick to C - write a single line
of C++, and you need $5K for a permanent license.)


Much of the embedded industry can be incredibly backwards regarding
language support. Two or three years ago I saw an advert for the latest
version of a DSP development suite, costing multiple $K - "now
supporting most of C99" (no C++ of any sort). MISRA-C, the coding
standard required for automotive development in the EU, upgraded to C99
support in 2012. At our company we have newly-purchased development
tools (for conformance testing of a communications stack) that can only
be run on DOS - without any virtualisation.

And while in the *nix world it is not uncommon to find C code whose
heritage goes back 20 years, in the embedded world the same code needs
to be compiled on the 20 year old tools for the 20 year old
architecture. (I think modification of 19 year old code is my personal
record.)

For this sort of thing, language stability in C is clearly an advantage.
Arguably, picking the current best supported C++ version and sticking
with that for the next 20 years is a better choice - and in some cases,
that is true. But a new C programmer educated and trained in C11 would
have little problem understanding and working with C90 code, while a new
C++ programmer trained only in C++14 would find C++98 code a different
language. There are pros and cons here.

For other types of embedded development, the power and flexibility of
modern C++ is essential - at the other end of the scale from these
long-term systems, there are embedded products that get 3 months of
development, three months of sales, and then they are obsolete.


>
>> - and anyone working with the
>> code has to have learned the new features. But if I write good, clear,
>> modern C code, it will compile on anything, and any C programmer can
>> work with it.
>
> You're right: /that/ last thing is a reason C isn't unnecessary.
> There are plenty of good C programmers around, and you sometimes want
> to cooperate with them. No, scratch that: you /frequently/ want to
> cooperate with them.

(Of course, there are plenty of /bad/ C programmers around too!)

>
> At the same time, with those programmers, I cannot help sometimes
> thinking "wow, she would get even more stuff done if she spent some
> time learning C++!". C++ is, to me, a logical next step. If you're
> good with C, you'll be even better with C++, and nothing you've
> learned so far is wasted.

I am not sure that is true. C is a far simpler language than C++ - for
the same amount of time and effort, you can be an expert C programmer or
a mediocre C++ programmer. Which is best for the job? The skills
needed for good C programming and good C++ programming are highly
related, but not identical.

And while a good C++ program has many advantages over a good C program,
a /bad/ C program is, I think, easier to understand and fix than a /bad/
C++ program.

>
>> C++ has been getting many more features over time - and that is an
>> advantage for C++. You can write neater, clearer, faster, and safer
>> code with C++11 than with C++03, and C++14 and C++17 continue to improve
>> the language.
>>
>> So while I see that there is steadily more scope for the use of C++ in
>> traditional C domains (such as in an increasing proportion of embedded
>> development), C is far from "crippled" or "unnecessary".
>
> I retract my statement about "unnecessary", but I still maintain the
> "crippled" part. It's not derogatory. It's just how I see my own C
> programming: I know how I would have written the code in C++, but I
> have to translate it to less expressive C code.

"Crippled" implies that the language has changed from being a fully
working and useful language into something more limited and sub-optimal.
The rise of an alternative language, C++, that is often a better choice
than C does not mean that C is crippled. It merely means that C has
stayed still while C++ has moved on.

>
> C can, objectively, be seen as more or less a small subset of C++, and
> C++ is readily available for free to almost everybody. In that sense C
> is crippled.

No. EC++ was crippled, because they started with C++ and took out many
useful parts for no good reason. C is not crippled, because it is the
same as it always has been.

>
> Of course I have to make my C code more or less idiomatic C -- if I
> tried to emulate C++ in C I would be as rude as the C programmers
> writing C++ code as if it was C -- but I cannot ignore the lessons C++
> taught me, about the merits of type safety and so on.

There is no single "correct" way to write C or C++ - and IMHO
well-written code in any language uses the language, the compiler and
other tools as "safely" as possible.

Richard

unread,
Jan 19, 2015, 6:39:11 PM1/19/15
to
[Please do not mail me a copy of your followup]

Martijn Lievaart <m...@rtij.nl.invlalid> spake the secret code
<6b0qob-...@news.rtij.nl> thusly:

>On Thu, 15 Jan 2015 23:16:49 +0000, Richard wrote:
>
>> [...] When game developers scream "no virtual functions!"
>> that is their simplistic takeway from the real advice of "keep your
>> cache hot". The latter is what you need to remember, not simplistic
>> bugaboos about particular language features.
>
>I'll take your word for it, as you seem to know what you are talking
>about (and keep your caches hot is on the micro level the most important
>optimization). However, in this particular instance, I remember that some
>older (extinct) microprocessors had rather slow indirect function calls,
>so that may also spark those 'no virtual functions' rants. In the end,
>profile, profile, profile.

In this case, I am talking about a particular discussion I had with
some game developers where they yelled out how virtual functions were
bad and I drilled down to find out the real problem which was keeping
your cache hot. Virtual functions and hot caches are not
incompatible, but if you blindly use virtual functions all over the
place without regard to how it affects your cache, then you can have
problems. They may think that they "solved" the problem by banishing
virtual functions, but when you banish virtual functions you're forced
to organize your code differently and it was the different
organization that gave them hot caches, not the banishment of virtual
functions.

Again, it comes down to measurement and understanding system performance
as a whole and not simplistically avoiding things like virtual functions,
std::vector or C++ for that matter. But hey, it's more "exciting" to
screech against virtual functions than it is to repeat the
time-honored advice of keeping your caches hot.

Öö Tiib

unread,
Jan 19, 2015, 7:48:48 PM1/19/15
to
The problem is when people use 'virtual' where run-time polymorphism isn't
needed at all. If run-time polymorphism is needed then virtual functions are
commonly more efficient than the typical alternatives. Typical alternatives
are done with "type" or "kind" member and then switch-case or if-else or
lookup-in-table to find out the correct behaviour. Such "polymorphism"
is worse to read and slower than one level of additional indirection from
virtual call.
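
To make the comparison concrete, a small sketch of the two styles (the shapes
are just a stand-in example; no performance claim is attached to it):

#include <iostream>

// Hand-rolled "polymorphism": a kind tag plus a central switch.
struct ShapeTagged {
    enum Kind { Circle, Square } kind;
    double size;
};

double area(const ShapeTagged& s) {
    switch (s.kind) {
    case ShapeTagged::Circle: return 3.14159265 * s.size * s.size;
    case ShapeTagged::Square: return s.size * s.size;
    }
    return 0.0;
}

// The same thing with run-time polymorphism: one indirect (virtual) call,
// and no central switch to keep in sync when a new shape is added.
struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};

struct Circle : Shape {
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265 * r * r; }
    double r;
};

struct Square : Shape {
    explicit Square(double s) : s(s) {}
    double area() const override { return s * s; }
    double s;
};

int main() {
    ShapeTagged t = { ShapeTagged::Circle, 1.0 };
    Circle c(1.0);
    const Shape& s = c;
    std::cout << area(t) << ' ' << s.area() << '\n';
}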

Martijn Lievaart

unread,
Jan 20, 2015, 6:46:00 AM1/20/15
to
On Mon, 19 Jan 2015 16:48:32 -0800, Öö Tiib wrote:

> The problem is when people use 'virtual' where run-time polymorphism
> isn't needed at all. If run-time polymorphism is needed then virtual
> functions are commonly more efficient than the typical alternatives.
> Typical alternatives are done with "type" or "kind" member and then
> switch-case or if-else or lookup-in-table to find out the correct
> behaviour. Such "polymorphism"
> is worse to read and slower than one level of additional indirection
> from virtual call.

True, but on one point. Such "polymorphism" MAY be slower. I have seen
plenty of cases where it wasn't.

Profile, profile, profile.

M4

Öö Tiib

unread,
Jan 20, 2015, 11:59:49 AM1/20/15
to
Dinosaur switch-cases have been always much slower.

> Profile, profile, profile.

Profiling typically reveals that algorithms at a much higher level are naive.
People have wasted their time micro-optimising the functions that
are called most often and forgotten to look for the obvious opportunity
to reduce the number of calls 20-fold.

Martijn Lievaart

unread,
Jan 20, 2015, 2:31:10 PM1/20/15
to
On Tue, 20 Jan 2015 08:59:28 -0800, Öö Tiib wrote:

> On Tuesday, 20 January 2015 13:46:00 UTC+2, Martijn Lievaart wrote:
>> On Mon, 19 Jan 2015 16:48:32 -0800, Öö Tiib wrote:
>>
>> > The problem is when people use 'virtual' where run-time polymorphism
>> > isn't needed at all. If run-time polymorphism is needed then virtual
>> > functions are commonly more efficient than the typical alternatives.
>> > Typical alternatives are done with "type" or "kind" member and then
>> > switch-case or if-else or lookup-in-table to find out the correct
>> > behaviour. Such "polymorphism"
>> > is worse to read and slower than one level of additional indirection
>> > from virtual call.
>>
>> True, but on one point. Such "polymorphism" MAY be slower. I have seen
>> plenty of cases where it wasn't.
>
> Dinosaur switch-cases have been always much slower.

Nope. This is cargo cult programming. IF it is really important (that
should always be the first question) AND there are no more algorithmic
gains (should be second question) then, and only then, don't assume,
measure.

>
>> Profile, profile, profile.
>
> Profiling typically reveals that way higher level algorithms are naive.
> People have wasted their time to micro-optimise functions that are
> called most often and forgot to search for the obvious opportunity to
> reduce the count of calls 20 times.

True, but besides the point.

M4

Öö Tiib

unread,
Jan 20, 2015, 3:18:54 PM1/20/15
to
On Tuesday, 20 January 2015 21:31:10 UTC+2, Martijn Lievaart wrote:
> On Tue, 20 Jan 2015 08:59:28 -0800, Öö Tiib wrote:
>
> > On Tuesday, 20 January 2015 13:46:00 UTC+2, Martijn Lievaart wrote:
> >> On Mon, 19 Jan 2015 16:48:32 -0800, Öö Tiib wrote:
> >>
> >> > The problem is when people use 'virtual' where run-time polymorphism
> >> > isn't needed at all. If run-time polymorphism is needed then virtual
> >> > functions are commonly more efficient than the typical alternatives.
> >> > Typical alternatives are done with "type" or "kind" member and then
> >> > switch-case or if-else or lookup-in-table to find out the correct
> >> > behaviour. Such "polymorphism"
> >> > is worse to read and slower than one level of additional indirection
> >> > from virtual call.
> >>
> >> True, but on one point. Such "polymorphism" MAY be slower. I have seen
> >> plenty of cases where it wasn't.
> >
> > Dinosaur switch-cases have been always much slower.
>
> Nope. This is cargo cult programming. IF it is really important (that
> should always be the first question) AND there are no more algorithmic
> gains (should be second question) then, and only then, don't assume,
> measure.

I have done that for decades. The result is what I told you: "Dinosaur
switch-cases have always been much slower than virtual functions."
It is also logical. Otherwise the compiler would generate such switch-cases
under the hood for (at least some) virtual calls.

Scott Lurndal

unread,
Jan 20, 2015, 3:52:49 PM1/20/15
to
Öö Tiib <oot...@hot.ee> writes:
>On Tuesday, 20 January 2015 21:31:10 UTC+2, Martijn Lievaart wrote:
>> On Tue, 20 Jan 2015 08:59:28 -0800, Öö Tiib wrote:
>>
>> > On Tuesday, 20 January 2015 13:46:00 UTC+2, Martijn Lievaart wrote:
>> >> On Mon, 19 Jan 2015 16:48:32 -0800, Öö Tiib wrote:
>> >>
>> >> > The problem is when people use 'virtual' where run-time polymorphism
>> >> > isn't needed at all. If run-time polymorphism is needed then virtual
>> >> > functions are commonly more efficient than the typical alternatives.
>> >> > Typical alternatives are done with "type" or "kind" member and then
>> >> > switch-case or if-else or lookup-in-table to find out the correct
>> >> > behaviour. Such "polymorphism"
>> >> > is worse to read and slower than one level of additional indirection
>> >> > from virtual call.
>> >>
>> >> True, but on one point. Such "polymorphism" MAY be slower. I have seen
>> >> plenty of cases where it wasn't.
>> >
>> > Dinosaur switch-cases have been always much slower.
>>
>> Nope. This is cargo cult programming. IF it is really important (that
>> should always be the first question) AND there are no more algorithmic
>> gains (should be second question) then, and only then, don't assume,
>> measure.
>
>Done for decades. Result was told to you: "Dinosaur switch-cases have been
>always much slower than virtual functions."
>It is also logical. Otherwise compiler would generate for (at least some of)
>virtual calls such switch-cases under the hood.

I'm afraid it is difficult to take your word for this without
any data to support it.

It's clearly dependent upon each program. Considering that a
case statement (where the case index is non-sequential or sparse) is
generally a sequence of compares and branches, a good branch
predictor will keep the instruction pipeline full. A virtual
function call, being a non-predictable branch, will not only
result in a pipeline flush, but will also often, even likely
for large objects, hit a completely different cacheline to access
the vtbl for the object which, depending on residency in the LLC,
may result in a delay of between 80 and 400 instructions to fill.

Of course, a case statement where the indexes are relatively
sequential will often be generated as a simple table lookup
followed by an indirect branch within the current instruction
stream. Two or three instructions, likely icache resident and
no LLC fill required.
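
As a rough illustration of the indirection being described, here is a hand-rolled moral equivalent of a vtable dispatch. The names (Shape, VTable, call_area) are invented, and a real compiler/ABI lays this out differently; this only shows the extra loads and the indirect call.

#include <cstdio>

struct Shape;
struct VTable { double (*area)(const Shape*); };  // per-class table of function pointers

struct Shape {
    const VTable* vptr;   // one extra pointer stored in every object
    double w, h;
};

static double rect_area(const Shape* s)     { return s->w * s->h; }
static double triangle_area(const Shape* s) { return 0.5 * s->w * s->h; }

static const VTable rect_vtable     = { rect_area };
static const VTable triangle_vtable = { triangle_area };

double call_area(const Shape* s)
{
    // load vptr -> load function pointer -> indirect call; the table may
    // well sit on a different cache line than the object itself.
    return s->vptr->area(s);
}

int main()
{
    Shape r{ &rect_vtable, 3.0, 4.0 };
    Shape t{ &triangle_vtable, 3.0, 4.0 };
    std::printf("%f %f\n", call_area(&r), call_area(&t));
}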

the "otherwise compiler would generate such switch-cases under
the hood" statement is ridiculous, as that would not be
optimal in any case (pun intended) and clearly incompatible
with the relevent ABI's.

Melzzzzz

unread,
Jan 20, 2015, 4:05:21 PM1/20/15
to
There is a compiler that does not use vtables but rather a tree of tests:
http://smarteiffel.loria.fr/
following the same idea as yours.


Öö Tiib

unread,
Jan 20, 2015, 4:23:39 PM1/20/15
to
If we are talking about virtual calls made in some inner loop, then the
vtables of the classes involved are hot in the cache. If we are talking
about rare virtual calls, then those do not affect performance.

> Of course, a case statement where the indexes are relatively
> sequential will often be generated as a simple table lookup
> followed by an indirect branch within the current instruction
> stream. Two or three instructions, likely icache resident and
> no LLC fill required.
>
> the "otherwise compiler would generate such switch-cases under
> the hood" statement is ridiculous, as that would not be
> optimal in any case (pun intended) and clearly incompatible
> with the relevent ABI's.

What ABIs? A C++ compiler does pretty much whatever it wants to as
long as the externally observable behavior stays the same. If it is certain
about an object's type then it calls the virtual functions non-virtually.
It (or the linker) may even inline them.

Juha Nieminen

unread,
Jan 21, 2015, 6:08:36 AM1/21/15
to
Öö Tiib <oot...@hot.ee> wrote:
> Done for decades. Result was told to you: "Dinosaur switch-cases have been
> always much slower than virtual functions."
> It is also logical. Otherwise compiler would generate for (at least some of)
> virtual calls such switch-cases under the hood.

It depends on what the values of the cases are in the switch block.
If they are appropriate, most compilers will generate a jump table
instead of just a chain of conditionals.
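
A small sketch of that point (case values and function names are arbitrary); the actual code generation is compiler- and target-dependent, so checking the emitted assembly at -O2 is the only way to be sure:

int dense(int kind)
{
    switch (kind) {              // contiguous 0..3: a good jump-table candidate
    case 0: return 10;
    case 1: return 20;
    case 2: return 30;
    case 3: return 40;
    default: return -1;
    }
}

int sparse(int kind)
{
    switch (kind) {              // scattered values: typically a compare chain
    case 7:     return 10;       // or a binary search rather than a table
    case 190:   return 20;
    case 5021:  return 30;
    case 99999: return 40;
    default:    return -1;
    }
}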

--- news://freenews.netfront.net/ - complaints: ne...@netfront.net ---

Martijn Lievaart

unread,
Jan 21, 2015, 2:30:30 PM1/21/15
to
On Tue, 20 Jan 2015 20:52:36 +0000, Scott Lurndal wrote:

> Öö Tiib <oot...@hot.ee> writes:

>>Done for decades. Result was told to you: "Dinosaur switch-cases have
>>been always much slower than virtual functions."
>>It is also logical. Otherwise compiler would generate for (at least some
>>of) virtual calls such switch-cases under the hood.
>
> I'm afraid it is difficult to take your word for this without any data
> to support it.

Thanks for the support.

> It's clearly dependent upon each program. Considering that a case
> statement (where the case index is non-sequential or sparse) is
> generally a sequence of compares and branches, a good branch predictor
> will keep the instruction pipeline full. A virtual function call,
> being a non-predicable branch, will not only result in a pipeline flush,
> but will also often, even likely for large objects, hit a completely
> different cacheline to access the vtbl for the object which, depending
> on residency in the LLC, may result in a delay of between 80 and 400
> instructions to fill.

This is one important case where virtual functions can be (much) slower.

Another one is when the cost of the virtual destructor offsets the gains
from the virtual function. Common in designs where many small objects get
allocated and destructed frequently.

There probably are many other scenarios where virtual functions are not
faster than the alternatives.

Do understand that I advocate using virtual functions where they are
appropriate.

Only when virtual functions really seem to be the bottleneck (so after
deciding that it really is not fast enough, and after any algorithmic
gains, and after any other quick performance wins, and after optimizing
IO patterns, and after ...) only then it MAY be worthwhile to think about
whether virtual functions are the problem or the solution.

M4

Öö Tiib

unread,
Jan 21, 2015, 4:10:56 PM1/21/15
to
On Wednesday, 21 January 2015 21:30:30 UTC+2, Martijn Lievaart wrote:
> On Tue, 20 Jan 2015 20:52:36 +0000, Scott Lurndal wrote:
>
> > Öö Tiib <oot...@hot.ee> writes:
>
> >>Done for decades. Result was told to you: "Dinosaur switch-cases have
> >>been always much slower than virtual functions."
> >>It is also logical. Otherwise compiler would generate for (at least some
> >>of) virtual calls such switch-cases under the hood.
> >
> > I'm afraid it is difficult to take your word for this without any data
> > to support it.
>
> Thanks for the support.
>
> > It's clearly dependent upon each program. Considering that a case
> > statement (where the case index is non-sequential or sparse) is
> > generally a sequence of compares and branches, a good branch predictor
> > will keep the instruction pipeline full. A virtual function call,
> > being a non-predicable branch, will not only result in a pipeline flush,
> > but will also often, even likely for large objects, hit a completely
> > different cacheline to access the vtbl for the object which, depending
> > on residency in the LLC, may result in a delay of between 80 and 400
> > instructions to fill.
>
> This is one important case where virtual functions can be (much) slower.
>
> Another one is when the cost of the virtual destructor offsets the gains
> from the virtual function. Common in designs where many small objects get
> allocated and destructed frequently.

In that case the performance bottleneck has always been (in my tests)
the dynamic memory management itself, not the virtual functions (or
destructors). Memory managers have improved over time, but their work
is still way slower than the additional indirection of a virtual call.

> There probably are many other scenarios where virtual functions are not
> faster than the alternatives.
>
> Do understand that I advocate using virtual functions where they are
> appropriate.
>
> Only when virtual functions really seem to be the bottleneck (so after
> deciding that it really is not fast enough, and after any algorithmic
> gains, and after any other quick performance wins, and after optimizing
> IO patterns, and after ...) only then it MAY be worthwhile to think about
> whether virtual functions are the problem or the solution.

Do understand me too: I am not saying that all switch-cases or if-else-if
chains can be replaced with dynamic polymorphism. However, some that I have
seen were originally written instead of virtuals (or had grown over time
into a replacement for virtuals), and virtuals did improve performance in
such cases in my tests.

Martijn Lievaart

unread,
Jan 23, 2015, 3:25:15 AM1/23/15
to
On Wed, 21 Jan 2015 13:10:35 -0800, Öö Tiib wrote:

>> Another one is when the cost of the virtual destructor offsets the
>> gains from the virtual function. Common in designs where many small
>> objects get allocated and destructed frequently.
>
> On that case the performance bottle-neck has always been (in my tests)
> that dynamic memory management itself, not the virtual functions (or
> destructors). Memory managers are improved over time but their work is
> still way slower than additional indirection from virtual call.

Even though specific allocators can be very efficient, this is true. I
was thinking about stack allocation, though.

>> Only when virtual functions really seem to be the bottleneck (so after
>> deciding that it really is not fast enough, and after any algorithmic
>> gains, and after any other quick performance wins, and after optimizing
>> IO patterns, and after ...) only then it MAY be worthwhile to think
>> about whether virtual functions are the problem or the solution.
>
> Do understand me too, I am not saying that all switch-cases or
> if-else-if chains can be replaced with dynamic polymorphism. However
> some that I have seen were originally written instead of virtuals (or
> had grown over time into replacement of virtuals) and virtuals did
> improve performance on such cases in my tests.

Oh yes, in general they do. However:

1) In general is not always. It may even change over time; see, for
instance, the cost of exceptions.

2) Good design is much more important than micro-optimization, at least
at first. That should guide your decision whether to use virtuals, not
performance cargo cult.

3) "virtuals did improve performance on such cases in my tests." That's
exactly my point. Profile.

M4

Öö Tiib

unread,
Jan 23, 2015, 7:51:02 AM1/23/15
to
On Friday, 23 January 2015 10:25:15 UTC+2, Martijn Lievaart wrote:
> On Wed, 21 Jan 2015 13:10:35 -0800, Öö Tiib wrote:
>
> >> Another one is when the cost of the virtual destructor offsets the
> >> gains from the virtual function. Common in designs where many small
> >> objects get allocated and destructed frequently.
> >
> > On that case the performance bottle-neck has always been (in my tests)
> > that dynamic memory management itself, not the virtual functions (or
> > destructors). Memory managers are improved over time but their work is
> > still way slower than additional indirection from virtual call.
>
> Even although specific allocators can be very efficient, this is true. I
> was thinking about stack allocation though.
>
> >> Only when virtual functions really seem to be the bottleneck (so after
> >> deciding that it really is not fast enough, and after any algorithmic
> >> gains, and after any other quick performance wins, and after optimizing
> >> IO patterns, and after ...) only then it MAY be worthwhile to think
> >> about whether virtual functions are the problem or the solution.
> >
> > Do understand me too, I am not saying that all switch-cases or
> > if-else-if chains can be replaced with dynamic polymorphism. However
> > some that I have seen were originally written instead of virtuals (or
> > had grown over time into replacement of virtuals) and virtuals did
> > improve performance on such cases in my tests.
>
> Oh yes, in general they do. However:
>
> 1) In general is not always. It may even change over time, see f.i. the
> costs of exceptions.

If you read what I wrote, it was that the alternatives to virtuals have
commonly been slower. Later I wrote that the switch-cases I have measured
were always slower.

> 2) Good design is much more important than micro optimization, at least
> at first. That should guide your decision whether to use virtuals, not
> cargo cult about performance.

Usage of virtual functions can make a design more complex and harder to
follow or maintain. One should not use virtual functions without need.
Objective-C, for example, is an inherently difficult programming language
in that sense because there all member functions are technically virtual.

However ... I have not seen an example of a large dinosaur switch-case that
is easier to maintain. I did not measure the speed because I did not care;
the major performance bottleneck is most probably elsewhere. I have measured
performance only to demonstrate to the cargo-cult "no virtuals" gang defending
that dinosaur that they are wrong and that virtuals run a tiny bit faster.

> 3) "virtuals did improve performance on such cases in my tests." That's
> exactly my point. Profile.

Lately I optimize only for readability, robustness and maintainability during
a project. I profile at the end, on the whole program (not sliced-out
algorithms), together with stress testing on real field data. It has always
been about 10% of the code base that runs 90% of the runtime. So optimising
during the project for anything other than readability, robustness and
maintainability is premature in 90% of cases.

Martijn Lievaart

unread,
Jan 23, 2015, 4:15:45 PM1/23/15
to
On Fri, 23 Jan 2015 04:50:51 -0800, Öö Tiib wrote:

> If you read what I wrote then it was that alternatives to virtuals have
> been commonly slower. Later I wrote that switch cases I have measured
> were always slower.

I was not referring to what YOU wrote, but to what was written earlier in
this thread by others. In fact I think we're in vehement agreement. :-)

> Lately I optimize only for readability, robustness and maintainability
> during project. I profile at end and whole program (not sliced out
> algorithms) together with stress testing with real field data. It is
> always been about 10%
> of code-base that runs 90% of runtime. So optimising during project for
> something else but for readability, robustness and maintainability is
> preliminary on 90% of cases.

This is so important, I left it in just to repeat it once more :-)

M4

Ian Collins

unread,
Jan 23, 2015, 5:14:19 PM1/23/15
to
Öö Tiib wrote:
>
> Lately I optimize only for readability, robustness and maintainability during
> project. I profile at end and whole program (not sliced out algorithms)
> together with stress testing with real field data. It is always been about 10%
> of code-base that runs 90% of runtime. So optimising during project
> for something else but for readability, robustness and maintainability is
> preliminary on 90% of cases.

That is the best approach. If you have real field data and your tools
support it, profile feedback optimisation can often gain you a
performance boost simply by changing your build options.

--
Ian Collins

Richard

unread,
Jan 23, 2015, 5:22:39 PM1/23/15
to
[Please do not mail me a copy of your followup]

Martijn Lievaart <m...@rtij.nl.invlalid> spake the secret code
<40qapb-...@news.rtij.nl> thusly:
+100

Tobias Müller

unread,
Jan 26, 2015, 2:14:36 AM1/26/15
to
Richard <legaliz...@mail.xmission.com> wrote:
> It's from 2005, but the discussion is still relevant today because it
> mostly boils down to proper care and feeding of the data and
> instruction caches on a CPU. When game developers scream "no virtual
> functions!" that is their simplistic takeway from the real advice of
> "keep your cache hot". The latter is what you need to remember, not
> simplistic bugaboos about particular language features.

You are reducing the impact of virtual functions to cache hotness, which is
just as much an oversimplification.

One other very important impact of virtual functions is that they are a
hard optimization boundary at compile time, i.e. inlining is impossible.

Tobi

David Brown

unread,
Jan 26, 2015, 2:40:48 AM1/26/15
to
Virtual functions are not a hard optimisation boundary if the compiler
can figure them out at compile time (or link time, if you are using
link-time optimisation). Compilers have been improving quite a lot
recently at devirtualisation optimisations precisely to avoid
unnecessary costs in virtual functions.

Of course, if you access an object through a pointer to a base class,
and the compiler doesn't have all the relevant code at the time, then it
must use the virtual call mechanisms. But if the type of the object is
fully known, then the virtual call is handled directly - or can even be
inlined.
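
A minimal sketch of that situation (Base, Impl and the function names are invented; whether the call is actually devirtualised depends on the compiler and optimisation level):

struct Base {
    virtual ~Base() = default;
    virtual int value() const { return 1; }
};

struct Impl final : Base {            // 'final': no further overrides are possible
    int value() const override { return 42; }
};

int through_base(const Base& b)
{
    return b.value();                 // dynamic type unknown: virtual dispatch
}

int known_type()
{
    Impl i;
    // Here the dynamic type is fully known, so an optimising compiler can
    // call Impl::value() directly and usually inline it.
    return i.value() + through_base(i);
}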


Richard

unread,
Jan 26, 2015, 4:20:52 PM1/26/15
to
[Please do not mail me a copy of your followup]

Tobias Müller <tro...@bluewin.ch> spake the secret code
<1737645473443948659.15...@news.eternal-september.org> thusly:

>Richard <legaliz...@mail.xmission.com> wrote:
>> It's from 2005, but the discussion is still relevant today because it
>> mostly boils down to proper care and feeding of the data and
>> instruction caches on a CPU. When game developers scream "no virtual
>> functions!" that is their simplistic takeway from the real advice of
>> "keep your cache hot". The latter is what you need to remember, not
>> simplistic bugaboos about particular language features.
>
>You are reducing the impact of virtual functions to cache hotness which is
>just as much an oversimplification.

No, I am summarizing a real discussion with game developers.

Christopher Pisz

unread,
Jan 26, 2015, 5:30:36 PM1/26/15
to
On 1/12/2015 8:30 AM, alessio211734 wrote:
> class MyClass
> {
>
> ...
> static NearData nearCells[32];
>
> };



After all the discussion of profiling and virtual functions in child
threads ... I still want to know why in the world anyone would want a
static C array in their C++ class.

I'd sure like to know how the author intended to use this class, because
I question both making it static and using the C array vs an STL container.


Chris Vine

unread,
Jan 26, 2015, 7:25:47 PM1/26/15
to
Clearly there is a use case for a statically sized container with
contiguous storage or std::array would not be in C++11. And if the
array is a non-static class member, it makes little difference whether
you use a plain array or std::array because both have the same compiler
generated copy constructor and assignment operator. That use case
generally involves efficiency, because unlike std::vector arrays are
not dynamically allocated (unless you explicitly new them, which is
stupid because std::vector is available). For the cases I have
mentioned, I use a plain array, partly because that is what I have done
for years, and partly because ... why not?
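
To illustrate, a minimal sketch (reusing the NearData name and size from the original post, with made-up struct names): as a non-static member, a plain array and a std::array get the same compiler-generated copy behaviour, and neither touches the heap.

#include <array>

struct NearData { int cell; };

struct WithPlainArray {
    NearData nearCells[32];              // copied element-wise on assignment
};

struct WithStdArray {
    std::array<NearData, 32> nearCells;  // same semantics, same storage
};

int main()
{
    WithPlainArray a{}, b{};
    a = b;                               // copies all 32 elements

    WithStdArray c{}, d{};
    c = d;                               // likewise, no dynamic allocation
}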

std::array permits a zero size and a plain array does not, but I have
never written code requiring a zero size array.

Whether it should be a static member or not is orthogonal. If it is a
static member, it is of even less importance whether it is a plain
array or a std::array type. And maybe, of course, the OP was using
C++98/03.

Chris

Christopher Pisz

unread,
Jan 26, 2015, 7:44:12 PM1/26/15
to
On 1/26/2015 6:25 PM, Chris Vine wrote:
> On Mon, 26 Jan 2015 16:30:24 -0600
> Christopher Pisz <nos...@notanaddress.com> wrote:
>> On 1/12/2015 8:30 AM, alessio211734 wrote:
>>> class MyClass
>>> {
>>>
>>> ...
>>> static NearData nearCells[32];
>>>
>>> };
>>
>>
>>
>> After all the discussion of profiling and virtual functions in child
>> threads, ....I still want to know why in the world anyone would want
>> a static C-Array in their C++ class.
>>
>> I'd sure like to know how the author intended to use this class,
>> because I question both making it static and using the c-array vs a
>> stl container.
>
> Clearly there is a use case for a statically sized container with
> contiguous storage or std::array would not be in C++11. And if the
> array is a non-static class member, it makes little difference whether
> you use a plain array or std::array because both have the same compiler
> generated copy constructor and assignment operator. That use case
> generally involves efficiency, because unlike std::vector arrays are
> not dynamically allocated (unless you explicitly new them, which is
> stupid because std::vector is available). For the cases I have
> mentioned, I use a plain array, partly because that is what I have done
> for years, and partly because ... why not?

I often hear that same argument from C programmers: "std::vector
allocates." If you are using a C array, you must know the size beforehand,
and if you know the size beforehand, you can create the vector with that
size, so what are we really optimizing away in the name of efficiency?

I suppose you could argue the one-time allocation on the heap at
construction time vs on the stack, if your class resides on the stack
anyway. However, what are you really saving and what are you giving up?

Chris Vine

unread,
Jan 26, 2015, 7:57:40 PM1/26/15
to
On Mon, 26 Jan 2015 18:43:56 -0600
Christopher Pisz <nos...@notanaddress.com> wrote:
[snip]
> > Clearly there is a use case for a statically sized container with
> > contiguous storage or std::array would not be in C++11. And if the
> > array is a non-static class member, it makes little difference
> > whether you use a plain array or std::array because both have the
> > same compiler generated copy constructor and assignment operator.
> > That use case generally involves efficiency, because unlike
> > std::vector arrays are not dynamically allocated (unless you
> > explicitly new them, which is stupid because std::vector is
> > available). For the cases I have mentioned, I use a plain array,
> > partly because that is what I have done for years, and partly
> > because ... why not?
>
> I often hear that same argument from C programmers: "std::vector
> allocates." If you are using a c-array, you must know the size before
> hand, and if you know the size before hand, you can create the vector
> with that size, so what are we really optimizing away in the name of
> efficiency?
>
> I suppose you could argue the one time allocation on the heap at
> construction time vs on the stack, if your class resides on the stack
> anyway. However, what are you really saving and what are you giving
> up?

That is ridiculous. Allocation on the heap is a heavy-weight
operation compared with allocation on the stack, relatively speaking,
particularly in multi-threaded programs which require thread-safe
allocation and deallocation. Dynamic allocation also reduces cache
locality. Furthermore, a std::vector constructed with a particular
initial size calls the default constructor for each element, so you
sometimes have to reserve a size and then push_back onto it to avoid
that.
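
For example (sizes are arbitrary), constructing with a size value-initialises every element before you overwrite it, while reserve() plus push_back writes each element exactly once:

#include <cstddef>
#include <vector>

std::vector<int> sized()
{
    std::vector<int> v(1000);         // allocates AND zero-initialises 1000 ints
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] = static_cast<int>(i);   // every element ends up written twice
    return v;
}

std::vector<int> reserved()
{
    std::vector<int> v;
    v.reserve(1000);                  // allocates, constructs nothing yet
    for (int i = 0; i < 1000; ++i)
        v.push_back(i);               // each element written exactly once
    return v;
}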

Are you seriously saying that in your code you always use std::vector
instead of an array even when the size is known statically? And why do
you think std::array is in C++11?

The case for using std::vector even with statically known sizes is when
you may carry out move operations, as moving vectors just requires
swapping pointers. But you would still need to profile.

I find your question disturbing.

Chris

Ian Collins

unread,
Jan 27, 2015, 2:10:09 AM1/27/15
to
Christopher Pisz wrote:
>
> I suppose you could argue the one time allocation on the heap at
> construction time vs on the stack, if your class resides on the stack
> anyway. However, what are you really saving and what are you giving up?

Locality of reference?

--
Ian Collins

Juha Nieminen

unread,
Jan 27, 2015, 3:21:41 AM1/27/15
to
Christopher Pisz <nos...@notanaddress.com> wrote:
> After all the discussion of profiling and virtual functions in child
> threads, ....I still want to know why in the world anyone would want a
> static C-Array in their C++ class.

Because it's enormously more efficient.

It's faster to allocate and deallocate (allocating memory for the array
doesn't take any additional time beyond allocating the instance of
the class the array is inside). It consumes less memory. It does not
contribute to memory fragmentation.

Sure, in a class that's instantiated a few times it doesn't matter.
However, in a class that's instantiated tens of thousands, or even
millions of times, it matters quite a lot.
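
A rough sketch of the allocation-count difference (struct names and the size are invented): creating the first class on the heap is one allocation, creating the second is at least two, with the element storage in a separate block.

#include <vector>

struct Embedded {
    int cells[32];                                  // storage lives inside the object
};

struct Indirect {
    std::vector<int> cells = std::vector<int>(32);  // storage in a separate heap block
};

int main()
{
    auto* e = new Embedded{};   // 1 allocation
    auto* i = new Indirect{};   // 2+ allocations, and poorer locality
    delete e;
    delete i;
}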

David Brown

unread,
Jan 27, 2015, 5:23:12 AM1/27/15
to
1.

Allocating a block of data on the stack takes a couple of instructions -
allocating it on the heap can mean system calls, locks, threading issues,
etc. (Typically a new/malloc implementation holds a local pool that can
be allocated from quickly, and only needs a system-level allocation to
refill the local pool - so allocation times can vary from "not much" to
"a lot", depending on the state of the local pool.)

And allocating a block statically takes no instructions at all.

If you are doing this once, dynamic allocation of a vector means wasting
a few microseconds, which is obviously not a concern. But if you are
doing it thousands of times, it adds up.

2.

Static allocation, or at least stack allocation, is far more amenable to
compile-time checking and (if necessary) faster run-time checks to help
catch errors sooner.

3.

Statically allocated data, or at least stack allocated data, requires
fewer instructions and fewer registers to access, making it faster. It
also has better locality of reference and is more likely to be in the
cache, which can make a huge difference (this depends on the size of the
data and the access patterns, of course).


Coming from an embedded background, where this is often much more
relevant than on desktop systems (partly because memory fragmentation on
heaps is a serious issue when you don't have virtual memory), there is a
clear golden rule that all allocations should be static if possible,
with stack allocation as a second-best. In many embedded systems,
dynamic allocation is not allowed at all - it is certainly never encouraged.

Christopher Pisz

unread,
Jan 27, 2015, 10:28:45 AM1/27/15
to
Well, this is what I am getting at. If there is _one_ allocation, there
is no need to apply a rule of "I won't use any STL container, because
allocation!", which is often what I hear from C programmers. More often
than not, A) they are not even aware you can specify the size at construction
time for a vector, or reserve a size afterwards if you wish; B) they haven't
even considered how often the container, or the class that owns the container,
is being constructed.

> 2.
>
> Static allocation, or at least stack allocation, is far more amenable to
> compile-time checking and (if necessary) faster run-time checks to help
> catch errors sooner.

What exactly does "amenable to compile time checking" mean?

"Run time checks to catch errors sooner?" What run time check is going
to occur on a c-array? Run time checks would be a reason to use an STL
container in the first place. Of course none is faster than some.


> 3.
>
> Statically allocated data, or at least stack allocated data, requires
> fewer instructions and fewer registers to access, making it faster. It
> also has better locality of reference and is more likely to be in the
> cache, which can make a huge difference (this depends on the size of the
> data and the access patterns, of course).

Not disagreeing with you, but I've never read any evidence of a
contiguous block of memory on the stack vs a contiguous block of memory
on the heap being more likely to be in the cache. Have anything to
reference?

>
> Coming from an embedded background, where this is often much more
> relevant than on desktop systems (partly because memory fragmentation on
> heaps is a serious issue when you don't have virtual memory), there is a
> clear golden rule that all allocations should be static if possible,
> with stack allocation as a second-best. In many embedded systems,
> dynamic allocation is not allowed at all - it is certainly never encouraged.

Well, that's the thing. I often work with C programmers who came from a
background where it was relevant, but then they adopt silly rules like
"don't ever use an STL container, because 'allocation'" in places where
the difference is negligible.

You do, but I have my doubts that the OP did. I am willing to bet he was
doing what he did, "just because", but we'll never know.

Christopher Pisz

unread,
Jan 27, 2015, 10:40:24 AM1/27/15
to
No one is arguing otherwise.

The thing I have a problem with is that I often end up working with people
who use that as an excuse to never use an STL container at all,
regardless of how often allocation may occur.


> relatively speaking,
> particularly in multi-threaded programs which require thread-safe
> allocation and deallocation.

std::vector<int>(10) is going to be the same whether my program is
multithreaded or not.

If there are thread safety concerns, then most likely the container
would be inside some class, and the same locking mechanisms are going to
be added whether it is a std::vector, a c-array, or a snuffleupagus.

> Dynamic allocation also reduces cache
> locality.

Someone else said this too. Not disagreeing, but I've never read it. Got
a reference?

> Furthermore std::vector constructed with a particular
> initial size calls the default constructor for each element, so you
> sometimes have to reserve a size and then pushback onto it to avoid
> that.

But again, how long does it take to construct an int? It depends on the
scenario, and like I said, I suspect the OP was doing what he was doing
just because "Me like meat. STL bad... C gud.", but it looks like he has
lost interest in his topic.

jak

unread,
Jan 27, 2015, 11:38:49 AM1/27/15
to
On 27/01/2015 16:40, Christopher Pisz wrote:
> but again, how long does it take to construct an int? It depends on the
> scenario and like I said, I suspect the OP was doing what he was doing
> just because "Me like meat. STL bad....C gud.", but looks like he lost
> interest in his topic.

I have not lost interest; indeed, I am following you with attention. My
problem was to serialize a resource. I use the static field in the class
to save the state of the resource so that, even when I declare a new
variable of that class in the various functions, I always know the
situation of the resource.
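
A minimal sketch of that idea, assuming invented names (ResourceState and its members are not from the original code): the static member is shared by every instance and is untouched by copy or assignment.

struct ResourceState { bool open = false; int users = 0; };

class MyClass {
public:
    void acquire() { state.open = true; ++state.users; }
    static ResourceState state;   // one copy, shared by all instances
};

// definition goes in exactly one .cpp file
ResourceState MyClass::state;

void example()
{
    MyClass m1, m2;
    m1.acquire();
    m2 = m1;                      // 'state' is not copied by the assignment;
                                  // both objects already see the same state
}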

PS:
I had to cut the rest of the discussion because my application gave me a
sending error. Sorry.

Christopher Pisz

unread,
Jan 27, 2015, 12:04:51 PM1/27/15
to
Oh, I thought some fellow named Alessio was the OP. You are using two
different accounts then, I suppose.

Chris Vine

unread,
Jan 27, 2015, 2:52:12 PM1/27/15
to
On Tue, 27 Jan 2015 09:40:12 -0600
Christopher Pisz <nos...@notanaddress.com> wrote:
> On 1/26/2015 6:57 PM, Chris Vine wrote:
> > Allocation on the heap is a heavy-weight ...
> > operation compared with allocation on the stack,
> > relatively speaking,
> > particularly in multi-threaded programs which require thread-safe
> > allocation and deallocation.
>
> std::vector<int>(10) is going to be the same whether my program is
> multithreaded or not.
>
> If there are thread safety concerners, then most likely the container
> would be inside some class and the same locking mechanisms are going
> to be added whether it is a std::vector, a c-array, or a
> snuffleupugus.

That's not the point. The heap is a global resource and the _heap
manager_ has to deal with concurrency in a multi-threaded program. This
has nothing to do with concurrent access to the same container (if you
are going to have concurrent access to the same container you need to at
least look at std::list if you are going to have a lot of contention).
The fact that the heap manager has to cope with concurrency is one of
the reasons why dynamic allocation has overhead.

> > Dynamic allocation also reduces cache
> > locality.
>
> Someone else said this too. Not disagreeing, but I've never read it.
> Got a reference?

I cannot come up with a study (although I would be surprised if there
wasn't one), but it must do so, because memory allocated on the heap
will not be in the same cache line as the memory for the object itself.

> > Furthermore std::vector constructed with a particular
> > initial size calls the default constructor for each element, so you
> > sometimes have to reserve a size and then pushback onto it to avoid
> > that.
>
> but again, how long does it take to construct an int? It depends on
> the scenario and like I said, I suspect the OP was doing what he was
> doing just because "Me like meat. STL bad....C gud.", but looks like
> he lost interest in his topic.

It's the cost of unnecessary zero initialization of built-in types.
Small, but it is there, and if you can avoid it then why not?

I just come back to the point that failing to make use of an array (or
std::array) instead of std::vector where you have a statically
(constexpr) sized container of built-in types, just because you don't
like C programming practices, seems crazy to me. It's a bit like
failing to call reserve() on a vector when you actually know
programmatically at run time what the final size of the vector will be,
and instead just relying on the vector's normal algorithm to allocate
memory in multiple enlarging steps as it is pushed onto. The code will
still work, but you will be doing unnecessary allocations of memory and
copying or moving of vector elements. Don't do it if you can avoid it.

Take the low-hanging fruit. It is not premature optimization to follow
a few simple rules which will make all your programs faster without any
extra effort.

Chris

Richard

unread,
Jan 27, 2015, 4:17:25 PM1/27/15
to
[Please do not mail me a copy of your followup]

David Brown <david...@hesbynett.no> spake the secret code
<ma7ot4$mth$1...@dont-email.me> thusly:

>And allocating a block statically takes no instructions at all.

Another takeaway from talking to the game guys that I didn't mention was
that they simply used fixed-sized arrays to hold all their level data.
They adjust the fixed size to handle the largest level in their shipping
product. Simple resource management that retains locality of reference.

If for some reason you don't like C arrays, there is std::array that
has the same benefits.
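
A sketch of that pattern with std::array (Entity, kMaxEntities and the member names are invented; the capacity would be tuned to the largest level actually shipped):

#include <array>
#include <cstddef>

struct Entity { float x, y; int type; };

constexpr std::size_t kMaxEntities = 4096;        // sized for the biggest level

struct Level {
    std::array<Entity, kMaxEntities> entities;    // contiguous, no heap traffic
    std::size_t count = 0;                        // how many slots are in use

    bool add(const Entity& e)
    {
        if (count == kMaxEntities)
            return false;                         // over the fixed budget
        entities[count++] = e;
        return true;
    }
};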

David Brown

unread,
Jan 27, 2015, 5:58:48 PM1/27/15
to
Certainly for a one-off allocation, the time taken for the malloc (or
whatever "new" uses) is irrelevant. But issues 2 and 3 below still apply.

I am trying to give /valid/ reasons here - I agree with you that some
programmers will give invalid or mythical reasons for not using the STL
(or any other feature of C++ or the library).

> More often
> than not A) They are not even aware you can specify size at construction
> time for a vector or reserve size after if you wish B) They haven't even
> considered how often the container or class that owns the container is
> being constructed.
>
>> 2.
>>
>> Static allocation, or at least stack allocation, is far more amenable to
>> compile-time checking and (if necessary) faster run-time checks to help
>> catch errors sooner.
>
> What exactly does "amenable to compile time checking" mean?

In the case of an array, this would mean spotting some out-of-bounds
accesses at compile-time.

The general rule of compile-time checking (and also optimisation) is to
give the compiler as much information as possible, make as much as
possible static and constant, and keep scopes to a minimum. In this
respect, static allocation is always better than dynamic allocation.

>
> "Run time checks to catch errors sooner?" What run time check is going
> to occur on a c-array? Run time checks would be a reason to use an STL
> container in the first place. Of course none is faster than some.

Run-time checks on a C array need to be implemented manually (unless you
have a compiler that supports them as an extension of some sort). They
could of course be added in a class that wraps a C array. But they will
be more efficient than for a vector, because the size is fixed and known
at compile-time.

Note that the new std::array<> template gives many of the advantages of
C arrays combined with the advantages of vectors, especially when the
array<> object is allocated statically (or at least on the stack). The
point here is static allocation, not the use of C arrays.
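
For instance (a small sketch, nothing here is from the original post): std::array keeps the fixed size and layout of a C array, with unchecked operator[] for speed and a checked at() when you want the run-time check.

#include <array>
#include <cstdio>
#include <stdexcept>

int demo()
{
    std::array<int, 8> a{};     // statically sized, no dynamic allocation

    a[3] = 42;                  // unchecked access, same cost as a C array

    try {
        return a.at(9);         // checked access: throws std::out_of_range
    } catch (const std::out_of_range&) {
        std::puts("index out of range caught");
        return -1;
    }
}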

>
>
>> 3.
>>
>> Statically allocated data, or at least stack allocated data, requires
>> fewer instructions and fewer registers to access, making it faster. It
>> also has better locality of reference and is more likely to be in the
>> cache, which can make a huge difference (this depends on the size of the
>> data and the access patterns, of course).
>
> Not disagreeing with you, but I've never read any evidence of a
> contiguous block of memory on the stack vs a contiguous block of memory
> on the heap being more likely to be in the cache. Have anything to
> reference?
>

Think about how a stack works - especially if we are talking about small
arrays. The code will regularly be accessing the data on the stack, so
the stack will be in the CPU caches. Data allocated on the heap will,
in general, not already be in the cache.

It may seem that this does not matter - after all, your program will
write to the new array before reading it, and thus it does not matter if
the old contents of the memory is in the cache. But unless you are
writing using very wide vector stores (or using cache zeroing
instructions), when you write your first 32-bit or 64-bit value, the
rest of that cache line has to be read in from main memory before that
new item is written to the cache. And even if you are using wide stores
that avoid reads to the cache, allocating the new cache line means
pushing out existing cache data - if that's dirty data, it means writing
it to memory.

How big a difference this makes will depend on usage patterns, cpu type,
cache policies, etc.

Regarding instruction and register usage, it's fairly clear that a
dynamically allocated structure is going to need one more pointer and
one more level of indirection than a stack allocated or (preferably) a
statically allocated structure.


>>
>> Coming from an embedded background, where this is often much more
>> relevant than on desktop systems (partly because memory fragmentation on
>> heaps is a serious issue when you don't have virtual memory), there is a
>> clear golden rule that all allocations should be static if possible,
>> with stack allocation as a second-best. In many embedded systems,
>> dynamic allocation is not allowed at all - it is certainly never
>> encouraged.
>
> Well, that's the thing. I often work with C programmers whom came from a
> background where it was relevant, but then they adopt silly rules like
> "don't ever use an STL container, because 'allocation'" in places where
> the difference is negligible.

Regardless of speed issues (which are often not relevant), there are
code correctness and safety issues in using dynamic memory. With C++
used well, many of these issues are solved by using resource container
classes (including the STL), smart pointers, etc., rather than the
"naked" pointers of C. Programmers should be more wary of dynamic
memory in C than in C++. But no matter what the language, programmers
should be aware of the costs and benefits of particular constructs, and
use them appropriately.

>
> You do, but I have my doubts that the OP did. I am willing to bet he was
> doing what he did, "just because", but we'll never know.

Indeed.

jak

unread,
Jan 28, 2015, 12:46:36 PM1/28/15
to
More or less. The question is common and I am the most obstinate among
us. :)