In <u8MAB.80366$gF2....@fx31.iad>, it was written:
>We have an application that has 24 to 96 cpu-bound threads and runs for
>several days. Replacing the dogmatic C++ smart pointer style with
>a more C style of element management improved performance by 40%, which
>cut close to a day off the run time.
You seem to equate advice with dogma.
Any piece of reasonable advice can be twisted into usage that is
obviously foolish.
It's like turning the advice "owning your own home is a good thing"
into taking out a variable-rate mortgage with a massive balloon
payment and servicing that mortgage early on with 90% of your
take-home pay. You took a piece of reasonable advice and turned it
into a hell of your own making.
The advice is to use smart pointers to declare explicitly who owns
the heap memory. Does that mean slavishly using shared_ptr every time
you pass one of these pointers around? No, and in fact that's a bad
idea for performance, particularly if you didn't use make_shared to
construct the pointers in the first place. Pass a shared_ptr<T> by
value only when you mean to share or transfer ownership. If you just
need access to the data, pass a plain pointer or a reference.
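To make that concrete, here's a tiny sketch (my own example with
invented names, not code from anyone's talk). A by-value shared_ptr
parameter documents that the callee keeps a share of ownership; a
reference parameter documents access only:

    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Widget { int id; };

    // Access only: no refcount traffic, no ownership implied.
    void print(const Widget& w) { std::printf("widget %d\n", w.id); }

    struct Registry {
        std::vector<std::shared_ptr<Widget>> widgets;
        // Ownership sink: by-value shared_ptr says "I keep a share".
        void store(std::shared_ptr<Widget> w) {
            widgets.push_back(std::move(w));
        }
    };

    int main() {
        auto w = std::make_shared<Widget>(Widget{42});
        print(*w);   // cheap: just a reference, no refcount touch
        Registry r;
        r.store(w);  // the refcount bump is the point: ownership shared
    }

The signature alone tells you what the function intends to do with
the pointer, which is the whole game.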
I once worked on a team that insisted that the arguments of deep call
chains should all be shared_ptr instead of just the raw pointer.
This resulted in a lot of time wasted incrementing and decrementing
the reference count, even though the called functions didn't own any
of the data they were being given. If make_shared wasn't used to
create those shared_ptrs, then you have potential cache misses as
well, stomping all over your performance by stalling the processor.
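For the curious, the make_shared point comes down to allocation
layout. A minimal illustration of my own (this reflects standard
library behavior, not code from the thread):

    #include <memory>

    struct Node { int value; };

    int main() {
        // Two allocations: one for the Node, a second for the control
        // block holding the reference count. Object and count can land
        // far apart in memory, so touching both risks a cache miss.
        std::shared_ptr<Node> a(new Node{1});

        // One allocation: the Node and its control block are adjacent,
        // so the refcount update and the data access share locality.
        auto b = std::make_shared<Node>(Node{2});

        // Every by-value copy is an atomic increment, and each copy's
        // destruction an atomic decrement: that's the wasted work in
        // deep call chains that never actually own the data.
        auto c = b;
        (void)a;
        return c->value;
    }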
I couldn't convince them that this was paying a cost for little to
no benefit, so I complied with their wishes. They kept arguing,
"well, what if someone stores that pointer inside the call chain?"
To which I'd give them a look that said "what, strangers are
committing to our code base without our knowledge?" This was a team
where every commit went
through peer code review, so we should have been addressing real problems
and not imaginary monsters in the closet that might come out at night.
If this kind of mindless "it has to be shared_ptr everywhere"
behavior is what you mean by "dogma", then surprise: I agree with
you.
However, if you watch Herb Sutter's talk, that isn't what he's been
saying. He's not issuing dogma. He is offering advice on how to use
heap memory in such a way that ownership is explicit by design and
the code is leak-free by construction. He's talking about the data
structures: declaring members as shared_ptr or unique_ptr makes them
leak-free by default and explicitly declares who owns the data.
From there, you program so that a raw pointer carries no implication
of ownership, only access. That way, if I ever see a function taking
a raw pointer, I know that ownership isn't transferred, only access
provided. The called code never becomes an owner of the pointed-to
data.
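Something like this sketch, with types I invented for illustration
(not Herb's code):

    #include <memory>
    #include <vector>

    struct Texture { int pixels = 0; };

    // Ownership is declared in the data structure: a Scene uniquely
    // owns its textures. Destroying the Scene releases them all, so
    // the type is leak-free by construction; there is no delete to
    // forget.
    struct Scene {
        std::vector<std::unique_ptr<Texture>> textures;
    };

    // A raw pointer in a signature means access only, by convention:
    // the callee never deletes it and never becomes an owner.
    int sample(const Texture* t) { return t ? t->pixels : 0; }

    int main() {
        Scene scene;
        scene.textures.push_back(std::make_unique<Texture>());
        return sample(scene.textures.front().get());
    }   // Scene's destructor frees every Texture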
With sufficient personal discipline, you can use raw pointers and C
style programming habits and make all of this work. However, I see no
reason to reject Herb's advice, because it is consistent with my own
programming experience. Once I started using smart pointers and standard
library containers to manage the ownership of resources, I stopped having
memory/resource leaks. An entire class of problems just went away and
I could focus on the logic in my problem domain and not resource leaks.
It's not that I can't make it work the C way, or that I don't know
how to write efficient code; it's that C++ lets me achieve the same
objective (leak safety without compromising efficiency) with less
work.