
“The pool of talented C++ developers is running dry”

134 views

Lynn McGuire

Nov 3, 2022, 3:31:10 PM
“The pool of talented C++ developers is running dry”

https://www.efinancialcareers.com/news/2022/11/why-is-there-a-drought-in-the-talent-pool-for-c-developers

Huh, maybe I can get one of those vaunted C++ $500,000/year developer
jobs even at my age of 62.

Hat tip to:
https://www.codeproject.com/script/Mailouts/View.aspx?mlid=16845

Lynn

Chris M. Thomasson

Nov 3, 2022, 4:04:21 PM
On 11/3/2022 1:03 PM, Chris M. Thomasson wrote:
> On 11/3/2022 12:30 PM, Lynn McGuire wrote:
>> “The pool of talented C++ developers is running dry”
>>
>> https://www.efinancialcareers.com/news/2022/11/why-is-there-a-drought-in-the-talent-pool-for-c-developers
>>
>> Huh, maybe I can get one of those vaunted C++ $500,000/year developer
>> jobs even at my age of 62.
>
> Make sure to include the experience of porting a 700,000+ line legacy
> Fortran 77 program into C++ Lynn. :^)

Just another aspect of your resume...

Chris M. Thomasson

Nov 3, 2022, 4:04:50 PM
On 11/3/2022 12:30 PM, Lynn McGuire wrote:
> “The pool of talented C++ developers is running dry”
>
> https://www.efinancialcareers.com/news/2022/11/why-is-there-a-drought-in-the-talent-pool-for-c-developers
>
> Huh, maybe I can get one of those vaunted C++ $500,000/year developer
> jobs even at my age of 62.

Make sure to include the experience of porting a 700,000+ line legacy
Fortran 77 program into C++, Lynn. :^)


Lynn McGuire

Nov 3, 2022, 5:06:13 PM
I've gotta finish the experience first. It is a monster.

Lynn

Lynn McGuire

Nov 3, 2022, 5:08:03 PM
On 11/3/2022 3:04 PM, Chris M. Thomasson wrote:
> On 11/3/2022 1:03 PM, Chris M. Thomasson wrote:
>> On 11/3/2022 12:30 PM, Lynn McGuire wrote:
>>> “The pool of talented C++ developers is running dry”
>>>
>>> https://www.efinancialcareers.com/news/2022/11/why-is-there-a-drought-in-the-talent-pool-for-c-developers
>>>
>>> Huh, maybe I can get one of those vaunted C++ $500,000/year developer
>>> jobs even at my age of 62.
>>
>> Make sure to include the experience of porting a 700,000+ line legacy
>> Fortran 77 program into C++ Lynn. :^)
>
> Just another aspect of your resume...

I turned down a job offer from Microsoft in 1987 and Google in 2004.
Wait, that might not look good on my resume.

Lynn

Chris M. Thomasson

Nov 3, 2022, 5:25:58 PM
Heck, add current experience. Like a log in a resume. Include a link in
the "short" version to a longer version that records your journey in the
porting process.

Bonita Montero

Nov 4, 2022, 4:25:00 AM
With your skills, of course.


Lynn McGuire

Nov 5, 2022, 3:12:48 PM
I haven't touched my resume since 1989. It is fairly out of date.

Lynn

Chris M. Thomasson

Nov 5, 2022, 5:20:29 PM
Shit happens. Well, go ahead and update the file. :^)

Vir Campestris

Nov 5, 2022, 5:42:50 PM
The pool just got one drier. I quit in April (UK tax year). I wasn't earning 500k,
but I reckon I have enough put away for the rest of my life.

It's a balance between running out of money, and running out of health :(

There was an article in today's paper about the growing number of people
becoming economically inactive. Including early retirees. The tax system
over here doesn't help much, mind - it's discouraging when the government
takes a bigger share of your earnings the more you earn.

And anyway ... why am I still reading this newsgroup? <g>

Andy

Chris M. Thomasson

Nov 6, 2022, 2:55:17 PM
On 11/3/2022 12:30 PM, Lynn McGuire wrote:
Fwiw, I am good at C. I know C++, but do not yet totally understand some
of its more "modern" features, so to speak. I know enough C++ to code up
synchronization algorithms a la atomics and membars, but feel more at
_home_ in C.

Malcolm McLean

Nov 8, 2022, 5:45:04 AM
You keep on adding features to the language which have unintuitive syntax and odd rules, and
don't do much to increase the number of programs you can write quickly. So what happens?
There's not much motivation to learn these features until forced to do so. So codebases tend
to be mainly legacy, and C++ programmers' skills fall behind.

Juha Nieminen

Nov 8, 2022, 6:29:19 AM
Malcolm McLean <malcolm.ar...@gmail.com> wrote:
> You keep on adding features to the language which have unintuitive syntax and odd rules, and
> don't do much to increase the number of programs you can write quickly. So what happens?
> Theres not much motivation to learn these features until forced to do so. So codebases tend
> to be mainly legacy, and C++ programmers' skills fall behind.

For the longest time I quite strongly disagreed with the claim that C++ is
becoming too big and too complicated.

However, C++20 has eroded this conviction of mine somewhat. C++23 is eroding
it even more.

C++11 felt like a big bunch of features that the language was in dire need
of, and genuinely made programming easier. C++14 and C++17 fixed and patched
many of the minor problems and defects that turned out to exist in C++11,
so C++17 felt like "what C++11 should have been in the first place".

C++20, however, doesn't feel like this anymore. It has a few new features
that genuinely help in programming, but most of it feels like just adding
features for the sake of adding them. C++23 even more so.

With both there are tons of features for which I genuinely think "who
exactly asked for this? Who actually needs this? Does the language truly
need and benefit from this?"

When you read the feature proposal papers, they feel a lot like it's
just someone proposing a feature for the sake of proposing it, rather
than it filling an *actual* useful need that the language is in dire
need of. In many cases I don't even understand why they have been
accepted.

And even so, even when some feature *is* somewhat widely requested, they
should still weigh in how much it *actually* contributes to the usability
of the language, and how much it adds to its complexity (which is already
quite high). The standardization committee seems to have a problem with
saying "no" even to some sounds-like-it-could-be-useful feature proposals
(not to talk about some that aren't all that useful).

This feeling of adding-features-just-for-the-sake-of-adding-them is
something that has really started to give me a distaste for the newer
versions of the standard. Both C++20 and C++23 have actual useful
features that I'm looking forward to, but they seem to be a very small
minority of all features.

Öö Tiib

Nov 8, 2022, 6:54:48 AM
I disagree on C++17. What the core language of C++14 guaranteed was
quite trashed by C++17. It is impossible to write your own vector in it;
std::vector is a magical class now. The things that were added felt like
sabotage, not needed features. C++17 did add a few sorely needed library
features, but those are all magical too, and on closer examination they
were added noticeably poorly. Compilers of course still compile most of the
UBs that we have in our code-bases (as the code-bases of the compiler
suppliers are full of the same UBs) until the suppliers decide otherwise.
The same trend continues in C++20 and C++23. C++ should either roll most
of that broken-beyond-fixing garbage back ... or it makes sense
to migrate to Rust, Swift, D, or C, in which it is still possible to write
software without using the magical classes of the standard library.

Stuart Redmann

Nov 8, 2022, 8:26:21 AM
Öö Tiib <oot...@hot.ee> wrote:

> I disagree on C++17. What the core language of C++14 guaranteed was
> quite trashed by C++17. It is impossible to write vector in it; the
> std::vector is magical class now.

Could you elaborate on this? Or maybe post a link? A quick web search yields
nothing usable :-/

TIA,
Stuart

Bonita Montero

Nov 8, 2022, 8:52:55 AM
Am 08.11.2022 um 12:29 schrieb Juha Nieminen:

> C++20, however, doesn't feel like this anymore. It has a few new features
> that genuinely help in programming, but most of it feels like just adding
> features for the sake of adding them. C++23 even moreso.

I don't think ranges are really necessary, but they're of course
technically very elegant. But concepts and coroutines really rock. Don't
use them if you're overburdened by them.


Juha Nieminen

Nov 8, 2022, 10:17:06 AM
Öö Tiib <oot...@hot.ee> wrote:
> I disagree on C++17. What the core language of C++14 guaranteed was
> quite trashed by C++17.

If you read the list of added features in C++17 (from eg. the Wikipedia
article), most of them sound quite useful and practical, many making
things simpler. There are a few features that could have probably been
left out without anybody really missing them, but most of them seem
very practical.

The text message for static_assert becoming optional is handy and nice.
The "namespace x::y::z" syntax for nested namespaces is really convenient.
The new attributes are useful (especially [[maybe_unused]]).
Initializers in 'if' and 'switch' statements may not be crucial, but
sometimes they become handy.
The same can be said of structured bindings and class template argument
deduction.
Mandatory copy elision adds an efficiency guarantee (and doesn't change
the language syntax itself), so that's nice.
Inline variables are actually useful, especially for header-only libraries.
__has_include is a nice-to-have feature.

constexpr functions are quite a swamp that should have been thought better
in C++11, but given that they exist in their current form and we are stuck
with them, 'if constexpr' can be a useful addition. (The same could be
said of variadic templates and the new fold expressions in C++17.)

(Not going to comment about the additions to the standard library because
the library is not part of the core language syntax and semantics, and thus
not really what I'm talking about here.)

Jorgen Grahn

Nov 8, 2022, 5:48:20 PM
On Tue, 2022-11-08, Juha Nieminen wrote:
> Malcolm McLean <malcolm.ar...@gmail.com> wrote:

>> You keep on adding features to the language which have unintuitive
>> syntax and odd rules, and don't do much to increase the number of
>> programs you can write quickly. So what happens? Theres not much
>> motivation to learn these features until forced to do so. So
>> codebases tend to be mainly legacy, and C++ programmers' skills
>> fall behind.
>
> For the longest time I quite strongly disagreed with the claim that C++ is
> becoming too big and too complicated.
>
> However, C++20 has eroded this conviction of mine somewhat. C++23 is eroding
> it even more.
>
> C++11 felt like a big bunch of features that the language was in dire need
> of, and genuinely made programming easier. C++14 and C++17 fixed and patched
> many of the minor problems and defects that turned out to exist in C++11,
> so C++17 felt like "what C++11 should have been in the first place".

Same here (although I haven't bought C++17 yet; I'm still at C++14).

I think we can conclude it's not just me and you growing old and
conservative: there /has/ been a shift in how C++ evolves, and such a
shift will have drawbacks.

Going back to the "pool of talented C++ developers" angle, I have
sometimes wondered if this evolution has been necessary to attract
developers. Maybe the youngsters are used to the constantly changing
web environments, and won't touch something which hasn't changed in a
decade?

But I don't believe that. I believe we need a subset of programmers
to deal with big programs with a long lifetime which go through lots
of small, incremental changes. C++ is a good fit for that, but quick
changes to the C++ language are not.

One thing I've learned in recent years at work is that you don't need
everyone to start out as a C++ expert. We've had to hire several
newbies who didn't know C++ or had spent a few weeks with it at
university. Those with talent for that kind of development learn
surprisingly quickly from the existing code base, and from reviews.
Even those who I assumed were damaged by too much Python.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Chris M. Thomasson

Nov 8, 2022, 6:39:25 PM
On 11/8/2022 3:29 AM, Juha Nieminen wrote:
> Malcolm McLean <malcolm.ar...@gmail.com> wrote:
>> You keep on adding features to the language which have unintuitive syntax and odd rules, and
>> don't do much to increase the number of programs you can write quickly. So what happens?
>> Theres not much motivation to learn these features until forced to do so. So codebases tend
>> to be mainly legacy, and C++ programmers' skills fall behind.
>
> For the longest time I quite strongly disagreed with the claim that C++ is
> becoming too big and too complicated.
>
> However, C++20 has eroded this conviction of mine somewhat. C++23 is eroding
> it even more.
>
> C++11 felt like a big bunch of features that the language was in dire need
> of, and genuinely made programming easier. C++14 and C++17 fixed and patched
> many of the minor problems and defects that turned out to exist in C++11,
> so C++17 felt like "what C++11 should have been in the first place".
[...]

I was really excited and happy when C++ finally made atomics and membars
part of the actual standard, C++11 iirc. Before that, I would have to
code these things up in assembly language.

Öö Tiib

Nov 9, 2022, 3:22:09 AM
Before C++11 an element of vector had to be CopyAssignable and
CopyConstructible and that was constraining but doable.

After C++11 it was required to be Erasable and more constraints were split
between individual methods. At the same time the object lifetime grew dim
and complicated so it was unclear how the requirements can be met without
magic when just requiring Erasable. In C++14 it still felt fixable by correcting
some problems.

C++17 however added explicit undefined behaviors to nasty places (that
compilers can exploit in optimizations) and so made quite explicit that what
is required takes magic. The std::launder() added does look from afar that
it might help but on closer inspection does not. For example, Nicolai
Josuttis's paper P0532 elaborates how launder does not help at all, using
vector as the example.

David Brown

Nov 9, 2022, 3:29:24 AM
On 09/11/2022 00:39, Chris M. Thomasson wrote:
> On 11/8/2022 3:29 AM, Juha Nieminen wrote:
>> Malcolm McLean <malcolm.ar...@gmail.com> wrote:
>>> You keep on adding features to the language which have unintuitive
>>> syntax and odd rules, and
>>> don't do much to increase the number of programs you can write
>>> quickly. So what happens?
>>> Theres not much motivation to learn these features until forced to do
>>> so. So codebases tend
>>> to be mainly legacy, and C++ programmers' skills fall behind.
>>
>> For the longest time I quite strongly disagreed with the claim that
>> C++ is
>> becoming too big and too complicated.
>>
>> However, C++20 has eroded this conviction of mine somewhat. C++23 is
>> eroding
>> it even more.
>>
>> C++11 felt like a big bunch of features that the language was in dire
>> need
>> of, and genuinely made programming easier. C++14 and C++17 fixed and
>> patched
>> many of the minor problems and defects that turned out to exist in C++11,
>> so C++17 felt like "what C++11 should have been in the first place".

I agree with that.

A challenge for C++ is that even when a new and better feature is added,
the older and clumsier methods still have to be supported. This also
means that syntax can be awkward because it can't conflict with existing
syntax, and the details get more complex all the time.

>
> I was really excited and happy when C++ finally made atomics and membars
> part of the actual standard, C++11 iirc. Before that, I would have to
> code these things up in assembly language.
>

Standard atomics would be great if they worked for my targets. The gcc
implementations (and I haven't seen any others) for "advanced" use
(read-modify-write, or sizes larger than a standard register) are
completely broken for single-core systems, and even on multi-core
systems they are limited if you use thread priorities. The trouble with
them is that no one has addressed the elephant in the room - in general,
you need OS support and locks to implement large atomics.



Sam

Nov 9, 2022, 8:00:03 AM
Juha Nieminen writes:

> C++20, however, doesn't feel like this anymore. It has a few new features
> that genuinely help in programming, but most of it feels like just adding
> features for the sake of adding them. C++23 even moreso.

Or the features were specifically added to make sucky operating systems suck
a little less. Specifically: co-routines. Microsoft hijacked the
standardization process to push through co-routines, because real multiple
execution threads on MS-Windows blows chunks, and the OS can only implement
co-routines in a passable manner.

Scott Lurndal

Nov 9, 2022, 9:39:46 AM
IFF the target architecture doesn't have a comprehensive set of
atomic access instructions, perhaps.

ARMv8 LSE, for example, has individual instructions for most of the
gcc atomic intrinsics (e.g. __sync_fetch_and_add will generate a single
LDADD atomic instruction). The instructions support the common
arithmetic operations (add, or, etc).

Before LSE, the ARMv8 implementations were built using the arm
LL/SC equivalent (load exclusive/store exclusive) instructions.

David Brown

Nov 9, 2022, 11:06:14 AM
You are more familiar with the details of these things than most people,
so I hope you (or someone else) will correct me if my logic below is wrong.


There's no problem when the target has a single unbreakable instruction
for the action. And LL/SC are fine for atomic loads or stores of
different sizes.

But LL/SC is not sufficient for read-modify-write sequences of a size
larger than can be handled by a single atomic instruction.

Imagine you have a processor that can atomically read or write an
unsigned integer type "uint". Your sequence for "uint_inc" will be :

retry:
load link x = *p
x++
if (store conditional *p = x fails) goto retry


If two processes try this, they can interleave and be started or stopped
without trouble - the result will be an atomic increment.

Now consider a double-sized type containing two "uint" fields:

retry:
load link x_lo = *p
x_hi = *(p + 1)
x_lo++
if (!x_lo) x_hi++
if (store conditional *p = x_lo fails) goto retry
*(p + 1) = x_hi

If the process executing this is stopped after the first write, and a
second process is run that calls a similar function, then the new
process will see a half-changed value for the object resulting in a
corrupted object. Resumption of the first process will half-change the
value again. Different combinations of using "store_conditional" on the
two stores will result in similar problems.

The only way to make a multi-unit RMW operation work is if other
processes are /blocked/ from breaking in during the actual write
sequence. Reads and the calculation can be re-retried, but not the
writes - they must be made an unbreakable sequence. And that, in
general, means a lock and OS support to ensure that the locking process
gets to finish.


The gcc implementation of atomic operations (larger than can be handled
with a single instruction) uses simple user-space spin locks (the lock
can be accessed atomically - with an LL/SC sequence, for the ARM).

If one process tries to access the atomic while another process has the
lock, it will spin - running a busy wait loop. As long as these
processes are running on different cores, there's no problem with one
core running a few rounds of a tight loop while another core does a
quick load or store. Given that contention is rare and cores are often
plentiful, this results in a very efficient atomic operation. But it
can deadlock - a process could take the spin lock and then get
descheduled by the OS, and other threads wanting the lock could be
activated. If these fill up the cores (maybe you have multiple threads
all using the same supposedly lock-free atomic structure), you are screwed.

And if you have only one core (like almost all microcontrollers), and
the thread that has the lock is interrupted by an interrupt routine that
wants to access the same atomic variable, you are /really/ screwed.
This can happen with such simple code as a 64-bit atomic counter in an
interrupt routine that is also accessed atomically from a background task.


It's very unlikely that you'll hit a problem, but it is possible. To
me, that is useless - atomics need guaranteed forward progress. That
means the std::atomic<> stuff needs to use OS-level locks for advanced
cases that can't be handled directly by instructions or LL/SC sequences,
or for a microcontroller you'd want to disable interrupts around the
access. The alternative is to refuse to compile the operations and only
support atomics that are smaller or simpler.






Scott Lurndal

Nov 9, 2022, 12:59:16 PM
Here's the code generated by GCC for

q = __sync_fetch_and_add(&q, 1u);


Without LSE (atomics) support:

401034: 885ffc60 ldaxr w0, [x3]
401038: 11000401 add w1, w0, #0x1
40103c: 8804fc61 stlxr w4, w1, [x3]
401040: 35ffffa4 cbnz w4, 401034 <main+0x34>


With LSE (atomics) support:

12c: b8e10001 ldaddal w1, w1, [x0]

>
>But LL/SC is not sufficient for read-modify-write sequences of a size
>larger than can be handled by a single atomic instruction.

>
>Imagine you have a processor that can atomically read or write an
>unsigned integer type "uint". Your sequence for "uint_inc" will be :
>
>retry:
> load link x = *p
> x++
> if (store conditional *p = x fails) goto retry
>
>
>If two processes try this, they can interleave and be started or stopped
>without trouble - the result will be an atomic increment.
>
>Now consider a double-sized type containing two "uint" fields:
>
>retry:
> load link x_lo = *p
> x_hi = *(p + 1)
> x_lo++
> if (!x_lo) x_hi++
> if (store conditional *p = x_lo fails) goto retry
> *(p + 1) = x_hi

For such sequences, one uses the LL/SC as a spinlock;
acquire the spinlock, perform the non-atomic operation
and release the spinlock. On uniprocessor systems,
alternate mechanisms like disabling interrupts are the
common solution.

Although in this case, using a wider type if available is a
better option.
This is a typical priority inheritance problem.

>
>And if you have only one core (like almost all microcontrollers), and
>the thread that has the lock is interrupted by an interrupt routine that
>wants to access the same atomic variable, you are /really/ screwed.

To be fair, the programmer should be aware of these issues and not
use mechanisms subject to deadlock. As noted above, the typical
solution is to disable interrupts during a critical section.


>
>It's very unlikely that you'll hit a problem, but it is possible.

Famous last words, indeed.

Chris M. Thomasson

Nov 9, 2022, 2:47:23 PM
Are you referring to double-width compare-and-swap (DWCAS)? C++ should
be able to handle it directly using the processor's instruction set. Say,
C++ on a modern 64-bit x64 system: CMPXCHG16B should be used for a
double word. Double word in the sense that they are two _contiguous_
words. In other words, a lock-free CAS of a double word on a 64-bit x64
should use CMPXCHG16B.

Scott Lurndal

Nov 9, 2022, 3:44:10 PM
David works with low-end embedded processors, as I understand it, with
limited and/or restricted instruction sets.

David Brown

Nov 10, 2022, 3:52:49 AM
Yes.

But the principle is the same on bigger systems too. If your processor
can do a single-instruction 64-bit write, you see the problems for
atomics bigger than 64-bit. If it can handle 128-bit writes, you see
the problems for atomics bigger than 128-bit.

Obviously the need for big atomics is much lower than the need for
smaller ones. Once you have a DCAS, or LL/SC, you have covered most needs.

However, these alone will not give you read-modify-write operations on
anything bigger than you can handle with a single read (or more
importantly, with a single unbreakable write operation). Anything where
the implementation is "use small atomics to get a spin lock, then do the
work" is /broken/. It has a small but non-zero chance of failing in
general use on big multi-core systems. On small single-core systems, it
is guaranteed broken from the outset.

The C++ (and C) language, standard library, common toolchains and
library implementations give the programmer the impression that they can
make atomics as they like. You can write :

std::atomic<std::array<int, 32>> xs;

and it looks like you have a big atomic object. But it will not work -
you cannot rely on it. It will /seem/ to work in all your testing,
because the chance of hitting a problem is small - but it can fail at
any time.

The atomics that the programmer can use should either be absolutely
correct, guaranteed by design in all circumstances, or there should be
compile-time errors when you try to use atomics that are too big, or
where the operations are too complex, for the implementation to guarantee.

It would be even better for the implementation to handle these
correctly. That means OS support for /real/ locks, not fake
sort-of-works userland spin locks, but futexes or something like that
for big systems, and interrupt disabling for single-core
microcontrollers. (Dual-core microcontrollers are an extra
complication.) Common library implementations could rely on an extra
library or code for their "lock" and "unlock" calls - if they are not
provided, you at least have a link error.

Michael S

Nov 10, 2022, 6:07:13 AM
Yes, failing in compile time is the most reasonable.

> It would be even better for the implementation to handle these
> correctly. That means OS support for /real/ locks, not fake
> sort-of-works userland spin locks, but futexes or something like that
> for big systems, and interrupt disabling for single-core
> microcontrollers. (Dual-core microcontrollers are an extra
> complication.) Common library implementations could rely on an extra
> library or code for their "lock" and "unlock" calls - if they are not
> provided, you at least have a link error.

It certainly would be against the spirit of 'C'.
I'm not sure about its relationship to the spirit of C++.
Also I'm not sure that C++ has a spirit.

Michael S

Nov 10, 2022, 6:14:36 AM
In my book, Cortex-M (except M0) is not low end.
The M7 in particular is too big and too complicated not just for the
proverbial 99%, but for a solid 100% of my microcontroller needs.

Chris M. Thomasson

Nov 10, 2022, 2:53:18 PM
A rule of thumb... Imvho, always check the result of:

https://en.cppreference.com/w/cpp/atomic/atomic/is_lock_free

Just to be sure... ;^) Fwiw, DWCAS is very different from DCAS. The
latter can work on two non-contiguous words; the former only works with
contiguous words. A main reason for DWCAS to exist in the first place is
to be able to handle a lock-free stack: a pointer and a version count
to combat the ABA problem. Although, there are many other interesting
uses for DWCAS...

https://groups.google.com/g/comp.lang.c++/c/nUDtke-H1io/m/g87spoMUCgAJ


> The atomics that the programmer can use should either be absolutely
> correct, guaranteed by design in all circumstances, or they should not
> be compile-time errors when you try to use atomics that are too big, or
> where the operations are too complex, for the implementation to guarantee.
>
> It would be even better for the implementation to handle these
> correctly.  That means OS support for /real/ locks, not fake
> sort-of-works userland spin locks, but futexes or something like that
> for big systems, and interrupt disabling for single-core
> microcontrollers.  (Dual-core microcontrollers are an extra
> complication.)  Common library implementations could rely on an extra
> library or code for their "lock" and "unlock" calls - if they are not
> provided, you at least have a link error.
>

If the result of is_lock_free is not true, then you should really think
about digging into how the locking is actually implemented. Hash-based
address locking is one simple way to do it. Fwiw, I created one called
multi-mutex:

https://groups.google.com/g/comp.lang.c++/c/sV4WC_cBb9Q/m/Ti8LFyH4CgAJ

David Brown

Nov 10, 2022, 3:44:28 PM
And if it is not, what is the point in allowing it if the locks don't work?
Hash-based arrays of locks are as bad as a single lock for all atomics,
in that it does not work unless it is a proper OS lock. The larger your
array of locks, the lower your chances of problems, but it all comes
down to one thing - are your locks safe or not?



Chris M. Thomasson

Nov 10, 2022, 3:54:02 PM
Well, my multi-mutex uses std::mutex as elements of its vector of locks.
It hashes an address into said table. So, it's only as good as the
implementation of std::mutex...

std::vector<std::mutex> m_locks;

Fair enough?

Scott Lurndal

Nov 10, 2022, 4:20:47 PM
I'd point out that using a vector is sub-optimal if the various
elements of the vector are accessed from different cores due to
false sharing....

Chris M. Thomasson

Nov 10, 2022, 4:23:34 PM
Well, each mutex's state should really be isolated on its own cache
line: padding and alignment on an L2 cache-line boundary. Iirc, this can
be done in modern C++, even for the elements of a vector.

David Brown

Nov 11, 2022, 2:11:17 AM
As long as std::mutex is a wrapper for a real OS mutex, that will be
fine (after padding for cache line sizes).

The problem with the atomics library in gcc is not the array of locks or
the hashing on address, but that it doesn't use /real/ locks.


Öö Tiib

Nov 11, 2022, 4:46:07 AM
Yes. Libraries often look like they were sabotaged, and so whoever needs
quality and portability has to use conditional compilation to pick
between std::atomic<Something> and a manually mutex-protected
Something instance. Purpose defeated.