
Re: We've won!


Vir Campestris
Nov 16, 2018, 4:48:47 PM
On 15/11/2018 22:07, Stefan Ram wrote:
> AFAIK, now there C++ is reported to
> be faster than C! Finally after all those years and
> all those efforts by many contributors, C++ has won!

That's odd.

There shouldn't be anything you can do in C++ you can't do in C - albeit
with more pain.

Andy

Scott Lurndal
Nov 16, 2018, 4:51:56 PM
And "won"? It's not a contest.

Melzzzzz
Nov 16, 2018, 5:09:32 PM
How about implementing an efficient qsort function? C++ beats C there and
this is a std function...
>
> Andy


--
press any key to continue or any other to quit...

Chris M. Thomasson
Nov 16, 2018, 11:00:51 PM
Hardcore nests of pain. Fwiw, sometimes I use a very simplistic method
of interfaces in C:

https://pastebin.com/raw/f52a443b1

It is very minimalist, but it works for what I need it to work for.

:)

Juha Nieminen
Nov 17, 2018, 3:35:59 AM
Vir Campestris <vir.cam...@invalid.invalid> wrote:
> There shouldn't be anything you can do in C++ you can't do in C - albeit
> with more pain.

I'm not exactly sure how you would implement exceptions in C
(which, need I remind, needs to take RAII into account when
unwinding the stack).

I suppose something like it could be possible, but I'm not sure
how. Especially if it has to have zero overhead when exceptions
are not thrown.
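For what it's worth, the usual starting point for such an attempt is setjmp/longjmp. A minimal generic sketch (not from any poster's actual code), with the caveats Juha raises: nothing unwinds automatically, so the RAII part is missing and cleanup must be done by hand, and saving the context is not zero-cost on the non-throwing path:

```c
#include <setjmp.h>

/* One handler context; real code would keep a stack of these so that
   "throws" propagate to the nearest enclosing handler. */
static jmp_buf err_ctx;

static void may_fail(int fail)
{
    if (fail)
        longjmp(err_ctx, 1);   /* "throw": jump back to the setjmp site */
}

/* Returns 0 on the normal path, 1 if the "exception" was caught. */
int run(int fail)
{
    if (setjmp(err_ctx) != 0)
        return 1;              /* "catch": longjmp lands here */
    may_fail(fail);
    return 0;                  /* normal return, nothing thrown */
}
```

Note that any resources acquired between the setjmp and the longjmp leak unless freed explicitly before the jump, which is exactly the stack-unwinding problem mentioned above.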

Rick C. Hodgin
Nov 17, 2018, 6:32:32 AM
On Saturday, November 17, 2018 at 3:35:59 AM UTC-5, Juha Nieminen wrote:
> I suppose something like it could be possible, but I'm not sure
> how. Especially if it has to have zero overhead when exceptions
> are not thrown.

You have to maintain a separate manual stack that runs in parallel
to the machine stack, one which holds the current winding state
at each point in code. It allows structured access for recovery
at any point, including a new return to x; keyword. It allows for
nested exceptions / immediate or local exceptions, as well as deep
parent exceptions, simply.

--
Rick C. Hodgin

Vir Campestris
Nov 17, 2018, 12:33:11 PM
On 16/11/2018 22:09, Melzzzzz wrote:
> On 2018-11-16, Vir Campestris <vir.cam...@invalid.invalid> wrote:
>> On 15/11/2018 22:07, Stefan Ram wrote:
>>> AFAIK, now there C++ is reported to
>>> be faster than C! Finally after all those years and
>>> all those efforts by many contributors, C++ has won!
>>
>> That's odd.
>>
>> There shouldn't be anything you can do in C++ you can't do in C - albeit
>> with more pain.
>
> How about implementing an efficient qsort function? C++ beats C there and
> this is a std function...
>>

An efficient _generic_ qsort function should be much _easier_ in C++.
Templates, etc. But if you know what you're sorting and are willing to
put that into the sort code - I don't know why C++ should be faster.
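To make that concrete, here is an illustrative sketch of "putting what you're sorting into the sort code": the comparison compiles to a direct integer compare with no callback through a function pointer, which is effectively what std::sort gets from an inlined comparator. (Insertion sort only to keep the sketch short; a real replacement would use quicksort or introsort.)

```c
#include <stddef.h>

/* Type-specialised sort: the comparison a[j-1] > key is compiled
   directly into the loop, with no indirect call per comparison. */
static void sort_int(int *a, size_t n)
{
    for (size_t i = 1; i < n; ++i) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {   /* direct compare, no callback */
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;
    }
}
```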

Andy

Melzzzzz
Nov 17, 2018, 12:42:21 PM
Well, qsort in clib is much slower than C++ sort...

David Brown
Nov 18, 2018, 8:22:07 AM
That is for two reasons. One is that the standard C library qsort
function is generic - as Vir said, /generic/ functions can be more
efficient in C++ (at least while you want things to be clear,
maintainable, etc.), but a specialised sort for a specific type will be
as fast in C as in C++. The other issue is that the actual algorithm
for sorting is not specified - it is quite possible that the C++ library
you tested has a more efficient algorithm than the C library you tested.


Christian Gollwitzer
Nov 18, 2018, 1:29:49 PM
On 18.11.18 at 14:21, David Brown wrote:
> On 17/11/2018 18:42, Melzzzzz wrote:
>> On 2018-11-17, Vir Campestris <vir.cam...@invalid.invalid> wrote:
>>> On 16/11/2018 22:09, Melzzzzz wrote:
>>>> On 2018-11-16, Vir Campestris <vir.cam...@invalid.invalid> wrote:

>>>> How about implementing an efficient qsort function? C++ beats C there and
>>>> this is a std function...
>>>>>
>>>
>>> An efficient _generic_ qsort function should be much _easier_ in C++.
>>> Templates, etc. But if you know what you're sorting and are willing to
>>> put that into the sort code - I don't know why C++ should be faster.
>>
>> Well, qsort in clib is much slower than C++ sort...
>>
>
> That is for two reasons.  One is that the standard C library qsort
> function is generic - as Vir said, /generic/ functions can be more
> efficient in C++ (at least while you want things to be clear,
> maintainable, etc.),


I'm not even sure that this holds for a sort function using a callback.
The main difference between the C and the C++ version is that "qsort" is
defined in a separately compiled file, while std::sort is defined in the
header file and compiled simultaneously.

If you define qsort like

inline void qsort(void* base, size_t num, size_t size,
                  int (*compar)(const void*, const void*))
{ ... code ... }

*inside* a header file, and you call it with a fixed callback function,
then the compiler can inline the comparator in the same manner it
inlines it in C++ and I see no reason why it should be slower.
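That argument can be made concrete with a toy header-style sort. It is named my_qsort here (a hypothetical name, to avoid colliding with the library's qsort), and the body is a short insertion sort rather than a real quicksort; the point is only that both the sort and the fixed comparator are visible to the compiler at the call site, so the comparison can be inlined:

```c
#include <stddef.h>
#include <string.h>

/* Header-style generic sort: defined inline, so a compiler that sees
   the call with a fixed comparator can inline the comparison, just as
   it does for std::sort. */
static inline void my_qsort(void *base, size_t num, size_t size,
                            int (*compar)(const void *, const void *))
{
    char *b = base;
    for (size_t i = 1; i < num; ++i) {   /* insertion sort, for brevity */
        char tmp[size];                  /* C99 VLA scratch element */
        memcpy(tmp, b + i * size, size);
        size_t j = i;
        while (j > 0 && compar(b + (j - 1) * size, tmp) > 0) {
            memcpy(b + j * size, b + (j - 1) * size, size);
            --j;
        }
        memcpy(b + j * size, tmp, size);
    }
}

/* Fixed comparator, visible to the compiler at the call site. */
static int cmp_int(const void *pa, const void *pb)
{
    int a = *(const int *)pa, b = *(const int *)pb;
    return (a > b) - (a < b);
}
```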

Christian

David Brown
Nov 18, 2018, 1:40:21 PM
Yes, you could do that (with a compiler that is smart enough to inline
functions called through pointers). But then you would not be using the
qsort from the C library.

So this is all proving the same point - you /can/ get the same
performance from C as from C++, but it is harder to do so when you are
talking about generic functions.

Rick C. Hodgin
Nov 18, 2018, 6:23:34 PM
On Saturday, November 17, 2018 at 12:42:21 PM UTC-5, Melzzzzz wrote:
> Well, qsort in clib is much slower than C++ sort...

It's much slower still in Hungarian folk dance:

Quick-sort with Hungarian (Küküllőmenti legényes) folk dance
https://www.youtube.com/watch?v=ywWBy6J5gz8

--
Rick C. Hodgin

Pavel
Nov 18, 2018, 10:47:59 PM
Chris M. Thomasson wrote:
> On 11/16/2018 1:48 PM, Vir Campestris wrote:
>> On 15/11/2018 22:07, Stefan Ram wrote:
>>> AFAIK, now there C++ is reported to
>>>    be faster than C! Finally after all those years and
>>>    all those efforts by many contributors, C++ has won!
>>
>> That's odd.
>>
>> There shouldn't be anything you can do in C++ you can't do in C - albeit with
>> more pain.
>
> Hardcore nests of pain. Fwiw, sometimes I use a very simplistic method of
> interfaces in C:
>
> https://pastebin.com/raw/f52a443b1
That is actually similar to how kernel drivers are programmed (although
indirection via a vtable is not used often). What's sometimes useful (and
often abused) is that with this approach the programmer can alter the
vtable or the function pointers in the struct at runtime, at will -- as
opposed to C++ polymorphism with virtual functions, where these pointers
are changed by the implementation only at some fixed points, for no useful
reason, but just to annoy people and hurt performance :-( .
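A minimal sketch of that runtime rebinding, with hypothetical names loosely in the style of the pastebin interface (not Chris's actual code): the program swaps a device's vtable pointer mid-run, something C++ virtual dispatch does not let the programmer do directly.

```c
/* Hand-rolled vtable: just a struct of function pointers. */
struct device_vtable {
    int (*fp_read)(void *self);
};

struct device {
    const struct device_vtable *vtbl;   /* rebindable at runtime */
};

static int read_fast(void *self) { (void)self; return 1; }
static int read_slow(void *self) { (void)self; return 2; }

static const struct device_vtable fast_ops = { read_fast };
static const struct device_vtable slow_ops = { read_slow };

/* Dispatch through whatever vtable the object currently holds. */
int device_read(struct device *d)
{
    return d->vtbl->fp_read(d);
}
```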

Rick C. Hodgin
Nov 19, 2018, 4:06:33 AM
On Friday, November 16, 2018 at 11:00:51 PM UTC-5, Chris M. Thomasson wrote:
> Hardcore nests of pain. Fwiw, sometimes I use a very simplistic method
> of interfaces in C:
>
> https://pastebin.com/raw/f52a443b1

I see this in there:

struct device_prv_vtable {
    int (*fp_read) (void* const, void*, size_t);
    int (*fp_write) (void* const, void const*, size_t);
};

In fp_write(), what's the difference between:

void* const
void const*

?

--
Rick C. Hodgin

David Brown
Nov 19, 2018, 5:19:27 AM
"void * const" says that the /parameter/ (unnamed here, but it will have
a name in the definition of such a function) is a "const" and cannot be
modified by the function. The parameter type is a pointer-to-void
(pointer to anything, if you like).

So we have:

void test1(void * const p) {
    p++;
}

Error - increment of read-only pointer. (I think incrementing a void*
pointer is a gcc extension anyway, but that is beside the point here.)

But this is okay:

void test2(void * const p) {
    int * q = (int *) p;
    *q = 42;
}


> void const*

"void const *" says that the parameter is a pointer to a "void const" -
we don't know the type we are pointing to, but we are not supposed to
change it via the pointer.

Thus:

void test3(void const * p) {
    int * q = (int *) p;
    *q = 42;
}

is highly questionable, and gcc gives "warning: initialisation discards
'const' qualifier from pointer target type". It is not an error - it is
allowed behaviour if the original thing pointed to was not declared as
"const". But it is likely to be wrong, and likely to be contrary to the
expectations of the code using test3, so a warning is good.


Declaring a parameter itself to be "const", as in the first "void*
const", is very strange for normal pass-by-value parameters, since it is
basically useless. (It is useful and common for reference parameters).
It is also inconsistent between the parameters in the code you showed.
So I'd assume there is a mistake here somewhere.


>
> ?
>

Rick C. Hodgin
Nov 19, 2018, 8:15:44 AM
On Monday, November 19, 2018 at 5:19:27 AM UTC-5, David Brown wrote:
> On 19/11/18 10:06, Rick C. Hodgin wrote:
> > In fp_write(), what's the difference between:
> > void* const
>
> "void * const" says that the /parameter/ ... is a "const" and cannot be
> modified by the function. The parameter type is a pointer-to-void...
>
> So we have:
>
> void test1(void * const p) {
> p++;
> }
>
> Error - increment of read-only pointer. (I think incrementing a void*
> pointer is a gcc extension anyway, but that is beside the point here.)

Why wouldn't it be written as:

const void*

To indicate it's a pointer to void, and it's a constant? Is
that different than void* const?

--
Rick C. Hodgin

Juha Nieminen
Nov 19, 2018, 8:26:54 AM
Rick C. Hodgin <rick.c...@gmail.com> wrote:
> Why wouldn't it be written as:
>
> const void*
>
> To indicate it's a pointer to void, and it's a constant? Is
> that different than void* const?

"const void*" (or "void const*") is a pointer to a value that can't
be modified. The pointer itself can be modified to point somewhere else.

"void *const" is a pointer to a value that *can* be modified, but
the pointer itself can't be modified to point somewhere else.

It can get more complicated than that. Consider all of these, which
are different types:

int **p1;
int **const p2;
int *const *p3;
int const **p4;
int *const *const p5;
int const **const p6;
int const *const *p7;
int const *const *const p8;

It helps when you read the declaration from the right to the left.
For example:

int *const p1;

can be read as: "p1 is a const pointer to an int"

int const *p2;

can be read as: "p2 is a pointer to a const int"

(You can also read "const int *p2;", which means the same thing, as
"p2 is a pointer to an int that's const".)

Thus, for example:

int **const p2;

can be read as: "p2 is a const pointer to a pointer to an int".

int const *const *const p8;

can be read as: "p8 is a const pointer to a const pointer to a const int".

David Brown
Nov 19, 2018, 8:56:05 AM
They are different.

"const void *" is a pointer to a const void - a pointer to something
with unspecified type, but which you can't use to modify the thing
pointed to. (It's easier to describe "const int *", for example, which
is a pointer to a const int - i.e., you can't use the pointer to modify
the pointed-to int.)

"const void *" is the same as "void const *". Some people like to write
"const" after the thing it affects (which can be done consistently, even
for pointers), others like to write it before the thing it affects
(which makes more sense to read, but can't be done for pointers).

It takes a little practice to see what this all means, but you can get
the hang of it. If types get really complicated, I always prefer to use
typedef's to break them up. And adding a const qualifier to the
parameter itself (when not using references) is rarely helpful - it does
not affect the calling function at all, but makes the types harder to
read. So I would never write "void * const" as a parameter type.


woodb...@gmail.com
Nov 19, 2018, 12:46:56 PM
See:
http://slashslash.info/2018/02/a-foolish-consistency/
http://slashslash.info/petition

for more info.

> others like to write it before the thing it affects
> (which makes more sense to read, but can't be done for pointers).
>
> It takes a little practice to see what this all means, but you can get
> the hang of it. If types get really complicated, I always prefer to use
> typedef's to break them up.

Consider also a using statement:
https://stackoverflow.com/questions/10747810/what-is-the-difference-between-typedef-and-using-in-c11



Brian
Ebenezer Enterprises - In G-d we trust.
https://github.com/Ebenezer-group/onwards

Rick C. Hodgin
Nov 19, 2018, 3:36:47 PM
Thank you, Juha! Right-to-left is helpful.

--
Rick C. Hodgin

Rick C. Hodgin
Nov 19, 2018, 3:37:23 PM
I appreciate the explanation, David. Makes sense.

--
Rick C. Hodgin

Chris M. Thomasson
Nov 19, 2018, 5:56:35 PM
On 11/19/2018 5:55 AM, David Brown wrote:
> On 19/11/18 14:15, Rick C. Hodgin wrote:
>> On Monday, November 19, 2018 at 5:19:27 AM UTC-5, David Brown wrote:
>>> On 19/11/18 10:06, Rick C. Hodgin wrote:
>>>> In fp_write(), what's the difference between:
>>>> void* const
>>>
>>> "void * const" says that the /parameter/ ... is a "const" and cannot be
>>> modified by the function. The parameter type is a pointer-to-void...
[...]
> It takes a little practice to see what this all means, but you can get
> the hang of it. If types get really complicated, I always prefer to use
> typedef's to break them up. And adding a const qualifier to the
> parameter itself (when not using references) is rarely helpful - it does
> not affect the calling function at all, but makes the types harder to
> read. So I would never write "void * const" as a parameter type.

Fwiw, I put the const in the interface to attempt to get the point
across that this parameter shall never be mutated by a function. Sort of
a contract, documented within the API itself. The self parameters in my
code are basically akin to the this parameter in C++. The value of the
this pointer shall never be changed, just like the value of the self
pointer shall never be changed.

;^)

Chris Vine
Nov 19, 2018, 7:50:42 PM
Const pointers (as opposed to pointers to const) are quite rare in my
experience. If I want an unreseatable reference, I take the view that
that is what C++ references are for.

Chris M. Thomasson
Nov 20, 2018, 12:15:18 AM
Agreed. Actually, this was another little habit of mine wrt my "self"
pointer: it shall not be modified. Ack. ;^o

Chris M. Thomasson
Nov 20, 2018, 1:19:48 AM
On 11/19/2018 9:15 PM, Chris M. Thomasson wrote:
> On 11/19/2018 4:47 PM, Chris Vine wrote:
>> On Mon, 19 Nov 2018 14:55:51 +0100
>> David Brown <david...@hesbynett.no> wrote:
>>> On 19/11/18 14:15, Rick C. Hodgin wrote:
>>>> On Monday, November 19, 2018 at 5:19:27 AM UTC-5, David Brown wrote:
>>>>> On 19/11/18 10:06, Rick C. Hodgin wrote:
>>>>>> In fp_write(), what's the difference between:
>>>>>>      void* const
[...]
>> Const pointers (as opposed to pointers to const) are quite rare in my
>> experience.  If I want an unreseatable reference, I take the view that
>> that is what C++ references are for.
>>
>
> Agreed. Actually, this was another little habit of mine wrt my "self"
> pointer: it shall not be modified. Ack. ;^o

Well, here it goes again with my habit. Take a look at the self pointers
in my code for a fractal reverse Julia 2-ary plotter that creates a
text-based PPM image called "ct_cipher_rifc.ppm":

https://github.com/ChrisMThomasson/fractal_cipher/blob/master/RIFC/ct_bin_ppm.c

________________
// Compute the fractal
void
ct_ifs(
    struct ct_plane* const self,
    double complex z,
    double complex c,
    double ratio,
    unsigned long n
)
[...]
________________

Ummm, "struct ct_plane* const self"... ;^o

If you run it, a new file called "ct_cipher_rifc.ppm" will be created on
your system. So, be aware of that.

Good thing this is my own personal software. Yikes!

Jorgen Grahn
Nov 20, 2018, 1:49:46 AM
On Tue, 2018-11-20, Chris Vine wrote:
...
> Const pointers (as opposed to pointers to const) are quite rare in my
> experience. If I want an unseatable reference, I take the view that
> that is what C++ references are for.

I may use const pointers whenever I use pointers, just like I may use
const int whenever I use int. (I don't use pointers a lot, though.)

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

David Brown
Nov 20, 2018, 3:27:42 AM
On 19/11/18 23:56, Chris M. Thomasson wrote:
> On 11/19/2018 5:55 AM, David Brown wrote:
>> On 19/11/18 14:15, Rick C. Hodgin wrote:
>>> On Monday, November 19, 2018 at 5:19:27 AM UTC-5, David Brown wrote:
>>>> On 19/11/18 10:06, Rick C. Hodgin wrote:
>>>>> In fp_write(), what's the difference between:
>>>>> void* const
>>>>
>>>> "void * const" says that the /parameter/ ... is a "const" and cannot be
>>>> modified by the function. The parameter type is a pointer-to-void...
> [...]
>> It takes a little practice to see what this all means, but you can get
>> the hang of it. If types get really complicated, I always prefer to use
>> typedef's to break them up. And adding a const qualifier to the
>> parameter itself (when not using references) is rarely helpful - it does
>> not affect the calling function at all, but makes the types harder to
>> read. So I would never write "void * const" as a parameter type.
>
> Fwiw, I put the const in the interface to attempt to get the point
> across that this parameter shall never be mutated by a function. Sort of
> a contract, documented within the API itself.

Except for reference parameters, parameters are /never/ mutated by a
function. This is basic stuff - parameters in C are always pass by
value. If you think putting "const" in a non-reference parameter tells
the user something useful, then either /you/ misunderstand how
parameters work in C and C++, or you think your users misunderstand it.

For reference parameters, it is essential to distinguish between const
and non-const parameters - precisely because with pass-by-reference a
function /can/ mutate the calling function arguments.

So please do not add unnecessary const qualifiers to non-reference
parameters in a function. Put const on the pointed-to types to indicate
that those are not changed by the function - but don't put const on the
parameters themselves.

> The self parameters in my
> code are basically akin to the this parameter in C++. The value of the
> this pointer shall never be changed, just like the value of the self
> pointer shall never be changed.
>

Let me re-emphasise - this is pointless, because functions cannot change
parameters.

What would be /vastly/ more useful is to give sensible parameter names
in your function declarations. You have no "self" parameters in these
functions - you have anonymous parameters whose purpose is hidden.



Louis Krupp
Nov 20, 2018, 4:40:52 AM
On Tue, 20 Nov 2018 09:27:30 +0100, David Brown
<david...@hesbynett.no> wrote:

<snip>
>Except for reference parameters, parameters are /never/ mutated by a
>function. This is basic stuff - parameters in C are always pass by
>value. If you think putting "const" in a non-reference parameter tells
>the user something useful, then either /you/ misunderstand how
>parameters work in C and C++, or you think your users misunderstand it.
>
>For reference parameters, it is essential to distinguish between const
>and non-const parameters - precisely because with pass-by-reference a
>function /can/ mutate the calling function arguments.
>
>So please do not add unnecessary const qualifiers to non-reference
>parameters in a function. Put const on the pointed-to types to indicate
>that those are not changed by the function - but don't put const on the
>parameters themselves.

I worked with someone who added "const" to non-reference input-only
parameters. When I asked him why, he said it helped catch buggy code
that modified those parameters even if the actual function arguments
wouldn't change.

For example, with g++:

===
int square(const int x)
{
    x += 1;  // possibly a bug
    return x * x;
}
===
mnr1.cxx: In function ‘int square(int)’:
mnr1.cxx:3:10: error: assignment of read-only parameter ‘x’
x += 1; // possibly a bug
===

In Fortran, nobody would laugh at you for doing this even if the
caller's value of x wouldn't change:

===
integer function square(x)
    implicit none
    integer, value, intent(in) :: x

    x = x + 1  ! possibly a bug

    square = x * x
    return
end
===
mnr2.f90:5:0:

x = x + 1 ! possibly a bug

Error: Dummy argument ‘x’ with INTENT(IN) in variable definition
context (assignment) at (1)
===

although lots of people would laugh at you just for using Fortran.

FWIW.

Louis

Ben Bacarisse
Nov 20, 2018, 6:38:54 AM
I don't think you should make that exception for references either. In
fact, C++ forbids making a reference parameter const. You can make the
/target/ const, just as you can (and often should) for pointers, but the
parameter itself, no.

> For reference parameters, it is essential to distinguish between const
> and non-const parameters - precisely because with pass-by-reference a
> function /can/ mutate the calling function arguments.

I don't like that wording. References can never be const -- it makes no
sense. References may be to const- or non-const-qualified types. I know
these are just words, but I think they matter when explaining things.

Conceptually you can have both

void f(int const &a);
void f(int &const a);

just as you can have both

void f(int const *a);
void f(int *const a);

The first of those is not a const pointer no matter how many times
people call it one -- it's a pointer to const. Similarly, the first
reference example is not a const reference. The second one would be,
were it permitted (the syntax allows it of course).

<snip>
--
Ben.

David Brown
Nov 20, 2018, 6:40:05 AM
Putting "const" on the parameters in the function definition is a
different thing. If you find it helps avoid bugs in the code, fine.

> ===
> mnr1.cxx: In function ‘int square(int)’:
> mnr1.cxx:3:10: error: assignment of read-only parameter ‘x’
> x += 1; // possibly a bug
> ===
>
> In Fortran, nobody would laugh at you for doing this even if the
> caller's value of x wouldn't change:
>

I have not used Fortran myself, but I believe its parameters were passed
by reference, not value. In C++ it makes sense to mark reference
parameters as const if possible.


Chris Vine
Nov 20, 2018, 7:24:12 AM
On 20 Nov 2018 06:49:36 GMT
Jorgen Grahn <grahn...@snipabacken.se> wrote:
> On Tue, 2018-11-20, Chris Vine wrote:
> ...
> > Const pointers (as opposed to pointers to const) are quite rare in my
> > experience. If I want an unreseatable reference, I take the view that
> > that is what C++ references are for.
>
> I may use const pointers whenever I use pointers, just like I may use
> const int whenever I use int. (I don't use pointers a lot, though.)

Indeed you may. I have possibly not grasped your point, but mine was
that it would be rare to write:

int i;
int* const p = &i;

It would be more normal to write the immutable pointer in reference
form:

int& r = i;

and the same with function parameters.

"Rare" does not mean "never". You might want an immutable pointer if
you receive an object by pointer which you do not own and in due course
you are going to pass it to another function by pointer, and you want to
ensure (or document) that in the meantime you do not reseat the pointer
in your code.

David Brown
Nov 20, 2018, 7:42:05 AM
I thought it was clear that I meant using the const to indicate that the
passed-by-reference parameter was unchanged. But on re-reading, I see I
was not clear enough. If we use "int" as our basic type, rather than
pointers, I mean that you should not write:

void foo(int const x);

or

void foo(const int x);

because the "const" adds nothing useful to the declaration, and may be
confusing. (People may think it is a distinct overload from "void
foo(int x)", for example.)

But with references, it /is/ important:

void foo(int const &x);
or
void foo(const int &x);

is different from

void foo(int &x);


>
>> For reference parameters, it is essential to distinguish between const
>> and non-const parameters - precisely because with pass-by-reference a
>> function /can/ mutate the calling function arguments.
>
> I don't like that wording. References can never be const -- it makes no
> sense. References may be to const or non const-qualified types. I know
> this just words, but I think they matter when explaining things.
>
> Conceptually you can have both
>
> void f(int const &a);
> void f(int &const a);
>
> just as you can have both
>
> void f(int const *a);
> void f(int *const a);
>
> The first of those is not a const pointer no matter how many times
> people call it one -- it's a pointer to const. Similarly, the first
> reference example is not a const reference. The second one would be,
> were it permitted (the syntax allows it of course).
>

"void f(int & const a);" is not permitted, as far as I can see - from
N3797 (C++14) section 8.3.2:

> In a declaration T D where D has either of the forms
>    & attribute-specifier-seq_opt D1
>    && attribute-specifier-seq_opt D1
> and the type of the identifier in the declaration T D1 is
> “derived-declarator-type-list T,” then the type of the identifier of
> D is “derived-declarator-type-list reference to T.” The optional
> attribute-specifier-seq appertains to the reference type.
> Cv-qualified references are ill-formed except when the cv-qualifiers
> are introduced through the use of a typedef-name (7.1.3, 14.1) or
> decltype-specifier (7.1.6.2), in which case the cv-qualifiers are
> ignored.

gcc gives an error "const qualifiers cannot be applied to int&".

The reference itself cannot be changed after initialisation - that is a
key feature of a reference. So as you say, it makes no sense for the
reference to be "const", as it is always unchangeable.

I fully agree that it is important to distinguish "pointer to const"
from "const pointer" - these both exist, with different usages, and the
distinction is critical.

However, the C++ standard itself uses the term "const reference" as well
as "reference to const" for things of types such as "const int&". I can
appreciate that "reference to const" is more accurate, but "const
reference" is a common phrase that does not suffer from the ambiguity or
confusion you get with "const pointer".

Of course, you might not like the wording in the standard either :-)


Rick C. Hodgin
Nov 20, 2018, 8:13:46 AM
On Monday, November 19, 2018 at 8:56:05 AM UTC-5, David Brown wrote:
> "const void *" is a pointer to a const void - a pointer to something
> with unspecified type, but which you can't use to modify the thing
> pointed to. ...
>
> ... And adding a const qualifier to the
> parameter itself ... is rarely helpful - it does
> not affect the calling function at all, but makes the types harder to
> read. So I would never write "void * const" as a parameter type.

If I wanted a pointer that can't be altered, pointing to something
that can't be altered, what would I use?

const void* const p;

--
Rick C. Hodgin

David Brown
Nov 20, 2018, 8:25:06 AM
Yes, exactly.


Just to be clear - it is a good thing to make objects "const" inside
functions. You have to initialise them when you declare them, as you
can't assign to them later (IMHO it is usually a good thing to postpone
declaring local data until you are ready to initialise it). They help
stop mistakes and make it clearer to the reader (and writer!) which
objects are single valued, and which will change throughout the function.

It is not a bad idea to put "const" on parameters in a function
definition, if you think it helps catch errors.

But it is unnecessary to put "const" on parameters in a function
declaration, and can cause confusion.

James Kuyper
Nov 20, 2018, 9:20:28 AM
On 11/20/18 03:27, David Brown wrote:
> On 19/11/18 23:56, Chris M. Thomasson wrote:
...
>> Fwiw, I put the const in the interface to attempt to get the point
>> across that this parameter shall never be mutated by a function. Sort of
>> a contract, documented within the API itself.
>
> Except for reference parameters, parameters are /never/ mutated by a
> function.

I think you mean "arguments", not "parameters". See the definitions of
those terms in 1.3.2 and 1.3.15 of the standard.

int half(int count) { return count/2; }

int main(void) {
    int i = 10000;
    int j = half(i);
    return j==5000;
}

"count" is a parameter of the half function, and is modified by it. "i"
is the argument for a given call to that function, and is not modified
by that call.

A parameter is a variable local to a function, and I declare it 'const'
or not on the same basis as any other local variable: if it shouldn't be
changed from its initial value, I declare it 'const'.
However, that's an implementation detail that is of no relevance to the
user of the function. The function declaration that is used to call a
function is allowed to lack qualifiers on parameters that have those
qualifiers in the function definition, and it's conventional to take
advantage of that fact.
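A small illustration of that convention: the declaration callers see omits the top-level const, while the definition keeps it as a local safeguard. Both declare a compatible function type, since top-level qualifiers on parameters are ignored for type compatibility.

```c
/* Declaration, as it would appear in a header: no top-level const. */
int half(int count);

/* Definition: count is read-only within the function body, so an
   accidental "count /= 2;" style modification would not compile. */
int half(const int count)
{
    return count / 2;
}
```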

Ben Bacarisse
Nov 20, 2018, 9:31:40 AM
No, you were clear, and I was not confused about what you were saying. My
point was to suggest another way of saying it which I think has some
advantages.

> that I meant using the const to indicate that the
> passed-by-reference parameter was unchanged.

That's another usage that I don't like! I was taught to distinguish
between an argument and a parameter (though sometimes these are called
actual arguments and formal arguments). In my preferred wording "a
reference to const parameter makes it clear that the argument will not
be changed".

<snip>
> However, the C++ standard itself uses the term "const reference" as well
> as "reference to const" for things of types such as "const int&". I can
> appreciate that "reference to const" is more accurate, but "const
> reference" is a common phrase that does not suffer from the ambiguity or
> confusion you get with "const pointer".
>
> Of course, you might not like the wording in the standard either :-)

Indeed not! But it's used sparingly and the context (in the cases I
looked at) helps to avoid confusion.

--
Ben.

Ralf Goertz
Nov 20, 2018, 9:56:10 AM
On Tue, 20 Nov 2018 09:20:17 -0500,
James Kuyper <james...@alumni.caltech.edu> wrote:

> On 11/20/18 03:27, David Brown wrote:
> > On 19/11/18 23:56, Chris M. Thomasson wrote:
> ...
> >> Fwiw, I put the const in the interface to attempt to get the point
> >> across that this parameter shall never be mutated by a function.
> >> Sort of a contract, documented within the API itself.
> >
> > Except for reference parameters, parameters are /never/ mutated by a
> > function.
>
> I think you mean "arguments", not "parameters". See the definitions of
> those terms in 1.3.2 and 1.3.15 of the standard.
>
> int half(int count) { return count/2; }
>
> int main(void) {
> int i = 10000;
> int j = half(i);
> return j==5000;
> }
>
> "count" is a parameter of the half function, and is modified by it.


In what way is "count" modified? It would still have the same value
after the return (but of course it is destroyed) that it had when the
function was called. Changing the function definition to

int half(const int count) { return count/2; }

compiles fine.

David Brown
Nov 20, 2018, 10:05:36 AM
I expected that /you/ understood what I was saying, but that you thought
I could be clearer for other people. At least I could have been more
accurate in my wording.

>
>> that I meant using the const to indicate that the
>> passed-by-reference parameter was unchanged.
>
> That's another usage that I don't like! I was taught to distinguish
> between an argument and a parameter (though sometimes these are called
> actual arguments and formal arguments). In my preferred wording "a
> reference to const parameter makes it clear that the argument will not
> be changed".
>

Fair enough. The terms "argument" and "parameter" are often mixed up,
but your usage here is more precise than mine.

> <snip>
>> However, the C++ standard itself uses the term "const reference" as well
>> as "reference to const" for things of types such as "const int&". I can
>> appreciate that "reference to const" is more accurate, but "const
>> reference" is a common phrase that does not suffer from the ambiguity or
>> confusion you get with "const pointer".
>>
>> Of course, you might not like the wording in the standard either :-)
>
> Indeed not! But it's used sparingly and the context (in the cases I
> looked at) helps to avoid confusion.
>

"Const reference" is used more often than "reference to const" in the
standard. I'm still not convinced it makes a large difference - indeed,
I am quite convinced that the person asking the question understood the
answer. But I agree that careful terminology is better than loose
terminology even if both are understood.


Robert Wessel

unread,
Nov 20, 2018, 10:05:50 AM11/20/18
to
On Sun, 18 Nov 2018 19:29:38 +0100, Christian Gollwitzer
<auri...@gmx.de> wrote:

>Am 18.11.18 um 14:21 schrieb David Brown:
>> On 17/11/2018 18:42, Melzzzzz wrote:
>>> On 2018-11-17, Vir Campestris <vir.cam...@invalid.invalid> wrote:
>>>> On 16/11/2018 22:09, Melzzzzz wrote:
>>>>> On 2018-11-16, Vir Campestris <vir.cam...@invalid.invalid> wrote:
>
>>>>> How about implementing an efficient qsort function? C++ beats C there and
>>>>> this is std function...
>>>>>>
>>>>
>>>> An efficient _generic_ qsort function should be much _easier_ in C++.
>>>> Templates, etc. But if you know what you're sorting and are willing to
>>>> put that into the sort code - I don't know why C++ should be faster.
>>>
>>> Well, qsort in clib is much slower than C++ sort...
>>>
>>
>> That is for two reasons.  One is that the standard C library qsort
>> function is generic - as Vir said, /generic/ functions can be more
>> efficient in C++ (at least while you want things to be clear,
>> maintainable, etc.),
>
>
>I'm not even sure that this holds for a sort function using a callback.
>The main difference between the C and the C++ version is that "qsort" is
>defined in a separately compiled file, while std::sort is defined in the
>header file and compiled simultaneously.


If distributed in the proper format, even a separately compiled
qsort() should be handle-able by link time code generation, if passed
a fixed pointer to the comparison function (and the comparison
function is visible as well).

Unfortunately, to my knowledge, that's never been done, and even if
done, LTCG often is a bit conservative about that sort of thing,
especially with routines that can be a bit larger, like qsort().
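A hedged sketch of the difference being discussed (names here are illustrative): `qsort()` reaches its comparison only through a function pointer, while `std::sort`'s comparator is part of the template instantiation and can be inlined.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// qsort-style comparator: invoked via an indirect call per comparison.
static int cmp_int(const void* a, const void* b)
{
    const int x = *static_cast<const int*>(a);
    const int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}

// Sorts a copy each way and returns the result if both agree.
std::vector<int> sort_both(std::vector<int> v)
{
    std::vector<int> w = v;
    std::qsort(v.data(), v.size(), sizeof(int), cmp_int); // indirect calls
    std::sort(w.begin(), w.end());       // operator< can be fully inlined
    return v == w ? v : std::vector<int>{};
}
```

Both produce the same order; the performance gap comes from the compiler's ability to inline the comparison in the `std::sort` instantiation.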

james...@alumni.caltech.edu

unread,
Nov 20, 2018, 10:14:56 AM11/20/18
to
On Tuesday, November 20, 2018 at 9:56:10 AM UTC-5, Ralf Goertz wrote:
> Am Tue, 20 Nov 2018 09:20:17 -0500
> schrieb James Kuyper <james...@alumni.caltech.edu>:
...
> > I think you mean "arguments", not "parameters". See the definitions of
> > those terms in 1.3.2 and 1.3.15 of the standard.
> >
> > int half(int count) { return count/2; }
> >
> > int main(void) {
> >     int i = 10000;
> >     int j = half(i);
> >     return j==5000;
> > }
> >
> > "count" is a parameter of the half function, and is modified by it.
>
>
> In what way is "count" modified?

In a way that disappeared during the translation between what I had in my head and what I wrote in my message. I meant to write:

int half(int count)
{
    count /= 2;
    return count;
}

When writing my message, I automatically merged the two lines of the
body of that function, not noticing that I had destroyed the point I was
trying to make when I wrote it. I was in a hurry because I was running
late. Sorry for the confusion.

In real-life code, I've frequently modified parameters, and frequently
declared a parameter itself const (never both to the same parameter, of
course), so the distinction does matter.

Rick C. Hodgin

unread,
Nov 20, 2018, 10:25:23 AM11/20/18
to
On Tuesday, November 20, 2018 at 8:25:06 AM UTC-5, David Brown wrote:
> On 20/11/18 14:13, Rick C. Hodgin wrote:
> > If I wanted a pointer that can't be altered, pointing to something
> > that can't be altered, what would I use?
> >
> > const void* const p;
> >
>
> Yes, exactly.
>
>
> Just to be clear - it is a good thing to make objects "const" inside
> functions. You have to initialise them when you declare them, as you
> can't assign to them later (IMHO it is usually a good thing to postpone
> declaring local data until you are ready to initialise it). They help
> stop mistakes and make it clearer to the reader (and writer!) which
> objects are single valued, and which will change throughout the function.

I disagree with this design philosophy. I believe it's better to
declare everything in one place, and then assign it in a documented
init { } block.

The compiler can move the actual location of the initialization to
some other location that's more optimized for you. But in this way,
everything is grouped / encapsulated in its area, able to be docu-
mented and understood at first glance.

And in keeping with that philosophy, I think the ability to formally
indicate something is read-only should exist forward from that point
in code:

void my_function(void)
{
    int my_variable;    // Declared normally
    // Other variables here

    init
    {
        my_variable = populate_it() [|readonly|];
        // Other code here
    }
}

In this way, after it's assigned, it's marked read-only. It can-
not be set after this state is established. Depending on the
dynamics of the code at work using that variable, this is either
by known compiler state, or a hard internal flag.

Likewise, the ability for something to be locked down in that way
should also be able to be un-done by protocol as needed, either
for an instance change, or a revocation of the prior policy:

// One-time override
my_variable = new_value() [|writeallow|];

// From this point forward
my_variable [|readwrite|];

You should also be able to lock-down a variable when it's defined,
and then only write to it when you issue the writeallow override:

void my_function(void)
{
    [|readonly|] int my_variable;
    ...

No population into that variable is then allowed unless it is ac-
companied by the "paperwork" authorizing it, which is the direct
presence of the [|writeallow|] keyword. This would also be true
in the init { } block.

> It is not a bad idea to put "const" on parameters in a function
> definition, if you think it helps catch errors.
>
> But it is unnecessary to put "const" on parameters in a function
> declaration, and can cause confusion.

I see the need for const-like declarations. But I see the need
for them to be handled differently, and with a different protocol,
and with a different format / style applied for coding and long-
term maintenance by people.

I also think the idea of having a pointer that cannot have its
value altered ... is not a sound thing. I think pointers are a
particular thing and they and their properties exist for a reason,
and if you need something else that can't be shifted through tra-
ditional pointer manipulation ... use a reference, or even pass
it by value if the source doesn't need to be altered.

My $0.02.

--
Rick C. Hodgin

Bonita Montero

unread,
Nov 20, 2018, 11:18:27 AM11/20/18
to
> There shouldn't be anything you can do in C++ you can't do in C - albeit
> with more pain.

There's one thing you can't do in C with the same performance as with
C++: error handling. RAII with table-driven exception-handling has zero
overhead for the performance-relevant case that no exception is thrown.
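For contrast, a minimal sketch of what the C side of this looks like (the `load_pair()` function and file paths are hypothetical, written in the common C/C++ subset): every fallible call pays for an explicit status check and manual cleanup, even on the path where nothing fails, which is exactly the cost table-driven exceptions avoid on the non-throwing path.

```cpp
#include <cstdio>

// C-style error handling: checks and goto-based cleanup on every path.
int load_pair(const char* path_a, const char* path_b)
{
    std::FILE* a = nullptr;
    std::FILE* b = nullptr;
    int rc = -1;

    a = std::fopen(path_a, "rb");
    if (!a)
        goto out;                 // manual "unwinding", step 0
    b = std::fopen(path_b, "rb");
    if (!b)
        goto out;                 // manual "unwinding", step 1

    rc = 0;                       // ... real work would go here ...

out:
    if (b) std::fclose(b);
    if (a) std::fclose(a);
    return rc;
}
```

In C++, the two `FILE*` handles would be RAII objects and the checks on the success path could disappear entirely.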

Bonita Montero

unread,
Nov 20, 2018, 11:19:35 AM11/20/18
to
> I'm not exactly sure how you would implement exceptions in C
> (which, need I remind, needs to take RAII into account when
> unwinding the stack).

Hehe, two idiots, the same thought.

Rick C. Hodgin

unread,
Nov 20, 2018, 11:35:53 AM11/20/18
to
There are no runtime-based exception handling algorithms with zero
overhead, short of entirely static systems that are profiled.

In a dynamic system, there is always minimal overhead at work, as all
require a contextual observation of the stack at least ... and that
means additional writes, and pointer-access reads for those writes,
along with the associated storage, equating to minimal overhead.

--
Rick C. Hodgin

Jorgen Grahn

unread,
Nov 20, 2018, 1:17:06 PM11/20/18
to
On Tue, 2018-11-20, Chris Vine wrote:
> On 20 Nov 2018 06:49:36 GMT
> Jorgen Grahn <grahn...@snipabacken.se> wrote:
>> On Tue, 2018-11-20, Chris Vine wrote:
>> ...
>> > Const pointers (as opposed to pointers to const) are quite rare in my
>> > experience. If I want an un-reseatable reference, I take the view that
>> > that is what C++ references are for.
>>
>> I may use const pointers whenever I use pointers, just like I may use
>> const int whenever I use int. (I don't use pointers a lot, though.)
>
> Indeed you may.

Perhaps I should have written "I might". English is not my native
language.

> I have possibly not grasped your point,

My point was, whenever you use pointers, const pointers are going to
be useful, just like const int is useful when you use ints.

For example, I often parse strings and buffers by forming a range
[a, b) of pointers, and then working on that range:

const char* a = v.data();
const char* const b = a + v.size();
while(a != b) ...
// work on [a, b)

> but mine was that it would be rare to write:
>
> int i;
> int* const p = &i;
>
> It would be more normal to write the immutable pointer in reference
> form:
>
> int& r = i;
>
> and the same with function parameters.
>
> "Rare" does not mean "never".

True, and I'm not looking for an argument. Your "quite rare" was just
strong enough that I wanted to soften it up a bit.

> You might want an immutable pointer if you receive an object by
> pointer which you do not own and in due course you are going to pass
> it to another function by pointer, and you want to ensure (or
> document) that in the meantime you do not reseat the pointer in your
> code.

I'm thinking more about pointers into arrays -- I don't use pointers
much for anything else, or anything high-level like object handles.
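The [a, b) range pattern above, fleshed out into a complete function (`count_spaces` and the use of `std::string` are made-up illustrations, not from the thread):

```cpp
#include <cstddef>
#include <string>

// Parse a contiguous buffer via a half-open pointer range [a, b).
std::size_t count_spaces(const std::string& v)
{
    const char* a = v.data();
    const char* const b = a + v.size();  // b never moves: const pointer
    std::size_t n = 0;
    while (a != b) {                     // work on [a, b)
        if (*a == ' ')
            ++n;
        ++a;                             // only a walks the range
    }
    return n;
}
```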

Mr Flibble

unread,
Nov 20, 2018, 1:24:18 PM11/20/18
to
On 20/11/2018 08:27, David Brown wrote:
> Except for reference parameters, parameters are /never/ mutated

I assume you meant to type:

Except for reference parameters, arguments are /never/ mutated

I quite often mutate function parameters treating them as local function
variables.

/Flibble

--
“You won’t burn in hell. But be nice anyway.” – Ricky Gervais

“I see Atheists are fighting and killing each other again, over who
doesn’t believe in any God the most. Oh, no..wait.. that never happens.” –
Ricky Gervais

"Suppose it's all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That's what I would say."

Louis Krupp

unread,
Nov 20, 2018, 2:25:33 PM11/20/18
to
On Tue, 20 Nov 2018 12:39:56 +0100, David Brown
In its modern incarnation, Fortran can pass arguments by value, but
you have to specify that:

integer function square(x)
    implicit none
    integer, value, intent(in) :: x
    ...

Louis

Christian Gollwitzer

unread,
Nov 20, 2018, 2:32:44 PM11/20/18
to
Am 20.11.18 um 12:39 schrieb David Brown:
> On 20/11/18 10:40, Louis Krupp wrote:
>> For example, with g++:
>>
>> ===
>> int square(const int x)
>> {
>>     x += 1; // possibly a bug
>>     return x * x;
>> }
>
> Putting "const" on the parameters in the function definition is a
> different thing. If you find it helps avoid bugs in the code, fine.

Now I'm confused. It is OK to put const in the formal parameter to catch
bugs - I agree, and I did the same thing before. But it is not OK to
put it in the declaration? How should this work? Don't they need to match?

Christian

Jorgen Grahn

unread,
Nov 20, 2018, 3:03:07 PM11/20/18
to
Not in this respect, no. So you happily declare the function above as

int square(int x);

because putting the const there would serve no purpose; the caller
doesn't care if square can potentially modify its copy of x or not.

james...@alumni.caltech.edu

unread,
Nov 20, 2018, 3:26:25 PM11/20/18
to
No, they don't need to match:
9.2.3.5p5: "After producing the list of parameter types, any top-level cv-qualifiers modifying a parameter type are deleted when forming the function type."

Chris M. Thomasson

unread,
Nov 20, 2018, 5:05:47 PM11/20/18
to
Back in the day, I had to turn link time optimizations off for highly
sensitive synchronization algorithms. It was too scary to keep them on.

Alf P. Steinbach

unread,
Nov 20, 2018, 7:57:22 PM11/20/18
to
On 20.11.2018 19:24, Mr Flibble wrote:
> On 20/11/2018 08:27, David Brown wrote:
>> Except for reference parameters, parameters are /never/ mutated
>
> I assume you meant to type:
>
> Except for reference parameters, arguments are /never/ mutated
>
> I quite often mutate function parameters treating them as local function
> variables.

There's as yet no real distinction between "parameter" and "argument",
but some people do make such distinctions. One woman's "argument" is
another man's "parameter". And vice versa.

The Holy Standard seems to have a preference for "parameter" for
templates and "argument" for functions.

Some people say "argument" and "parameter" instead of "actual argument"
and "formal argument", or "actual parameter" and "formal parameter".

And presumably some people do the opposite.

Now that we have Wikipedia as a direction guide for the most conformist
segment of the population, which I believe constitutes about 98%, it's
likely that they will all soon fall into line and say the same. But
still it's IMO prudent to be lenient in what to accept, and precise in
one's own formulations. I like the "actual" and "formal" qualifications.


Cheers!,

- Alf

James Kuyper

unread,
Nov 20, 2018, 9:09:33 PM11/20/18
to
On 11/20/18 19:57, Alf P. Steinbach wrote:
> On 20.11.2018 19:24, Mr Flibble wrote:
>> On 20/11/2018 08:27, David Brown wrote:
>>> Except for reference parameters, parameters are /never/ mutated
>>
>> I assume you meant to type:
>>
>> Except for reference parameters, arguments are /never/ mutated
>>
>> I quite often mutate function parameters treating them as local function
>> variables.
>
> There's as yet no real distinction between "parameter" and "argument",
> but some people do make such distinctions.

Most importantly, the C++ standard makes such a distinction, and relies
upon that distinction to clearly specify its meaning. You can't
properly understand what the standard says using those words unless you
interpret them in the manner defined by that standard.

> One woman's "argument" is another man's "parameter". And vice versa.
>
> The Holy Standard seems to have a preference for "parameter" for
> templates and "argument" for functions.

No, the standard isn't Holy. It's authoritative, but because it's an
ISO standard, not because of anything of a religious nature, nor because
of any misguided belief that it's perfect or flawless.

The standard very explicitly defines the meanings of both "parameter"
and "argument", and it provides separate (and parallel) meanings for
both words for functions, function-like macros, templates, and
throw/catch code:

> 1.3.2
> [defns.argument]
> argument
> <function call expression> expression in the comma-separated list bounded by the parentheses (5.2.2)
> 1.3.3
> [defns.argument.macro]
> argument
> <function-like macro> sequence of preprocessing tokens in the comma-separated list bounded by the parentheses (16.3)
> 1.3.4
> [defns.argument.throw]
> argument
> <throw expression> the operand of throw (5.17)
> 1.3.5
> [defns.argument.templ]
> argument
> <template instantiation> constant-expression, type-id, or id-expression in the comma-separated list bounded
> by the angle brackets (14.3)
...
> 1.3.15
> [defns.parameter]
> parameter
> <function or catch clause> object or reference declared as part of a function declaration or definition or in
> the catch clause of an exception handler that acquires a value on entry to the function or handler
> 1.3.16
> [defns.parameter.macro]
> parameter
> <function-like macro> identifier from the comma-separated list bounded by the parentheses immediately
> following the macro name
> 1.3.17
> [defns.parameter.templ]
> parameter
> <template> template-parameter

> Some people say "argument" and "parameter" instead of "actual argument"
> and "formal argument", or "actual parameter" and "formal parameter".
>
> And presumably some people do the opposite.
>
> Now that we have Wikipedia as a direction guide for the most conformist
> segment of the population, which I believe constitutes about 98%, it's

It's not about being a conformist - it's about understanding the meaning
of the standard, and about communicating clearly about that meaning. If
you want to understand and be understood, your best bet is to stick to
the definitions provided by the C++ standard in any context where the
C++ standard is relevant. If you don't care about understanding or being
understood, go ahead and use the terms any way you wish.

Alf P. Steinbach

unread,
Nov 21, 2018, 12:18:24 AM11/21/18
to
This was not a discussion about the C++ standard.

C++ standardese literal text has its place, of course.

General programming discussion is usually not such a place.


> If
> you want to understand and be understood, your best bet is to stick to
> the definitions provided by the C++ standard in any context where the
> C++ standard is relevant.

I don't agree that the context under discussion is one where the literal
text of the C++ standard is important.

But considering such contexts: that still /depends/.

Consider

C++17 §16.3.3.1.2/1
<quote>
A user-defined conversion sequence consists of an initial standard
conversion sequence followed by a user-defined conversion (15.3)
followed by a second standard conversion sequence. If the user-defined
conversion is specified by a constructor (15.3.1), the initial standard
conversion sequence converts the source type to the type required by the
argument of the constructor.
</quote>

Here the “source type” in the last sentence is clearly the type of an
actual argument.

The last sentence can't therefore be talking about converting that type
to the type of the actual argument: it's already that type.

So “the argument of the constructor” must be referring to the
constructor's formal argument.

Hence, understanding the general programming terminology¹, and how very
far from clear-cut and unambiguous it is, is key to understanding at
least some parts of the C++ standard.

Because that standard is not quite perfect. ;-)


> If you don't care about understanding or being
> understood, go ahead and use the terms any way you wish.

I hope you see how irrelevant that advice is, now.

Cheers!,

- Alf

Notes:
¹ https://en.wikipedia.org/wiki/Parameter_(computer_programming)

David Brown

unread,
Nov 21, 2018, 2:18:43 AM11/21/18
to
In the parameters of a function declaration that is not a definition,
qualifiers ("const" and "volatile") are ignored. (This applies to the
parameter itself, not any other parts of the type. A
pointer-to-const-int is different from a pointer-to-int, but a "const
int" is treated the same as an "int".)

Still, you may prefer to keep the declaration and the definition exactly
the same. I do - I use the same parameter names in declarations for a
function as in the definition, and I use the same "const" (or rather,
lack of it).
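A small compile-time sketch of the rule described above (the `f`/`g` declarations are illustrative): top-level `const` on a parameter is erased from the function type, but `const` on a pointee is not.

```cpp
#include <type_traits>

void f(int);
void f(const int);      // redeclaration of the same f, not an overload

void g(int*);
void g(const int*);     // a genuinely different overload

static_assert(std::is_same<void(int), void(const int)>::value,
              "top-level const is erased from the function type");
static_assert(!std::is_same<void(int*), void(const int*)>::value,
              "const on the pointee is part of the type");
```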

Bonita Montero

unread,
Nov 21, 2018, 3:12:28 AM11/21/18
to
>> There's one thing you can't do in C with the same performance as with
>> C++: error handling. RAII with table-driven exception-handling has zero
>> overhead for the performance-relevant case that no exception is thrown.

> There are no runtime-based exception handling algorithms with zero
> overhead, short of entirely static systems that are profiled.

Correctly implemented, table-driven exception-handling has zero overhead.

Bonita Montero

unread,
Nov 21, 2018, 3:13:41 AM11/21/18
to
> There are no runtime-based exception handling algorithms with zero
> overhead, short of entirely static systems that are profiled.

Read this: http://www.ut.sco.com/developers/products/ehopt.pdf

David Brown

unread,
Nov 21, 2018, 4:16:56 AM11/21/18
to
On 20/11/18 15:20, James Kuyper wrote:
> On 11/20/18 03:27, David Brown wrote:
>> On 19/11/18 23:56, Chris M. Thomasson wrote:
> ...
>>> Fwiw, I put the const in the interface to attempt to get the point
>>> across that this parameter shall never be mutated by a function. Sort of
>>> a contract, documented within the API itself.
>>
>> Except for reference parameters, parameters are /never/ mutated by a
>> function.
>
> I think you mean "arguments", not "parameters". See the definitions of
> those terms in 1.3.2 and 1.3.15 of the standard.
>

Yes, that is what I should have written.

> int half(int count) { return count/2; }
>
> int main(void) {
>     int i = 10000;
>     int j = half(i);
>     return j==5000;
> }
>
> "count" is a parameter of the half function, and is modified by it. "i"
> is the argument for a given call to that function, and is not modified
> by that call.
>
> A parameter is a variable local to a function, and I declare it 'const'
> or not on the same basis as any other local variable: if it shouldn't be
> changed from its initial value, I declare it 'const'.
> However, that's an implementation detail that is of no relevance to the
> user of the function. The function declaration that is used to call a
> function is allowed to lack qualifiers on parameters that have those
> qualifiers in the function definition, and it's conventional to take
> advantage of that fact.
>

Agreed.

David Brown

unread,
Nov 21, 2018, 4:25:34 AM11/21/18
to
On 21/11/18 01:57, Alf P. Steinbach wrote:
> On 20.11.2018 19:24, Mr Flibble wrote:
>> On 20/11/2018 08:27, David Brown wrote:
>>> Except for reference parameters, parameters are /never/ mutated
>>
>> I assume you meant to type:
>>
>> Except for reference parameters, arguments are /never/ mutated
>>
>> I quite often mutate function parameters treating them as local
>> function variables.
>
> There's as yet no real distinction between "parameter" and "argument",
> but some people do make such distinctions. One woman's "argument" is
> another mans "parameter". And vice versa.
>

There is a distinction in the C standards - "argument" refers to the
things in the function call, while "parameter" refers to the things in
the function declaration and definition. The distinction has not always
been clear, different languages have different terms, and people (such
as myself) are not always accurate about following the terminology of
the language.

As far as I understand it, C++ standards follow the same distinction as
for C, but with more nuances. See section 1.3 of the standards.

> The Holy Standard seems to have a preference for "parameter" for
> templates and "argument" for functions.
>
> Some people say "argument" and "parameter" instead of "actual argument"
> and "formal argument", or "actual parameter" and "formal parameter".
>
> And presumably some people do the opposite.
>
> Now that we have Wikipedia as a direction guide for the most conformist
> segment of the population, which I believe constitutes about 98%, it's
> likely that they will all soon fall into line and say the same. But
> still it's IMO prudent to be lenient in what to accept, and precise in
> one's own formulations. I like the "actual" and "formal" qualifications.
>

"actual argument" and "formal parameter" would probably make things as
clear as they could be. Other than that, a little explanation of the
context always helps - if someone writes about the "arguments in the
function declaration", you know they actually mean parameters.


David Brown

unread,
Nov 21, 2018, 4:57:53 AM11/21/18
to
On 20/11/18 16:25, Rick C. Hodgin wrote:
> On Tuesday, November 20, 2018 at 8:25:06 AM UTC-5, David Brown wrote:
>> On 20/11/18 14:13, Rick C. Hodgin wrote:
>>> If I wanted a pointer that can't be altered, pointing to something
>>> that can't be altered, what would I use?
>>>
>>> const void* const p;
>>>
>>
>> Yes, exactly.
>>
>>
>> Just to be clear - it is a good thing to make objects "const" inside
>> functions. You have to initialise them when you declare them, as you
>> can't assign to them later (IMHO it is usually a good thing to postpone
>> declaring local data until you are ready to initialise it). They help
>> stop mistakes and make it clearer to the reader (and writer!) which
>> objects are single valued, and which will change throughout the function.
>
> I disagree with this design philosophy. I believe it's better to
> declare everything in one place, and then assign it in a documented
> init { } block.
>

Fair enough. You are not alone in preferring to declare variables
together at the start of a block (usually the start of a function), and
assign to them later rather than initialising them. Just be aware that
initialisation and assignment are different. They usually - especially
in C, and with an optimising compiler - have the same effect in
practice. But you can't assign to a "const" variable, you can only
initialise it.

And in C++, initialisation (construction) can be very different from
assignment. In particular, initialisation of objects can be a good deal
more efficient than default construction followed by assignment later
on. In C, something like "T x;" does not usually correspond to any
generated code, but in C++ it certainly can do.
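A small sketch of that difference (function names here are made up): for a class type like `std::string`, default construction followed by assignment does strictly more work than a single initialisation.

```cpp
#include <string>

// Default-construct, then assign: two separate operations.
std::string assign_style(const char* s)
{
    std::string t;   // construction (already generates code in C++)
    t = s;           // assignment on top of it
    return t;
}

// Initialise directly: one construction, and t may even be const.
std::string init_style(const char* s)
{
    const std::string t = s;
    return t;
}
```

Both return the same value; only the generated work differs.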

So you will find "declare all your variables at the top of the function"
style in C90 code, and amongst some C99/C11 programmers - but you will
rarely find it in C++ programming.

(I am not trying to persuade you here, just pointing out some issues for
your consideration.)

> The compiler can move the actual location of the initialization to
> some other location that's more optimized for you. But in this way,
> everything is grouped / encapsulated in its area, able to be docu-
> mented and understood at first glance.
>

I agree that the compiler can move things around for optimisation. As
for how much this affects documentation or understanding, I think the
diplomatic answer is "it's not that simple, and it will vary case by case".

> And in keeping with that philosophy, I think the ability to formally
> indicate something is read-only should exist forward that point in
> code:
>
> void my_function(void)
> {
>     int my_variable;    // Declared normally
>     // Other variables here
>
>     init
>     {
>         my_variable = populate_it() [|readonly|];
>         // Other code here
>     }
> }
>
> In this way, after it's assigned, it's marked read-only. It can-
> not be set after this state is established. Depending on the
> dynamics of the code at work using that variable, this is either
> by known compiler state, or a hard internal flag.

Well, for your language, you make the rules to suit yourself.
Personally, I think:

void my_function(void)
{
    const int my_variable = populate_it();
    // Other code here
}

is simpler, and clearer. It also makes it clear that "my_variable" is a
really bad name, since it is immediately obvious that it cannot vary.

Would it make a difference if there were lots of variables here, some of
them constant (or read-only)? I don't think so, but you may have a
different opinion.

>
> Likewise, the ability for something to be locked down in that way
> should also be able to be un-done by protocol as needed, either
> for an instance change, or a revocation of the prior policy:
>
> // One-time override
> my_variable = new_value() [|writeallow|];
>
> // From this point forward
> my_variable [|readwrite|];
>
> You should also be able to lock-down a variable when it's defined,
> and then only write to it when you issue the writeallow override:
>
> void my_function(void)
> {
>     [|readonly|] int my_variable;
> ...
>

I am lost here - I can't see the use-cases. If I want to say something
can't be changed, I define it as "const". I don't want to be able to
make it non-const later on. If I want a read-write version, I make a
new variable initialised to the const's value. Conversely, if I want to
say something can be changed, I define it without "const". If I want an
unchanging copy of it, I define a new "const" variable initialised from
the variable.

(This may just be another case where you want to give programmers
flexibility, and I like there to be more concrete rules.)

> No population into that variable is then allowed unless it is ac-
> companied by the "paperwork" authorizing it, which is the direct
> presence of the [|writeallow|] keyword. This would also be true
> in the init { } block.
>
>> It is not a bad idea to put "const" on parameters in a function
>> definition, if you think it helps catch errors.
>>
>> But it is unnecessary to put "const" on parameters in a function
>> declaration, and can cause confusion.
>
> I see the need for const-like declarations. But I see the need
> for them to be handled differently, and with a different protocol,
> and with a different format / style applied for coding and long-
> term maintenance by people.
>
> I also think the idea of having a pointer that cannot have its
> value altered ... is not a sound thing. I think pointers are a
> particular thing and they and their properties exist for a reason,
> and if you need something else that can't be shifted through tra-
> ditional pointer manipulation ... use a reference, or even pass
> it by value if the source doesn't need to be altered.
>

Sometimes you want to change where a pointer points, other times you
don't. const pointers have their place. It is certainly the case that
many, perhaps most, uses of const pointers in C can be replaced by
references in C++, but they are still not entirely pointless.

> My $0.02.
>

Rick C. Hodgin

unread,
Nov 21, 2018, 8:18:02 AM11/21/18
to
It's an interesting approach. I considered something similar,
but equated it to being a little too dangerous and risky. It's
better to have an observed runtime construction in my opinion.

And in those places / cases where performance is truly an issue,
don't use try..catch at every instance, but implement something
else that's custom with a larger outer parent try..catch to min-
imize the impact on oft-called functions. A set of enum constant
values can be used to populate a single member variable indicating
where the exception took place. Minimal overhead, and nothing
fancy required.

Note also:

The author of the PDF you linked cited that it does have some
performance overhead. It has a large footprint in RAM at all
times, removes some optimizations which may otherwise be em-
ployed in normal code (in the code that does not generate
exceptions), and has some additional slower performance in
processing exceptions in unwinding and recovery.

The author cites that some (rather extraordinary) effort can
be applied to overcome many of the non-exception-case performance
issues, but the memory footprint and slower exception handling
cannot be overcome.

IMO ... it's a questionable approach. But, I have no doubts
it would work. I just wouldn't use its design.

--
Rick C. Hodgin

Rick C. Hodgin

Nov 21, 2018, 8:27:58 AM
On Wednesday, November 21, 2018 at 4:57:53 AM UTC-5, David Brown wrote:
> (I am not trying to persuade you here, just pointing out some issues for
> your consideration.)

I understand. I still stand behind the philosophy.

> Well, for your language, you make the rules to suit yourself.
> Personally, I think:
>
> void my_function(void)
> {
> const int my_variable = populate_it();
> // Other code here
> }
>
> is simpler, and clearer...

To me, the use of "const" can be confusing. Suppose it
were const int* my_variable = populate_it(); ... it would
be easy to become confused about whether the value pointed
to by the pointer is a constant, or whether the pointer
value itself is a constant. That confusion was in my mind
a few days ago. It's still a little murky. :-)

I believe a different approach is appropriate: Pointers
should always be non-constant. And the values they point
to only should ever be read-only or read-write, and I'm
trying to convince myself to make a use case for write-
only as well.

> It also makes it clear that "my_variable" is a
> really bad name, since it is immediately obvious that it cannot vary.

It's still a variable. It's not a fixed quantity. It is
populated externally, and used here as that value.

> > Likewise, the ability for something to be locked down in that way
> > should also be able to be un-done by protocol as needed, either
> > for an instance change, or a revocation of the prior policy:
> >
> > // One-time override
> > my_variable = new_value() [|writeallow|];
> >
> > // From this point forward
> > my_variable [|readwrite|];
> >
> > You should also be able to lock-down a variable when it's defined,
> > and then only write to it when you issue the writeallow override:
> >
> > void my_function(void)
> > {
> > [|readonly|] int my_variable;
> > ...
> >
>
> I am lost here - I can't see the use-cases. If I want to say something
> can't be changed, I define it as "const". I don't want to be able to
> make it non-const later on. If I want a read-write version, I make a
> new variable initialised to the const's value. Conversely, if I want to
> say something can be changed, I define it without "const". If I want an
> unchanging copy of it, I define a new "const" variable initialised from
> the variable.

I believe the const/non-const state should be a policy
when the input may be in read/write memory anyway. The
only fixed cases are when it's in read-only memory. And
in such a case, I don't think "const" is a good idea,
but maybe [|locked|] or something, to indicate the value
is locked and cannot be changed even with an override.

There's probably a legal term for such a thing. I can
remember in the Bible there were laws issued or decreed which
could not be revoked once issued. I don't remember if they
had a special name though. I'm sure there's one somewhere
that could replace "locked" with the proper term.

> (This may just be another case where you want to give programmers
> flexibility, and I like there to be more concrete rules.)

The rules here would be concrete. You just have to follow
policy / protocol.

> > I also think the idea of having a pointer that cannot have its
> > value altered ... is not a sound thing. I think pointers are a
> > particular thing and they and their properties exist for a reason,
> > and if you need something else that can't be shifted through tra-
> > ditional pointer manipulation ... use a reference, or even pass
> > it by value if the source doesn't need to be altered.
>
> Sometimes you want to change where a pointer points, other times you
> don't. const pointers have their place. It is certainly the case that
> many, perhaps most, uses of const pointers in C can be replaced by
> references in C++, but they are still not entirely pointless.

My approach is that pointers are just numbers, and if you
have a value that you don't want to change, don't use a
pointer, but use a commensurately sized unsigned integer,
and then do the direct compare.

Pointers are for pointing to data, and they should always
be able to be manipulated. If you need something that
should not be manipulated, don't use a pointer. Use
something else.

--
Rick C. Hodgin

James Kuyper

Nov 21, 2018, 9:15:48 AM
On 11/21/18 00:18, Alf P. Steinbach wrote:
> On 21.11.2018 03:09, James Kuyper wrote:
>> On 11/20/18 19:57, Alf P. Steinbach wrote:
>>> On 20.11.2018 19:24, Mr Flibble wrote:
>>>> On 20/11/2018 08:27, David Brown wrote:
>>>>> Except for reference parameters, parameters are /never/ mutated
>>>>
>>>> I assume you meant to type:
>>>>
>>>> Except for reference parameters, arguments are /never/ mutated
...
>>> Now that we have Wikipedia as a direction guide for the most conformist
>>> segment of the population, which I believe constitutes about 98%, it's
>>
>> It's not about being a conformist - it's about understanding the meaning
>> of the standard, and about communicating clearly about that meaning.
>
> This was not a discussion about the C++ standard.

This group is for discussing C++, the rules of which are set by that
standard. The context of your comment was an incorrect statement of one
of those rules. It was claimed, incorrectly, that function parameters
are never mutated.
The correct statement is that function parameters may freely be mutated
unless declared "const"; it's function arguments that a function cannot
mutate directly. I can't imagine a clearer example of the necessity of
clearly distinguishing between parameters and arguments, and of doing so
in a manner consistent with the standard's definitions of those terms.

...
> Consider
>
> C++17 §16.3.3.1.2/1
> <quote>
> A user-defined conversion sequence consists of an initial standard
> conversion sequence followed by a user-defined conversion (15.3)
> followed by a second standard conversion sequence. If the user-defined
> conversion is specified by a constructor (15.3.1), the initial standard
> conversion sequence converts the source type to the type required by the
> argument of the constructor.
> </quote>
>
> Here the “source type” in the last sentence, is clearly the type of an
> actual argument.
>
> The last sentence can't therefore be talking about converting that type
> to the type of the actual argument: it's already that type.
>
> So “the argument of the constructor” must be referring to the
> constructor's formal argument.
>
> Hence, understanding ¹the general programming terminology, and how very
> far from clear-cut unambiguous it is, is key to understanding at least
> some parts of the C++ standard.
>
> Because that standard is not quite perfect. ;-)

No, as you say, the standard is not perfect. Correcting that section to
refer to the parameter rather than the argument is the key to making it
more clearly understandable. Taking liberties in interpreting those
words can lead only to more misunderstandings down the road, even if
those liberties happen to give you a correct understanding in this case.

>> If you don't care about understanding or being
>> understood, go ahead and use the terms any way you wish.
>
> I hope you see how irrelevant that advice is, now.

No, I see how relevant it is to precisely the cases you presented as
counter-examples.

David Brown

Nov 21, 2018, 3:57:30 PM
Table-driven exception handling is by far the most common system used in
modern C++ compilers. It takes extra code space for the unwind tables -
these are rarely an issue (except for constraint-limited embedded
systems). Yes, you get some optimisation limitations, such as
limitations on the amount of re-ordering and moving the compiler can do,
especially on object construction and destruction. But you get that
kind of limitation with any C++ exception implementation - when
compiling code which has an external call, the compiler does not know if
the function can throw an exception, and must order code on the
assumption that it might.

The common alternative strategy for C++ exceptions is to store lists of
destructors on the stack. This avoids the need to generate long tables
in code, but uses more run-time memory and stack space, and means fewer
functions can skip having a stack frame. It has higher run-time speed
costs for code that does not throw an exception, but works faster when
an exception is thrown.


Öö Tiib

Nov 22, 2018, 6:51:09 AM
The result of table-driven exception handling is faster overall than failure
checking with branches when failures happen rarely (once among
thousands of successes). That is paid for in the size of the generated code.

> Yes, you get some optimisation limitations, such as
> limitations on the amount of re-ordering and moving the compiler can do,
> especially on object construction and destruction. But you get that
> kind of limitation with any C++ exception implementation - when
> compiling code which has an external call, the compiler does not know if
> the function can throw an exception, and must order code on the
> assumption that it might.

If the compiler does not know about potential exceptions, then it also
does not know about other potential side effects.

> The common alternative strategy for C++ exceptions is to store lists of
> destructors on the stack. This avoids the need to generate long tables
> in code, but uses more run-time memory and stack space, and means fewer
> functions can skip having a stack frame. It has higher run-time speed
> costs for code that does not throw an exception, but works faster when
> an exception is thrown.

Since exceptions are part of a function's interface, the mechanism
(and so the handling strategy) should be part of the ABI. The ABI
is likely to pick one, since allowing several in parallel would
likely be the most expensive option.

Robert Wessel

Nov 22, 2018, 11:23:12 AM
That sounds more like problems with the synchronization routines.

There would not be any worse problems with LTCG than if you happened
to move the code for the routines into a single source file, and
compiled that, or you used GCC (or similar) in the mode where it can
compile several source files at once.

Now it's easy to see how such a thing might happen. For example, a
routine might depend on not being specialized and optimized into its
caller so that it executes the required globally visible operations
completely and in the desired order, but that only works if you're
assuming things about the C implementation that are *not* required by
the standard (or, often, most implementations).

IOW, code that breaks because you turn LTCG on is just broken. Just
like code that fails when you turn (ordinary) optimization on.

Compiler bugs excepted, of course.

Chris M. Thomasson

Nov 22, 2018, 3:49:14 PM
I just turned them off because of paranoia.


> There would not be any worse problems with LTCG than if you happened
> to move the code for the routines into a single source file, and
> compiled that, or you used GCC (or similar) in the mode where it can
> compile several source files at once.
>
> Now it's easy to see how such a thing might happen. For example, a
> routine might depend on not being specialized and optimized into its
> caller so that it executes the required globally visible operations
> completely and in the desired order, but that only works if you're
> assuming things about the C implementation that are *not* required by
> the standard (or, often, most implementations).
>
> IOW, code that breaks because you turn LTCG on, is just broken. Just
> like code that fails when you turn (ordinary) optimization on.

There was no portable language guarantee about memory barriers and
atomic operations back then. I made heavy use of assembly language in
separately assembled files (no inline). LTCG scared me.


> Compiler bugs excepted, of course.
>

Well, there was a problem with a GCC optimization that broke a POSIX
mutex. It was an old post on comp.programming.threads:

https://groups.google.com/d/topic/comp.programming.threads/Y_Y2DZOWErM/discussion

Pavel

Nov 23, 2018, 2:51:30 AM
I have never read that discussion before and got interested by your mentioning
of it. I found the claims of people who believed that acquiring POSIX mutex is
somehow supposed to protect some "associated" variable highly dubious. Just in
case, I double-checked and could not find any such guarantee in my copy of POSIX
standard (The Open Group Base Specifications Issue 6 IEEE Std 1003.1, 2004
Edition).

AFAIK (and I have been using POSIX synchronization primitives for decades), the
only thing that successful locking of a posix mutex by a thread guarantees is
that no other thread has or will have that mutex locked until the first thread
releases it. How to use this guarantee is entirely up to the programmer. In
particular, I believe that the OP's code was buggy if the intention was to
maintain a total count of successful locks by all threads in `acquires_count'
variable, exactly because the programmer did not take responsibility for memory
access sequencing. Respectively, IMHO the code optimization by GCC was
absolutely legal.

David Brown

Nov 23, 2018, 4:02:56 AM
I agree. C has sequence points for /logical/ ordering of operations
according to the abstract machine, but these do not need to be followed
in the actual generated code. Only observable behaviour needs to follow
a specific order (until the 5.1.2.4 "Multi-threaded execution" was
introduced in C11). So with the "trylock" function as given, the
compiler can't rely on the value of "acquires_count" read before the
call to "pthread_mutex_trylock", because that call may change the global
variable. However, after the call returns, the compiler knows it has
full control of the system, and that it can do as it likes with any
non-volatile variables here. It is free to read acquires_count before
checking res, and it is free to write it as it wants. It can read it
and write it as many times as it likes, with whatever values, regardless
of "res", as long as the last value written by end of the function
matches the value expected by the abstract machine.

int trylock()
{
int res;
res = pthread_mutex_trylock(&mutex);
if (res == 0)
++acquires_count;
return res;
}

The answer, of course, is to use "volatile" (or now "atomic") accesses
on acquires_count. Alternatively, OS-specific or compiler specific
equivalents can be used, like __sync_fetch_and_add, memory barriers, etc.


Chris M. Thomasson

Nov 23, 2018, 4:23:59 AM
On 11/23/2018 1:02 AM, David Brown wrote:
> On 23/11/18 08:51, Pavel wrote:
>> Chris M. Thomasson wrote:
>>> On 11/22/2018 8:23 AM, Robert Wessel wrote:
>>>> On Tue, 20 Nov 2018 14:05:37 -0800, "Chris M. Thomasson"
>>>> <invalid_chr...@invalid.invalid> wrote:
>>>>
>>>>> On 11/20/2018 7:05 AM, Robert Wessel wrote:
>>>>>> On Sun, 18 Nov 2018 19:29:38 +0100, Christian Gollwitzer
>>>>>> <auri...@gmx.de> wrote:
>>>>>>
>>>>>>> Am 18.11.18 um 14:21 schrieb David Brown:
>>>>>>>> On 17/11/2018 18:42, Melzzzzz wrote:
>>>>>>>>> On 2018-11-17, Vir Campestris <vir.cam...@invalid.invalid> wrote:
>>>>>>>>>> On 16/11/2018 22:09, Melzzzzz wrote:
>>>>>>>>>>> On 2018-11-16, Vir Campestris <vir.cam...@invalid.invalid> wrote:
[...]
> int trylock()
> {
> int res;
> res = pthread_mutex_trylock(&mutex);
> if (res == 0)
> ++acquires_count;
> return res;
> }
>
> The answer, of course, is to use "volatile" (or now "atomic") accesses
> on acquires_count. Alternatively, OS-specific or compiler specific
> equivalents can be used, like __sync_fetch_and_add, memory barriers, etc.

Well, wrt C11 and mtx_trylock and a return value of thrd_success,
acquires_count does not need any special decorations wrt the standard.
Period. Imvho, POSIX should strive for the same protections...


David Brown

Nov 23, 2018, 5:14:45 AM
Yes, it needs special consideration. A mutex lock is not a general
barrier - it is an acquire operation that synchronises with other mutex
operations on the same mutex. That is /all/. "acquires_count" should
be atomic, and use atomic operations. Without atomic, or volatile, the
compiler can do exactly the same sort of optimisations.

In fact, using C11 mtx_trylock() allows even more optimisations than
pthread_mutex_trylock() would do. The compiler knows exactly what
mtx_trylock() does, since it is in the standard library - it could use
that knowledge to see that the call could not possibly read or write
acquires_count - and then hoist the reading of acquires_count above the
call to mtx_trylock().

> Period. Imvho, POSIX should strive for the same protections...
>

Well, since you are wrong (AFAIUI) about the guarantees you get from C11
mutex functions, this point is moot. But if you /did/ get extra
guarantees from mtx_trylock(), then it would be due to special features
of those functions in the standard, and a general normal function
unknown to the compiler cannot get the same features. The nearest you
could get was to have an atomic_thread_fence() call in the function, but
that only affects atomic operations.

Scott Lurndal

Nov 23, 2018, 10:50:34 AM
Bonita Montero <Bonita....@gmail.com> writes:
>> There shouldn't be anything you can do in C++ you can't do in C - albeit
>> with more pain.
>
>There's one thing you can't do in C with the same performance like with
>C++: error handling. RAII with table-driven exception-handling has zero
>overhead for the performance-relevant case that no exception is thrown.

For some value of zero greater than zero. There's always overhead,
whether it's instruction cache footprint, more instructions executed
or poor locality of reference.

You can't get more efficient than longjmp().

Chris M. Thomasson

Nov 23, 2018, 6:12:14 PM
[...]

No. acquires_count does not have to be atomic at all. A C11 mtx_t
provides a standard mutex. In C++11, its equivalent is std::mutex.

C11
https://en.cppreference.com/w/c/thread/mtx_lock
https://en.cppreference.com/w/c/thread

C++11
https://en.cppreference.com/w/cpp/thread/mutex/lock

You do not seem to understand how a C11 mutex works. It is part of the
language now. Variables protected by the mutex do NOT need to be atomic
at all, no volatile or any crap like that. No extra memory barriers are
needed at all.
____________________
// assumes a mtx_t mutex already initialized elsewhere with mtx_init()
// vars protected by the critical section.
int a = 0;
short b = 1;
double c = 0.5;

if (mtx_trylock(&mutex) == thrd_success)
{
// we are in a locked region.
a += b + 1;
b += 2;
c += 1.6;

int status = mtx_unlock(&mutex);
assert(status == thrd_success);
}
____________________

a, b and c do not need to be atomic types at all. No extra memory
barriers are needed. Period.

David Brown

Nov 23, 2018, 6:50:33 PM
I know that.
I've read these. I've read the standards too, which are authoritative
(though I don't see anything wrong in those linked pages, and my
experience is that cppreference.com is very accurate).

I don't see anything in these that suggests that there is any kind of
synchronisation or ordering enforced on a non-atomic non-volatile
variable just because you have a mutex function. The mtx_lock()
function synchronises with other mtx_ functions on the same mutex. That
is /all/.

And in particular, even if mtx_trylock were defined to be a fence, and
even if the fence were defined to apply to all memory operations, not
just atomic ones (which is how C11 and C++11 fences are defined), it
still would not affect the validity of transforming:

if (res) {
++acquires_count;
}

into

auto tmp = acquires_count;
tmp += res ? 1 : 0;
acquires_count = tmp;


If I am wrong, show me /exactly/ which paragraphs of the C11 or C++11
standards make this wrong.

>
> C++11
> https://en.cppreference.com/w/cpp/thread/mutex/lock
>
> You do not seem to understand how a C11 mutex works. It is part of the
> language now. Variables protected by the mutex do NOT need to be atomic
> at all, no volatile or any crap like that. No extra memory barriers are
> needed at all.

There are no variables "protected" by a mutex. Mutexes do not "protect"
variables. They are locks, which are synchronisation primitives. You
can /use/ a mutex to protect access to a variable, but it does not
happen by itself. Think about it - how is "mutex" supposed to "know"
that it is protecting "acquires_count"? How is "acquires_count"
supposed to "know" that it is protected by "mutex"? These are
independent variables - they are unrelated.

And in particular, the code

if (res) {
++acquires_count;
}

is run whether the mutex is acquired or not. Is the mutex so magical
that it protects and controls this code even when you fail to get the lock?


> ____________________
> // vars protected by the critical section.
> int a = 0;
> short b = 1;
> double c = 0.5;
>

Do you mean these variables to be at file-scope, while the rest of the
code is within a function? If the variables are local to a function
they will be eliminated entirely by the optimiser.

> if (mtx_trylock(&mutex) == thrd_success)
> {
>    // we are in a locked region.
>    a += b + 1;
>    b += 2;
>    c += 1.6;
>
>    int status = mtx_unlock(&mutex);
>    assert(status == thrd_success);
> }
> ____________________
>
> a, b and c do not need to be atomic types at all. No extra memory
> barriers are needed. Period.

Wrong. And there is no "period" here - clearly the discussion is not over.

In practice, this will work, since there is little to be gained by
moving things around in a different way. But the standards and the
description of the mtx functions do not guarantee it.





Melzzzzz

Nov 23, 2018, 7:17:24 PM
In practice, this always works... you have to provide *one* example
where this does not work, and then you have a point. Any claim
that quantifies over all cases - in mathematics, say - can be
rebutted with a single counterexample...
>
>
>
>
>


--
press any key to continue or any other to quit...

David Brown

Nov 23, 2018, 8:13:26 PM
On 24/11/2018 01:17, Melzzzzz wrote:
> On 2018-11-23, David Brown <david...@hesbynett.no> wrote:
>> On 24/11/2018 00:12, Chris M. Thomasson wrote:
<snip>
No, the onus is on Chris to provide a justification for claiming the
code here works as he thinks it does. That should be by pointing to the
relevant paragraphs in the standards (preferably C11, as it is smaller
and easier to navigate, but C++11 or later would be fine). Failing
that, clear documentation in a compiler reference would be useful, as
would some sort of official "C++11 memory model" paper.

Otherwise "it worked when I tried it" is not worth the pixels it is
written on.

Chris M. Thomasson

Nov 23, 2018, 8:17:43 PM
C11 and C++11 have guarantees on mutex operations. The variables they
protect do not need any special decorations. Fwiw, a lock on a
std::mutex basically implies acquire semantics. An unlock implies
release. A conforming C11 or C++11 compiler shall honor the semantics of
a mutex. Period. Anything else, is non-conforming. End of story.

James Kuyper

Nov 24, 2018, 12:25:16 AM
On 11/23/18 20:17, Chris M. Thomasson wrote:
...
> C11 and C++11 have guarantees on mutex operations. The variables they
> protect do not need any special decorations.

What specifies which variables they protect? What is the nature of the
protection that they provide to those variables? I've reviewed every
line of the standard containing the word "mutex" without seeing any hint
of an answer to either of those questions - what did I miss?

> ... Fwiw, a lock on a

Alf P. Steinbach

Nov 24, 2018, 12:42:11 AM
On 24.11.2018 06:25, James Kuyper wrote:
> On 11/23/18 20:17, Chris M. Thomasson wrote:
> ...
>> C11 and C++11 have guarantees on mutex operations. The variables they
>> protect do not need any special decorations.
>
> What specifies which variables they protect? What is the nature of the
> protection that they provide to those variables? I've reviewed every
> line of the standard containing the word "mutex" without seeing any hint
> of an answer to either of those questions - what did I miss?

Mutexes are used to provide exclusive access to variables.

It's up to the programmer to establish the guard relationship.


>> ... Fwiw, a lock on a
>> std::mutex basically implies acquire semantics. An unlock implies
>> release. A conforming C11 or C++11 compiler shall honor the semantics of
>> a mutex. Period. Anything else, is non-conforming. End of story.

Cheers & hth.,

- Alf

Tim Rentsch

Nov 24, 2018, 3:51:52 PM
"Alf P. Steinbach" <alf.p.stein...@gmail.com> writes:

> [.. distinction between the terms "parameter" and "argument" ..]
>
> Consider
>
> C++17 16.3.3.1.2/1
> <quote>
> A user-defined conversion sequence consists of an initial standard
> conversion sequence followed by a user-defined conversion (15.3)
> followed by a second standard conversion sequence. If the user-defined
> conversion is specified by a constructor (15.3.1), the initial
> standard conversion sequence converts the source type to the type
> required by the argument of the constructor.
> </quote>
>
> Here the "source type" in the last sentence, is clearly the type
> of an actual argument.
>
> The last sentence can't therefore be talking about converting that
> type to the type of the actual argument: it's already that type.
>
> So "the argument of the constructor" must be referring to the
> constructor's formal argument.

I don't think so. The quoted paragraph uses the term "argument"
and that does mean "argument" in its usual sense, not "parameter"
(or "formal argument" if someone prefers that term). I think
you may have missed a subtle but important point being made
here. Here is an example to illustrate (disclaimer: just typed
in, not compiled, though I have compiled similar examples):

class Foo {
public:
Foo( const long & ){}
};

int
takes_foo( Foo foo ){
(void) &foo; /* explicitly ignore the value of 'foo' */
return 47;
}

...

int
main(){
return takes_foo( 0 ) != 47;
}

The return statement in main() makes use of a user-defined
conversion sequence that is specified by a constructor. The
source type (for 0) is 'int'. The type of the (unnamed and
ignored) constructor parameter is 'const long &'. There must be
a conversion (or several) to get from the source type to the type
of the constructor parameter. The quoted paragraph says "the
initial standard conversion sequence converts the source type to
the type required by the argument of the constructor". The type
required by the argument of the constructor is (in this case)
'long'. The reason it is 'long' and not 'const long &' is that a
standard conversion sequence cannot get all the way from 'int' to
'const long &', only to 'long'. The definition of "standard
conversion sequence" is given in section 7, paragraph 1 (with
sub-paragraphs 1.1, 1.2, 1.3, and 1.4). Notice that it does not
include any conversions to get from a prvalue to the parameter
type 'const long &'. That step is done by a "temporary
materialization conversion", defined in 7.4. So the quoted
sentence, in talking about "the initial standard conversion
sequence", is referring to an intermediate _argument_ to the
constructor, said argument only subsequently being converted to
the type of the constructor parameter. There are three types:

the source type - int
the standard conversion sequence result type - long
the parameter type - const long &

The quoted paragraph is referring to the second of these, which
is the type of an intermediate argument to the constructor, not
the type of the parameter in the constructor's definition.

Chris M. Thomasson

Nov 24, 2018, 7:30:56 PM
On 11/23/2018 9:25 PM, James Kuyper wrote:
> On 11/23/18 20:17, Chris M. Thomasson wrote:
> ...
>> C11 and C++11 have guarantees on mutex operations. The variables they
>> protect do not need any special decorations.
>
> What specifies which variables they protect? What is the nature of the
> protection that they provide to those variables? I've reviewed every
> line of the standard containing the word "mutex" without seeing any hint
> of an answer to either of those questions - what did I miss?

They basically give them acquire, for lock, and release semantics for
unlock, just like the standard memory barrier functions in C++11 and C11.

Take a look at:

https://en.cppreference.com/w/cpp/atomic/memory_order


>> ... Fwiw, a lock on a
>> std::mutex basically implies acquire semantics. An unlock implies
>> release. A conforming C11 or C++11 compiler shall honor the semantics of
>> a mutex. Period. Anything else, is non-conforming. End of story.

Fwiw, the following program has no undefined behavior wrt threading:

Take a good look at ct_shared... Its variables wrt members
ct_shared::data_[0...2] do not need any special atomics, or membars
whatsoever because it uses a std::mutex to protect itself. It is nice
that this is all 100% standard now:
_________________________________
#include <iostream>
#include <functional>
#include <mutex>
#include <thread>
#include <cassert>


#define THREADS 7
#define N 123456


// Shared Data
struct ct_shared
{
std::mutex mtx;
unsigned long data_0;
unsigned long data_1;
unsigned long data_2;
};


// A thread...
void ct_thread(ct_shared& shared)
{
for (unsigned long i = 0; i < N; ++i)
{
shared.mtx.lock();
// we are locked!
shared.data_0 += i;
shared.data_1 += i;
shared.data_2 += i;
shared.mtx.unlock();

std::this_thread::yield();

shared.mtx.lock();
// we are locked!
shared.data_0 -= i;
shared.data_1 -= i;
std::this_thread::yield(); // for fun...
shared.data_2 -= i;
shared.mtx.unlock();
}
}


int main(void)
{
ct_shared shared;

// init
shared.data_0 = 1;
shared.data_1 = 2;
shared.data_2 = 3;

// launch...
{
std::thread threads[THREADS];

// create
for (unsigned long i = 0; i < THREADS; ++i)
{
threads[i] = std::thread(ct_thread, std::ref(shared));
}

std::cout << "processing...\n\n";
std::cout.flush();

// join...
for (unsigned long i = 0; i < THREADS; ++i)
{
threads[i].join();
}
}

std::cout << "shared.data_0 = " << shared.data_0 << "\n";
std::cout << "shared.data_1 = " << shared.data_1 << "\n";
std::cout << "shared.data_2 = " << shared.data_2 << "\n";

assert(shared.data_0 == 1);
assert(shared.data_1 == 2);
assert(shared.data_2 == 3);

return 0;
}
_________________________________


No undefined behavior. Why do you think that ct_shared::data_[0...2]
should be specially decorated?

Chris M. Thomasson

Nov 24, 2018, 7:38:59 PM
On 11/24/2018 4:30 PM, Chris M. Thomasson wrote:
> On 11/23/2018 9:25 PM, James Kuyper wrote:
>> On 11/23/18 20:17, Chris M. Thomasson wrote:
>> ...
>>> C11 and C++11 have guarantees on mutex operations. The variables they
>>> protect do not need any special decorations.
>>
>> What specifies which variables they protect? What is the nature of the
>> protection that they provide to those variables? I've reviewed every
>> line of the standard containing the word "mutex" without seeing any hint
>> of an answer to either of those questions - what did I miss?
>
> They basically give them acquire, for lock, and release semantics for
> unlock, just like the standard memory barrier functions in C++11 and C11.
[...]

Fwiw, the C11 and C++11 standard wrt atomics and membars allows one to
build custom mutex logic. I love that this is in the language now. Fwiw,
Alexander Terekhov built a nice one over on comp.programming.threads
many years ago, before all of this was standard. Over two decades? I
will try to find the original post.

Chris Vine

Nov 24, 2018, 8:05:25 PM
On Sat, 24 Nov 2018 02:13:15 +0100
Mutexes do, as you say, provide mutual exclusion to any piece of code
which is only accessible by locking the mutex in question. However,
they do more than that. They also synchronize the values held in
memory locations: in particular, locking a mutex is an acquire operation
and unlocking it is a release operation. It is guaranteed that the
values of any variables in the program as they existed no earlier than
at the time of the unlocking of a mutex by one thread will be visible
to any other thread which subsequently locks the same mutex. An
operation with acquire semantics is one which does not permit
subsequent memory operations to be advanced before it, and an operation
with release semantics is one which does not permit preceding memory
operations to be delayed past it, as regards the two threads
synchronizing on the same synchronization object.

Non-normatively, for mutexes this is offered by §1.10/5 of C++11:

"Note: For example, a call that acquires a mutex will perform
an acquire operation on the locations comprising the mutex.
Correspondingly, a call that releases the same mutex will perform a
release operation on those same locations. Informally, performing a
release operation on A forces prior side effects on other memory
locations to become visible to other threads that later perform a
consume or an acquire operation on A."

The normative (and more hard-to-read) requirement for mutexes is in
§30.4.1.2/11 and §30.4.1.2/25 ("synchronizes with") read with §1.10/11
and §1.10/12 ("happens before") and §1.10/13 ("visible side effect")
of C++11.

Posix mutexes have the same effect although this is much more
incoherently expressed. Section 4.12 of the SUS says (without further
explanation) in referring to mutex locking and unlocking operations
(amongst other similar operations) that "The following functions
synchronize memory with respect to other threads". In practice posix
mutexes behave identically to C/C++ mutexes.
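That guarantee can be illustrated with a small sketch (my own example, not from either standard): the data is deliberately plain, and visibility follows purely from the lock/unlock pairing.

```cpp
#include <cassert>
#include <mutex>
#include <thread>

std::mutex m;
long plain_data = 0;  // deliberately neither atomic nor volatile

void writer() {
    std::lock_guard<std::mutex> g(m);  // lock(): acquire operation
    plain_data = 42;
}                                      // unlock() at scope exit: release

long reader() {
    std::lock_guard<std::mutex> g(m);  // this lock synchronizes with
    return plain_data;                 // the writer's earlier unlock
}
```

Once the writer's unlock happens-before the reader's lock, the reader is guaranteed to see 42 in the plain variable.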

Alf P. Steinbach

Nov 24, 2018, 9:58:03 PM
In the last sentence above you are yourself using the word “argument”
about a formal argument.

The actual argument is `0`. There is nothing “required by” the `0`.

The formal argument is `const long&`. There is a type required by the
`const long&`. In the given context, that type is `long`.


> The reason it is 'long' and not 'const long &' is that a
> standard conversion sequence cannot get all the way from 'int' to
> 'const long &', only to 'long'. The definition of "standard
> conversion sequence" is given in section 7, paragraph 1 (with
> sub-paragraphs 1.1, 1.2, 1.3, and 1.4). Notice that it does not
> include any conversions to get from a prvalue to the parameter
> type 'const long &'. That step is done by a "temporary
> materialization conversion", defined in 7.4. So the quoted
> sentence, in talking about "the initial standard conversion
> sequence", is referring to an intermediate _argument_ to the
> constructor, said argument only subsequently being converted to
> the type of the constructor parameter. There are three types:
>
> the source type - int
> the standard conversion sequence result type - long
> the parameter type - const long &
>
> The quoted paragraph is referring to the second of these, which
> is the type of an intermediate argument to the constructor, not
> the type of the parameter in the constructor's definition.

The last sentence I quoted from the C++17 standard, specified the result
type of the initial conversion sequence:

“If the user-defined conversion is specified by a constructor
(15.3.1), the initial standard conversion sequence converts the source
type to the type required by the argument of the constructor.”

You get into infinite recursion, a totally meaningless statement worthy
of membership in the International Tautology Club, if you define the
result type of the initial conversion sequence as (the result type of
the initial conversion sequence). That is what you strongly suggest by
saying the specification of that result type refers to the second of
the three possibilities you list, where the second is "the standard
conversion sequence result type", which is evidently intended to mean
the result type of the initial conversion sequence.

In an ISO standard such infinite recursion would be a defect.

David Brown

Nov 25, 2018, 10:21:44 AM
Sorry, Chris, but your "proof by repeated assertion" is not good enough.
Your "Period. End of story." is just like sticking your fingers in
your ears and saying "La, la, la, I'm not listening". It shows that you
are unwilling to think about the situation or read the relevant standards.

I've read plenty of your posts here over the years, and you are
experienced with multi-threading and multi-processing. It surprises me
greatly to hear your attitude here. You know fine that in the world of
multi-threading, "it works when I tried it" is /not/ good enough. Code
can work as desired in millions of tests, and then fail at a critical
juncture in practice. You have to /know/ the code is correct. You have
to /know/ the standards guarantee particular behaviour regarding
ordering and synchronisation - you can't just guess because it looked
okay on a couple of tests, and it would be convenient to you if it worked.

David Brown

Nov 25, 2018, 10:31:04 AM
On 24/11/2018 06:42, Alf P. Steinbach wrote:
> On 24.11.2018 06:25, James Kuyper wrote:
>> On 11/23/18 20:17, Chris M. Thomasson wrote:
>> ...
>>> C11 and C++11 have guarantees on mutex operations. The variables they
>>> protect do not need any special decorations.
>>
>> What specifies which variables they protect? What is the nature of the
>> protection that they provide to those variables? I've reviewed every
>> line of the standard containing the word "mutex" without seeing any hint
>> of an answer to either of those questions - what did I miss?
>
> Mutexes are used to provide exclusive access to variables.
>

Mutexes are used to provide exclusive access to a lock. That is all.

> It's up to the programmer to establish the guard relationship.
>

Exactly.

And that means using appropriate synchronisation operations and atomic
operations.

Melzzzzz

Nov 25, 2018, 10:44:10 AM
On 2018-11-25, David Brown <david...@hesbynett.no> wrote:
> On 24/11/2018 06:42, Alf P. Steinbach wrote:
>> On 24.11.2018 06:25, James Kuyper wrote:
>>> On 11/23/18 20:17, Chris M. Thomasson wrote:
>>> ...
>>>> C11 and C++11 have guarantees on mutex operations. The variables they
>>>> protect do not need any special decorations.
>>>
>>> What specifies which variables they protect? What is the nature of the
>>> protection that they provide to those variables? I've reviewed every
>>> line of the standard containing the word "mutex" without seeing any hint
>>> of an answer to either of those questions - what did I miss?
>>
>> Mutexes are used to provide exclusive access to variables.
>>
>
> Mutexes are used to provide exclusive access to a lock. That is all.
>
>> It's up to the programmer to establish the guard relationship.
>>
>
> Exactly.
>
> And that means using appropriate synchronisation operations and atomic
> operations.

So mutexes are not appropriate synchronisation?

David Brown

Nov 25, 2018, 11:01:54 AM
Mutexes are certainly appropriate synchronisation. What is wrong is to
assume that just because you have locked a mutex, then all normal
(non-atomic, non-volatile) accesses that are inside the locked section
in the source code, are executed entirely within the locked section in
practice.

And it is also wrong to think that just because you sometimes access a
variable within a locked section, that the variable is not accessed when
you don't have a lock.

Remember the original code under discussion here:

int res = trylock(&mutex);
if (res == 0)
    ++acquires_count;

There is /nothing/ in that to suggest that "acquires_count" will not be
accessed if the lock is not acquired.

Use atomic accesses, or volatile accesses, as appropriate.
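In C++ terms, the atomic fix might look like the sketch below (my own example; the counter name follows the earlier snippet, and I unlock immediately just to keep the sketch self-contained):

```cpp
#include <atomic>
#include <cassert>
#include <mutex>

std::mutex mtx;
std::atomic<int> acquires_count{0};  // atomic: a compiler-invented
                                     // access could not tear or race

// Count successful try_lock acquisitions. The increment sits only on
// the success path, and making the counter atomic removes any doubt
// about speculative accesses on the failure path.
bool try_lock_and_count() {
    if (mtx.try_lock()) {
        acquires_count.fetch_add(1, std::memory_order_relaxed);
        mtx.unlock();
        return true;
    }
    return false;
}
```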

Bonita Montero

Nov 25, 2018, 11:17:41 AM
> You can't get more efficient then longjmp().

LOL!

Melzzzzz

Nov 25, 2018, 11:28:39 AM
Hm, I always thought that volatile is useless for multithreading...
If you use atomics you don't need mutexes.
So what's the purpose of mutexes then?

Chris Vine

Nov 25, 2018, 12:30:04 PM
This thread is getting horribly confused.

Volatile is useless for multithreading. In C and C++ (as opposed to
Java and C#), volatile does not synchronize. If you need to
synchronize, use a mutex, an atomic variable or a fence.

Chris Thomasson can speak for himself, but it seems clear to me that in
the example under discussion and his subsequent example (his posting of
24 Nov 2018 16:30:43 -0800), he was taking it as a given that every read
or write access to acquire_count and to his data_0, data_1 and data_2
variables was (as written by the programmer) within a locking of the
same mutex. That is the only reasonable explanation of his postings,
and is how I read them. It also seems clear enough to me that that was
the case also for the gcc bug posting to which we were referred (the
reference to dropped increments was not to do with the fact that
acquires_count is not an atomic variable, but to the fact that the
compiler was reordering access to it outside the mutex for optimization
purposes in a way forbidden by the C++ standard, although arguably not
by posix).

In such a case, using an atomic is pointless. It just results in a
doubling up of fences or whatever other synchronization primitives the
implementation uses.

Sure, if not all accesses to a protected variable are within a mutex,
then it needs to be atomic. But if that is the case there is probably
something wrong with the design. You should not design code which
requires such a doubling up of synchronization approaches, and I cannot
immediately visualize a case where that would be sensible.

David Brown

Nov 25, 2018, 3:34:25 PM
On 25/11/2018 01:30, Chris M. Thomasson wrote:
> On 11/23/2018 9:25 PM, James Kuyper wrote:
>> On 11/23/18 20:17, Chris M. Thomasson wrote:
>> ...
>>> C11 and C++11 have guarantees on mutex operations. The variables they
>>> protect do not need any special decorations.
>>
>> What specifies which variables they protect? What is the nature of the
>> protection that they provide to those variables? I've reviewed every
>> line of the standard containing the word "mutex" without seeing any hint
>> of an answer to either of those questions - what did I miss?
>
> They basically give them acquire, for lock, and release semantics for
> unlock, just like the standard memory barrier functions in C++11 and C11.
>
> Take a look at:
>
> https://en.cppreference.com/w/cpp/atomic/memory_order
>

Almost everything here describes synchronisation and ordering amongst
/atomic/ accesses. Fences do affect non-atomic accesses, but you need
to be much more careful about how the non-atomic variables are handled.
In particular, there is nothing about fences (or mutexes) that stops
other accesses to the non-atomic variables. And in the case of the
original example here, the code after the "trylock" call runs whether
the lock is taken or not - and if it is not taken, there is no fence.
Note that this is /totally/ different from the original example. Here,
you have a lock - in the earlier example, you might or might not have
the lock. The problem situation earlier came from the possibility of
having a write operation even when the lock was not taken.
The ordering enforced by the fences in the mutex acquire and release
operations affects the /accesses/ to the variables - it does not lock or
protect the variables themselves. This is important to understand,
especially when considering non-atomic non-volatile variables that can
have other accesses generated by the compiler optimisations.

Certainly the C11 and C++11 standards have made this sort of thing
clearer and easier. But do not make the mistake of thinking it has
suddenly become easy - you still need to think long and hard about
things, especially if you are using data that is not atomic.
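The point that ordering attaches to the /accesses/ rather than the variables can be seen in the classic release/acquire publication pattern (my own sketch): only the plain writes sequenced before the release store are covered.

```cpp
#include <atomic>
#include <cassert>

int payload = 0;                 // plain, non-atomic data
std::atomic<bool> ready{false};

void publish() {
    payload = 123;  // plain write...
    ready.store(true, std::memory_order_release);  // ...ordered by this store
}

int consume() {
    while (!ready.load(std::memory_order_acquire)) {
        // spin until the release store becomes visible
    }
    return payload;  // the acquire load guarantees we see 123
}
```

Any /other/ unsynchronised access to `payload`, outside this pairing, would still be a data race.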

Vir Campestris

Nov 25, 2018, 4:09:01 PM
On 25/11/2018 16:01, David Brown wrote:
> Use atomic accesses, or volatile accesses, as appropriate.

Volatile is no longer appropriate except when accessing hardware registers.

It Just Doesn't Work on all architectures when you've got
multiprocessors and caching.

Andy

Pavel

Nov 25, 2018, 4:21:28 PM
I just found a kind-of-resolution of this dispute by Ian Lance Taylor,
with a good explanation:

https://www.airs.com/blog/archives/79

Essentially, Ian believes that "For this code to be correct in standard
C, the variable needs to be marked as volatile, or it needs to use an
explicit memory barrier (which requires compiler specific magic–in the
case of gcc, a volatile asm with an explicit memory clobber)."

Ian did eventually change the compiler, mainly, it seems, to please
[Linux] kernel people. However, what's interesting is that even Linus,
who essentially blasted the GCC team in his usual arrogant manner (see
https://lkml.org/lkml/2007/10/25/186), does not seem to believe the
optimization is illegal as far as the specs are concerned; instead he
lambasts the gcc team for being "far enough removed from "real life"
that they have a tendency to talk in terms of "this is what the spec
says" rather than "this is a problem"". Essentially his position (as
quite often before) is: "this should be fixed because I don't like it"
-- which seems to be slightly different from "this should be fixed
because it is against the specs" (in this case, ISO C and POSIX).

Whether we like it or not, POSIX says nothing about memory barriers or
fences that mutex locking should execute or any "variables protected by
a mutex".

Even (C++11) standard is not definitive: in 1.10-5 it says "a call that
acquires a mutex will perform an acquire operation on the locations
comprising the mutex"; but it never defines what exactly locations a
mutex "comprise".

Then, 1.10-8 says:

The least requirements on a conforming implementation are:
— Access to volatile objects are evaluated strictly according to the
rules of the abstract machine.
...
These collectively are referred to as the observable behavior of the
program.

-- which seems to assume that access to non-volatile objects may be
reordered.

But then,

1.10-5 ...A synchronization operation without an associated memory
location is a fence and can be either
an acquire fence, a release fence, or both an acquire and release fence. ...

-- that is, mutex locking is a fence in C++11; however, a fence is later
defined in terms of hardware memory ordering (which is different from a
compiler fence, which essentially prevents the compiler from reading a
variable too early or writing it too late).

But, eventually, it seems (again, inferred from the definition of
atomic_signal_fence as a subset of atomic_thread_fence, rather than
stated explicitly) that any fence should always inhibit memory access
reordering by the compiler.

Therefore it seems that with regard to C++ mutex, the compiler fences
are to be always executed by mutex locking (for all variables -- which
is clearly a big impediment for optimization) and hence the
correspondent optimization for C++11 standard mutex would be illegal.

But, to repeat myself, POSIX does not have any such language so no
assumption should be made about compiler memory access reordering across
POSIX mutexes; instead, correspondent gcc or gcc asm primitives should
be used.
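The compiler-fence versus hardware-fence distinction drawn above maps directly onto the two standard fence functions (my own sketch):

```cpp
#include <atomic>
#include <cassert>

int a = 0, b = 0;

void with_compiler_barrier() {
    a = 1;
    // Inhibits compiler reordering across this point, but emits no
    // hardware fence instruction:
    std::atomic_signal_fence(std::memory_order_seq_cst);
    b = 2;
}

void with_full_fence() {
    a = 1;
    // Inhibits both compiler and hardware reordering:
    std::atomic_thread_fence(std::memory_order_seq_cst);
    b = 2;
}
```

On x86, for example, the first typically compiles to no extra instruction at all, while the second emits an mfence (or equivalent locked operation).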


Scott Lurndal

Nov 25, 2018, 4:47:50 PM
David Brown <david...@hesbynett.no> writes:
>On 24/11/2018 06:42, Alf P. Steinbach wrote:
>> On 24.11.2018 06:25, James Kuyper wrote:
>>> On 11/23/18 20:17, Chris M. Thomasson wrote:
>>> ...
>>>> C11 and C++11 have guarantees on mutex operations. The variables they
>>>> protect do not need any special decorations.
>>>
>>> What specifies which variables they protect? What is the nature of the
>>> protection that they provide to those variables? I've reviewed every
>>> line of the standard containing the word "mutex" without seeing any hint
>>> of an answer to either of those questions - what did I miss?
>>
>> Mutexes are used to provide exclusive access to variables.
>>
>
>Mutexes are used to provide exclusive access to a lock. That is all.

More accurately, they provide exclusive access to one or more code
sequences.

David Brown

Nov 25, 2018, 4:55:36 PM
Volatile accesses force an ordering compared to other volatiles, but
that order is not necessarily visible to other threads. They can still
be useful for two purposes in a multi-threaded environment.

One is if you have a single cpu - no SMP or multi-threading. In such
systems, different threads /will/ see the same order of volatile
accesses (even if memory and other bus masters perhaps do not see the
same order without additional fences, barriers, or synchronisation
instructions), and volatile accesses can be significantly cheaper than
atomic accesses.

The other is that volatile accesses can be used to ensure order compared
to other actions, from the viewpoint of the current thread. They can
also control optimisation, avoiding the kind of extra read and write
operation that has been a concern in this thread. And you can easily
force an access to a normal variable to be atomic using a pointer cast
(this is not guaranteed by the standards at the moment, but all
compilers allow it and I believe it is likely to be codified in the
upcoming C standards). As far as I know, you cannot use "*((_Atomic int
*) &x)" to force an atomic access to x, in the way you can force a
volatile access with "*((volatile int *) &x)". Even it were possible,
volatile accesses can be cheaper than atomic accesses.

But as is noted below, volatile accesses do not synchronise with
multi-threading primitives or atomics (unless these are also volatile),
and their order is not guaranteed to match when viewed from different
threads.
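The volatile-cast idiom mentioned above looks like this (a sketch of my own): it forces a real load or store at each access, but on its own it provides no inter-thread ordering.

```cpp
#include <cassert>

int x = 0;  // an ordinary, non-volatile variable

// Force exactly one store to x at this point; the compiler may not
// elide, merge, or invent accesses around it.
void volatile_store(int v) {
    *(volatile int *)&x = v;
}

// Force exactly one load of x at this point.
int volatile_load() {
    return *(volatile int *)&x;
}
```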

>> If you use atomics you don't need mutexes.
>> So what's the purpose of mutexes then?
>
> This thread is getting horribly confused.
>
> Volatile is useless for multithreading. In C and C++ (as opposed to
> Java and C#), volatile does not synchronize. If you need to
> synchronize, use a mutex, an atomic variable or a fence.

Agreed. (Implementation-specific methods are also possible, but clearly
they will not be portable.)

>
> Chris Thomasson can speak for himself, but it seems clear to me that in
> the example under discussion and his subsequent example (his posting of
> 24 Nov 2018 16:30:43 -0800), he was taking it as a given that every read
> or write access to acquire_count and to his data_0, data_1 and data_2
> variables was (as written by the programmer) within a locking of the
> same mutex. That is the only reasonable explanation of his postings,
> and is how I read them. It also seems clear enough to me that that was
> the case also for the gcc bug posting to which we were referred (the
> reference to dropped increments was not to do with the fact that
> acquires_count is not an atomic variable, but to the fact that the
> compiler was reordering access to it outside the mutex for optimization
> purposes in a way forbidden by the C++ standard, although arguably not
> by posix).
>
> In such a case, using an atomic is pointless. It just results in a
> doubling up of fences or whatever other synchronization primitives the
> implementation uses.

As I see it, the fences from the mutex lock (and presumably later
unlock) protect the accesses to data_0, data_1 and data_2 in his later
example. But the earlier example from the google group link was
different - the increment code was not necessarily in the context of
having the lock. The fence from the lock would prevent movement of the
accesses to acquire_const from moving before the trylock call, but it
would not prevent optimisation after that call. So in this case,
additional effort /is/ needed to ensure there is no unexpected effects.
This could be from other fences, atomic accesses, or volatile accesses.
The following would, I believe, all work:

int acquire_counts;

int trylock1(void) {
    int res = mtx_trylock(&mutex);
    if (res == thrd_success) {
        atomic_thread_fence(memory_order_acquire);
        ++acquire_counts;
    }
    return res;
}

or


int acquire_counts;

int trylock2(void) {
    int res = mtx_trylock(&mutex);
    if (res == thrd_success) {
        (*(volatile int*)(&acquire_counts))++;
    }
    return res;
}


or


_Atomic int acquire_counts;

int trylock3(void) {
    int res = mtx_trylock(&mutex);
    if (res == thrd_success) {
        ++acquire_counts;
    }
    return res;
}

>
> Sure, if not all accesses to a protected variable are within a mutex,
> then it needs to be atomic.

You need to be careful about mixing accesses that are protected with a
mutex with accesses that are not protected with the same mutex, even if
these are atomic. Some accesses won't be guaranteed to be visible, or
in the same order, unless the reading thread also takes the same mutex.
Without that, other threads will read the atomic data with either the
old value, or the new value - but not necessarily with the same ordering
amongst other data.

> But if that is the case there is probably
> something wrong with the design.

Agreed.

> You should not design code which
> requires such a doubling up of synchronization approaches, and I cannot
> immediately visualize a case where that would be sensible.
>

I'd say trylock1 above is the best choice for this example, and avoids
unnecessary "doubling up". But too much protection is better than too
little protection - it is better to be a bit inefficient, than to have
the risk of race conditions.

Chris M. Thomasson

Nov 25, 2018, 5:05:31 PM
Correct. Afaict, David Brown is mistaken wrt his view on C11 and C++11
mutexes, and how they work. He does not seem to understand how an
acquire-release relationship guarantees visibility, and how it is now
part of the language itself.


> Posix mutexes have the same effect although this is much more
> incoherently expressed. Section 4.12 of the SUS says (without further
> explanation) in referring to mutex locking and unlocking operations
> (amongst other similar operations) that "The following functions
> synchronize memory with respect to other threads". In practice posix
> mutexes behave identically to C/C++ mutexes.
>

Agreed. POSIX imposes some rules on a system, and the compiler is part of
the system. Always had a good laugh at that quote:

"The following functions synchronize memory with respect to other threads."

Define synchronize? At least C++11/C11 defines it as an acquire-release
relationship. ;^)

Jorgen Grahn

Nov 25, 2018, 5:14:22 PM
On Sun, 2018-11-25, Pavel wrote:
...
> Whether we like it or not, POSIX says nothing about memory barriers or
> fences that mutex locking should execute or any "variables protected by
> a mutex".

I haven't followed this thread (and I'm snipping heavily) but as I
remember the attitude here and in comp.lang.c, before the languages
got threading support, it went roughly like this:

Any compiler/library combination which claims to support pthreads
makes sure to put a suitable fence/barrier/whatever-it's-called
at a mutex lock, because anything else would be suicide.

...
> But, to repeat myself, POSIX does not have any such language so no
> assumption should be made about compiler memory access reordering across
> POSIX mutexes; instead, correspondent gcc or gcc asm primitives should
> be used.

Just to clarify, are you saying most software which uses POSIX
multithreading is broken, since it (in my experience) rarely inserts
its own fences?

(The answer "yes" wouldn't bother me. I'm fine with this being a
theoretical problem, triggered by a "suicidal" compiler/library
combination. Partly because multithreaded software tends to be
subtly broken anyway.)

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Chris M. Thomasson

Nov 25, 2018, 5:18:38 PM
The standard guarantees that a std::mutex works without having to use
any special decorations (atomics, membars, volatile) on the things it
protects. Read this response very carefully:

https://groups.google.com/forum/#!original/comp.lang.c++/zcVKIRlahZg/qenDx2WYCgAJ

I am very happy that all of this is in the standard now.


> I've read plenty of your posts here over the years, and you are
> experienced with multi-threading and multi-processing.  It surprises me
> greatly to hear your attitude here.  You know fine that in the world of
> multi-threading, "it works when I tried it" is /not/ good enough.  Code
> can work as desired in millions of tests, and then fail at a critical
> juncture in practice.  You have to /know/ the code is correct.  You have
> to /know/ the standards guarantee particular behaviour regarding
> ordering and synchronisation - you can't just guess because it looked
> okay on a couple of tests, and it would be convenient to you if it worked.

Imo, the fact that standard C++11 and C11 guarantee these things now is
great. Assuming a bug-free compiler, a std::mutex works fine. However, I
never really trusted things before they were integrated into the
language and was forced to use externally assembled functions. Check
this out, and please try to read all of it; it _is_ an interesting thread:

https://groups.google.com/d/topic/comp.programming.threads/KepRbFWBJA4/discussion

Fwiw, I was SenderX. ;^)

Chris M. Thomasson

Nov 25, 2018, 5:28:08 PM
Wrong. The trylock will have acquire semantics when it actually locks,
i.e. returns zero in this case. A compiler that does not honor this
_fact_ is totally broken wrt the standard. Heck, it is broken even under
POSIX. Big time.