Default values in #define


Rick C. Hodgin

Aug 19, 2016, 5:22:27 PM
I tried to do this today, and the compiler balked. Is there a syntax
where this idea is legal (line 06 and the [c = "hello"] part)?

01: void myfunc(int value, char* text)
02: {
03: printf("%d %s\n", value, text);
04: }
05:
06: #define abc(a, b, c = "hello") a(b, c)
07:
08: abc(myfunc, 2);

Best regards,
Rick C. Hodgin

Ian Collins

Aug 19, 2016, 5:36:17 PM
If you want C++, you know where to find it!

--
Ian

Ben Bacarisse

Aug 19, 2016, 5:48:24 PM
The irony is that he uses C++ despite posting in c.l.c.

--
Ben.

BartC

Aug 19, 2016, 5:56:41 PM
I tried it in C++ and it didn't seem to work.

--
Bartc


Rick C. Hodgin

Aug 19, 2016, 5:56:44 PM
I use a C++ compiler and some of its tightenings and relaxations, but
apart from those, I program in C.

Ian Collins

Aug 19, 2016, 6:14:30 PM
In C++, we don't use macros.

A function template like

template <typename Fn, typename B>
void abc( Fn a, B b, const char* c = "hello") { a(b, c); }

would be used.

--
Ian

Rick C. Hodgin

Aug 19, 2016, 9:09:17 PM
06: #define abc(a, b, c = "no param given") a(b, c)
07:
08: abc(myfunc, 2);
09: abc(myfunc, 5, "five");

Ian Collins

Aug 19, 2016, 9:15:59 PM
# cat x.cc
#include <stdio.h>

void myfunc(int value, const char* text)
{
printf("%d %s\n", value, text);
}

template <typename Fn, typename B>
void abc( Fn a, B b, const char* c = "no param given" ) { a(b, c); }

int main()
{
abc(myfunc, 2);
abc(myfunc, 5, "five");
}

# CC x.cc; ./a.out;
2 no param given
5 five

--
Ian

Rick C. Hodgin

Aug 19, 2016, 10:10:20 PM
I appreciate your response, Ian. There are things about this I don't
understand, but since it's clc, I'll let it go.

Thank you again.

Ian Collins

Aug 19, 2016, 10:50:58 PM
Seeing as you also asked there, I've cross-posted to c.l.c++ so you can
follow up there.

--
Ian

BartC

Aug 20, 2016, 5:35:27 AM
I don't understand. If you're going to use C++, then why not just this:

#include <stdio.h>

void myfunc(int value, const char* text="no param given"){
printf("%d %s\n",value,text);
}

int main(void){
myfunc(2);
myfunc(5,"five");
}

As it appears the OP just wants to implement a default parameter.

(This also works in C when lccwin is used.)

--
Bartc



Malcolm McLean

Aug 20, 2016, 6:14:15 AM
On Saturday, August 20, 2016 at 10:35:27 AM UTC+1, Bart wrote:
>
> I don't understand. If you're going to use C++, then why not just this:
>
> #include <stdio.h>
>
> void myfunc(int value, const char* text="no param given"){
> printf("%d %s\n",value,text);
> }
>
> int main(void){
> myfunc(2);
> myfunc(5,"five");
> }
>
> As it appears the OP just wants to implement a default parameter.
>
> (This also works in C when lccwin is used.)
>
>
The C way is this:

To make the example a bit more realistic, we're doing Canny line
detection. There are several parameters such as filter sizes you
can fiddle about with. But most users will just want to pass in
an image and get the lines, they won't know what values to pass
for the filters.

So

unsigned char *canny(unsigned char *grey, int width, int height);

unsigned char *cannyparam(unsigned char *grey, int width, int height,
                          float lowThreshold, float highthreshold,
                          float gaussiankernelradius, int gaussiankernelwidth,
                          int contrastnormalised);

unsigned char *canny(unsigned char *grey, int width, int height)
{
return cannyparam(grey, width, height, 2.5f, 7.5f, 2.0f, 16, 0);
}

BartC

Aug 20, 2016, 7:24:20 AM
That's using two different functions so it's not quite as useful (although
I use the technique myself to wrap external functions I consider too
complex).

With proper defaults, you can start with a canny(a,b,c) function that
takes three parameters, with perhaps built-in defaults. Then later on
you could add extra, optional parameters to override the defaults.

Now the same canny() function can be called with from 3 to 7 params.

The advantage is that existing code that calls canny(a,b,c) will work
unchanged (but will likely need recompiling).

(I expected by this point that someone would have presented some
elaborate C macro to solve the problem, but maybe there isn't one. Or
maybe it's more hassle to use than simply providing the extra arguments.

Your specific example I think can be done with variadic arguments, but
it's harder to code inside the function.)
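
A rough sketch of that variadic approach, reusing the parameter names and
default values from Malcolm's post (canny_opt() is made up purely for
illustration, and passing a count of the optional arguments is only one way
to do it):

#include <stdarg.h>
#include <stdio.h>

static void canny_opt(unsigned char *grey, int width, int height, int nopt, ...)
{
    /* defaults, overridden by however many optional values were passed */
    float lowThreshold = 2.5f, highThreshold = 7.5f, gaussianRadius = 2.0f;
    int   gaussianWidth = 16, contrastNormalised = 0;
    va_list ap;

    va_start(ap, nopt);
    if (nopt > 0) lowThreshold   = (float)va_arg(ap, double); /* float promotes to double */
    if (nopt > 1) highThreshold  = (float)va_arg(ap, double);
    if (nopt > 2) gaussianRadius = (float)va_arg(ap, double);
    if (nopt > 3) gaussianWidth  = va_arg(ap, int);
    if (nopt > 4) contrastNormalised = va_arg(ap, int);
    va_end(ap);

    /* stand-in for the real work; just show what would be used */
    printf("%p %dx%d: %g %g %g %d %d\n", (void *)grey, width, height,
           lowThreshold, highThreshold, gaussianRadius,
           gaussianWidth, contrastNormalised);
}

int main(void)
{
    unsigned char img[4] = {0};
    canny_opt(img, 2, 2, 0);            /* all defaults                */
    canny_opt(img, 2, 2, 2, 1.0, 5.0);  /* override the two thresholds */
    return 0;
}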


--
Bartc

Ian Collins

Aug 20, 2016, 8:15:01 AM
The code was to illustrate the use of a function template with a default
parameter which was closer to the original macro.

--
Ian

Ben Bacarisse

Aug 20, 2016, 9:06:21 AM
"Rick C. Hodgin" <rick.c...@gmail.com> writes:
<snip>
> 01: void myfunc(int value, char* text)
> 02: {
> 03: printf("%d %s\n", value, text);
> 04: }
> 05:
> 06: #define abc(a, b, c = "no param given") a(b, c)
> 07:
> 08: abc(myfunc, 2);
> 09: abc(myfunc, 5, "five");

I know I promised myself I would not reply to you again, but the lack of
an answer is bothering me:

#define abc(...) abc_(__VA_ARGS__, "no param given", dummy needed by ISO C)
#define abc_(a,b,c,...) a(b, c)

Unfortunately this answer was revealed to me only after performing a
satanic ritual with my communist husband so you won't be able to use it
in your project. Other people might find it useful though.
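
A minimal compilable sketch that drops those two macros into the running
example (the multi-token dummy is shortened here to a single identifier; it
is swallowed by abc_'s trailing ... and never expanded, so it does not have
to be declared):

#include <stdio.h>

void myfunc(int value, const char *text)
{
    printf("%d %s\n", value, text);
}

/* real arguments first, then the default, then a dummy so that abc_'s
   trailing ... is never empty (as ISO C requires); abc_ keeps only the
   first three arguments */
#define abc(...)           abc_(__VA_ARGS__, "no param given", dummy)
#define abc_(a, b, c, ...) a(b, c)

int main(void)
{
    abc(myfunc, 2);          /* expands to myfunc(2, "no param given") */
    abc(myfunc, 5, "five");  /* expands to myfunc(5, "five")           */
    return 0;
}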

--
Ben.

Kenny McCormack

Aug 20, 2016, 10:13:34 AM
In article <np988l$cn0$1...@dont-email.me>, BartC <b...@freeuk.com> wrote:
...
>I don't understand. If you're going to use C++, then why not just this:
>
>#include <stdio.h>
>
>void myfunc(int value, const char* text="no param given"){
> printf("%d %s\n",value,text);
>}
>
>int main(void){
> myfunc(2);
> myfunc(5,"five");
>}
>
>As it appears the OP just wants to implement a default parameter.

The more normal C way to do it is to always pass the second parameter, but
to pass it as NULL if it is not to be used (in which case the called
function works out that it is null and uses its own default value).

This is common in functions that take a buffer argument (aka, char *, aka,
string), where if you pass it as NULL, the called function allocates a
buffer for you.

A generalization of this technique is to use an "argv style" parameter (a
null terminated list of pointers to null terminated strings), where the
called function iterates through it in the same manner as we are used to
iterating through main's "argv" parameter.
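
A minimal sketch of that NULL-means-default convention applied to the
running example (myfunc_dflt is a made-up name):

#include <stdio.h>

void myfunc_dflt(int value, const char *text)
{
    if (text == NULL)                  /* caller opted out:           */
        text = "no param given";       /* callee supplies the default */
    printf("%d %s\n", value, text);
}

int main(void)
{
    myfunc_dflt(2, NULL);       /* prints: 2 no param given */
    myfunc_dflt(5, "five");     /* prints: 5 five           */
    return 0;
}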

--
Religion is what keeps the poor from murdering the rich.

- Napoleon Bonaparte -

Rick C. Hodgin

Aug 20, 2016, 3:08:09 PM
Correct. However, this example I posted is contrived. The real use case
is more complex, of course.

> (This also works in C when lccwin is used.)

Jacob definitely does good work ... now, if we could just get him to do a
little more of it. Or, at least that's what my mother used to say about me.

:-)

Rick C. Hodgin

Aug 20, 2016, 3:14:33 PM
It is not me to whom you will give an account, Ben. I just point you to and
remind you of the One to whom you will give an account.

I care about you, and about other people as well, so I guide all of you in an
ongoing capacity toward Him. It's up to each of you individually as to whether
or not you will receive His free offer of forgiveness of your sin ... or not.
I hope you do, because I would like to see you in Heaven. I would like to see
everybody in Heaven.

-----
I appreciate the responses I've received. I haven't been happy with any of
the available alternatives so far ... so I will plan to allow default
parameters in CAlive wherever parameters can be used. It seems to be the
best and most easy to understand solution.

Philipp Klaus Krause

Aug 21, 2016, 6:51:30 AM
If you want to use default arguments in standard C, I recommend having a
look at P99 (http://p99.gforge.inria.fr) and Jens Gustedt's blog (in
particular
https://gustedt.wordpress.com/2010/06/03/default-arguments-for-c99/).

Philipp

Rick C. Hodgin

Aug 22, 2016, 8:11:42 AM
Thank you, Philipp. Interesting additions.

Rick C. Hodgin

Aug 22, 2016, 8:17:44 AM
On Saturday, August 20, 2016 at 10:13:34 AM UTC-4, Kenny McCormack wrote:
> --
> Religion is what keeps the poor from murdering the rich.
> - Napoleon Bonaparte -

Kenny, do you understand the difference between religion and having a
personal relationship with Jesus Christ? Religion is an outside-in force.
Having a personal relationship with Jesus Christ is something that's the
opposite, emanating from the inside-out.

A person must be born again to see the Kingdom of Heaven, Kenny:

http://biblehub.com/kjv/john/3-3.htm
http://biblehub.com/kjv/john/3-7.htm
"Jesus answered and said unto him, Verily, verily, I say unto thee,
Except a man be born again, he cannot see the kingdom of God."

It's not something a man can do on his own. He must seek the truth, and
then God changes the person on the inside, from the inside-out, so that
those things which could not previously be understood are then able to be
understood through the new additions given a man by the born again nature,
which is a spirit nature.

You'll never get it, Kenny, unless you seek the truth. And you are doing
harm to people with your misguided attacks upon "religion" when what you
really need to be proclaiming is that "impotent man-made contrivances, all
of which are pale substitutes for the truth, for Jesus Christ, are always
tools used by Satan to keep men distracted, so that they will not seek the
truth, will not come to Jesus, will not ask forgiveness, and will not be
saved."

The truth sets men free, Kenny. Forever. Even you.

Chad

Aug 22, 2016, 10:56:30 AM
How did you know that homegirl was using C++?

David Brown

Aug 22, 2016, 11:09:18 AM
I am not sure what you mean with the "homegirl" reference - perhaps it
is some American slang that I don't get.

But we know that Rick programs basically in C, but using a C++ compiler,
because he has told us. He believes that full C++ is too complicated a
language, but takes advantage of a few C++ features in his basically C
programming. And his (currently) preferred development environment is
MSVC, which has very good C++ support but quite poor C support (maybe
that's different now that they are using clang).

Rick C. Hodgin

Aug 22, 2016, 11:21:37 AM
On Monday, August 22, 2016 at 11:09:18 AM UTC-4, David Brown wrote:
> On 22/08/16 16:56, Chad wrote:
> > On Friday, August 19, 2016 at 2:48:24 PM UTC-7, Ben Bacarisse wrote:
> >> Ian Collins <ian-...@hotmail.com> writes:
> >>
> >>> On 08/20/16 09:22 AM, Rick C. Hodgin wrote:
> >>>> I tried to do this today, and the compiler balked. Is there a syntax
> >>>> where this idea is legal (line 06 and the [c = "hello"] part)?
> >>>>
> >>>> 01: void myfunc(int value, char* text)
> >>>> 02: {
> >>>> 03: printf("%d %s\n", value, text);
> >>>> 04: }
> >>>> 05:
> >>>> 06: #define abc(a, b, c = "hello") a(b, c)
> >>>> 07:
> >>>> 08: abc(myfunc, 2);
> >>>
> >>> If you want C++, you know where to find it!
> >>
> >> The irony is that he uses C++ despite posting in c.l.c.
> >
> > How did you know that homegirl was using C++?
>
> I am not sure what you mean with the "homegirl" reference - perhaps it
> is some American slang that I don't get.

Yep. It's Chad's derogatory name calling directed at me.

> But we know that Rick programs basically in C, but using a C++ compiler,
> because he has told us. He believes that full C++ is too complicated a
> language, but takes advantage of a few C++ features in his basically C
> programming.

I think it's both too complex and an incredibly messy syntax, though I
think there are some truly brilliant aspects of its extension of C.

> And his (currently) preferred development environment is
> MSVC, which has very good C++ support but quite poor C support (maybe
> that's different now that they are using clang).

Is that true that Microsoft is using clang now? Does that mean that clang
is an optional compiler? Or that Microsoft is using clang as part of its
VC++ compilers (as your reply seems to indicate)?

If so, despite its many improvements I shall abandon all new aspects of
Visual Studio 2015 and later, as I want nothing to do with Apple. Clang's
logo is a fire breathing dragon, for crying out loud:

http://llvm.org/Logo.html
https://en.wikipedia.org/wiki/Wyvern

All of Apple's products are to be avoided. The only reason I've used
Microsoft's products are because I used them before I was saved, and God
received me in my stead, and there are several cases where I can remember
in my pre-salvation days the hand of some (at the time I thought) force
at work in my life, guiding me, including my original introduction into
Visual Studio 98, and later 2002 and 2003 (before I was saved in 2004).

It's also the fundamental reason why I'm writing my own compiler, even
my own complete hardware and software stack, so that I have my own tools
upon which to build everything, tools founded upon an offering unto the
Lord Jesus Christ.

[My original attempt at posting failed. I apologize if this posts twice.]

Rick C. Hodgin

Aug 22, 2016, 11:47:14 AM
I see where it's an add-on, such that you can support Android, iOS, and
clang tools from within Visual Studio as a cross-platform feature, but
not as though it's been directly incorporated into VC/VC++ compilers.

Rick C. Hodgin

Aug 22, 2016, 11:55:45 AM
On Monday, August 22, 2016 at 11:47:14 AM UTC-4, Rick C. Hodgin wrote:
> On Monday, August 22, 2016 at 11:21:37 AM UTC-4, Rick C. Hodgin wrote:
> > On Monday, August 22, 2016 at 11:09:18 AM UTC-4, David Brown wrote:
> > > And his (currently) preferred development environment is
> > > MSVC, which has very good C++ support but quite poor C support (maybe
> > > that's different now that they are using clang).
> >
> > Is that true that Microsoft is using clang now? Does that mean that clang
> > is an optional compiler? Or that Microsoft is using clang as part of its
> > VC++ compilers (as your reply seems to indicate)?
>
> I see where it's an add-on, such that you can support Android, iOS, and
> clang tools from within Visual Studio as a cross-platform feature, but
> not as though it's been directly incorporated into VC/VC++ compilers.

Support:

https://blogs.msdn.microsoft.com/vcblog/2016/06/03/clang-3-8-in-the-may-release-of-clang-with-microsoft-codegen/

"When creating a new project you should see an option for two Clang
projects in the Cross Platform section of the Visual C++ templates..."

supe...@casperkitty.com

Aug 22, 2016, 12:27:36 PM
On Monday, August 22, 2016 at 10:21:37 AM UTC-5, Rick C. Hodgin wrote:
> Is that true that Microsoft is using clang now? Does that mean that clang
> is an optional compiler? Or that Microsoft is using clang as part of its
> VC++ compilers (as your reply seems to indicate)?

Microsoft's compilers have historically taken the view that compilers for
platforms where various actions naturally result in useful behaviors should
treat such actions as defined, even if supporting such actions on other
platforms would be expensive (in cases where actions would be practical to
support on some platforms but not others, the Standard leaves the question
of whether or not to support them to the implementer's judgment).

Clang, however, seems to go dangerously far in the opposite direction, and
(at least in the version on godbolt) does not support type punning through
array-type union members, even when the array-indexing operators are applied
to arrays accessed through the union (e.g. given:

typedef union {long long l; int64_t ll[1];} lu;
lu *p1, *p2;

an attempt to access p1->ll[0] will be seen simply as an indirection of a
long* with no relation to the union, and will thus be presumed incapable
of affecting p2->l (notwithstanding the fact that members l and ll[0] have
identical bitwise representations!). I wonder what MS compiler writers
think about that sort of craziness.

Rick C. Hodgin

Aug 22, 2016, 12:41:31 PM
It seems an odd decision on behalf of the clang developers. They must have
a word of warning in their documentation about using unions at your own
risk. :-) If true, I agree with you that it's dangerous.

Ben Bacarisse

Aug 22, 2016, 3:19:32 PM
supe...@casperkitty.com writes:
<snip>
> Clang, however, seems to go dangerously far in the opposite direction, and
> (at least in the version on godbolt) does not support type punning through
> array-type union members, even when the array-indexing operators are applied
> to arrays accessed through the union (e.g. given:
>
> typedef union {long long l; int64_t ll[1];} lu;
> lu *p1, *p2;

I see a problem even without the array. At first glance it looks like a
bug. Have you communicated with the clang people to see if they think
it's a bug? My test case was:

#include <stdint.h>
#include <stdio.h>

union U {long long l; int64_t ll;};

long long f(union U *p1, union U *p2)
{
p2->l = 42;
p1->ll = 0;
return p2->l;
}

int main(void)
{
union U u;
printf("%lld\n", f(&u, &u));
}

prints 42 with clang -std=c11 -pedantic -O2 (version 3.6.2-1). Is this
what you mean or are you describing something else?

<snip>
--
Ben.

Rick C. Hodgin

Aug 22, 2016, 3:55:04 PM
If you are on an Intel CPU, can you post the assembly source code it
generates for those lines in f() and main()?

supe...@casperkitty.com

Aug 22, 2016, 3:56:15 PM
On Monday, August 22, 2016 at 2:19:32 PM UTC-5, Ben Bacarisse wrote:
> I see a problem even without the array. At first glance it looks like a
> bug. Have you communicated with the clang people to see if they think
> it's a bug? My test case was:
>
> #include <stdint.h>
> #include <stdio.h>
>
> union U {long long l; int64_t ll;};
>
> long long f(union U *p1, union U *p2)
> {
> p2->l = 42;
> p1->ll = 0;
> return p2->l;
> }
>
> int main(void)
> {
> union U u;
> printf("%lld\n", f(&u, &u));
> }
>
> prints 42 with clang -std=c11 -pedantic -O2 (version 3.6.2-1). Is this
> what you mean or are you describing something else?

That seems like the issue, though I'd noticed the issue first when testing
with two arrays (which used to be an accepted way of handling cases where
data will sometimes be read with one size of chunk and written as another).
I hadn't thought clang would go so far as to ignore the possibility of
aliasing with directly-accessed lvalue members.

The problem, of course, is that while the authors of the Standard have
sought to avoid defining behaviors that might not be meaningful on
absolutely all platforms, modern compiler writers treat the Standard as
passing judgement on whether quality implementations should support
behaviors on platforms where they are meaningful. Given that the Standard
makes no effort to forbid conforming-but-useless implementations in other
respects, there's no reason the Standard should have to mandate that an
implementation where "long long" and "int64_t" have identical representations
should recognize aliasing between them, especially when both appear in the
same union. Good quality implementations will recognize such aliasing, and
poor quality implementations could find some other excuse to behave in
arbitrary fashion any time such aliasing would be relevant.
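
For concreteness, the kind of two-array union being described looks
something like this (the types and values are illustrative only; what the
bytes print depends on the platform's endianness):

#include <stdint.h>
#include <stdio.h>

typedef union {
    uint32_t words[2];     /* written in 32-bit chunks  */
    uint8_t  bytes[8];     /* read back in 8-bit chunks */
} chunks;

int main(void)
{
    chunks c;
    c.words[0] = 0x11223344u;
    c.words[1] = 0x55667788u;
    for (int i = 0; i < 8; i++)
        printf("%02x ", c.bytes[i]);
    putchar('\n');
    return 0;
}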

Ben Bacarisse

Aug 22, 2016, 4:11:43 PM
supe...@casperkitty.com writes:

> On Monday, August 22, 2016 at 2:19:32 PM UTC-5, Ben Bacarisse wrote:
>> I see a problem even without the array. At first glance it looks like a
>> bug. Have you communicated with the clang people to see if they think
>> it's a bug? My test case was:
>>
>> #include <stdint.h>
>> #include <stdio.h>
>>
>> union U {long long l; int64_t ll;};
>>
>> long long f(union U *p1, union U *p2)
>> {
>> p2->l = 42;
>> p1->ll = 0;
>> return p2->l;
>> }
>>
>> int main(void)
>> {
>> union U u;
>> printf("%lld\n", f(&u, &u));
>> }
>>
>> prints 42 with clang -std=c11 -pedantic -O2 (version 3.6.2-1). Is this
>> what you mean or are you describing something else?
>
> That seems like the issue,
<snip>
> The problem, of course, is that while the authors of the Standard have
> sought to avoid defining behaviors that might not be meaningful on
> absolutely all platforms, modern compiler writers...

...and off you go. Have you asked if the clang people think this is a
bug or not?

<snip>
--
Ben.

Keith Thompson

Aug 22, 2016, 4:39:43 PM
The problem, of course, is that Clang apparently has a bug.

The C standard has a footnote in section 6.5.2.3:

If the member used to read the contents of a union object is
not the same as the member last used to store a value in the
object, the appropriate part of the object representation of
the value is reinterpreted as an object representation in the
new type as described in 6.2.6 (a process sometimes called
"type punning"). This might be a trap representation.

I can confirm that the bug occurs with clang 3.8.1 on x86-64
(with -O2, but not with -O1). int64_t is defined as long. If I
change long long to long in the program (they have the same size
and representation), the problem goes away.
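
In its simplest form, the type punning the footnote describes looks like
this (a sketch assuming a 32-bit IEEE-754 float, which the standard itself
does not guarantee):

#include <stdint.h>
#include <stdio.h>

union pun {
    float    f;
    uint32_t u;
};

int main(void)
{
    union pun p;
    p.f = 1.0f;                       /* store through one member */
    /* read the same bytes through the other member:
       prints 3f800000 on IEEE-754 systems */
    printf("%08x\n", (unsigned)p.u);
    return 0;
}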

Not everything is about the vast conspiracy to produce deliberately
useless compilers.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

David Brown

Aug 22, 2016, 5:13:35 PM
On 22/08/16 17:21, Rick C. Hodgin wrote:
> On Monday, August 22, 2016 at 11:09:18 AM UTC-4, David Brown wrote:
>> On 22/08/16 16:56, Chad wrote:
>>> On Friday, August 19, 2016 at 2:48:24 PM UTC-7, Ben Bacarisse wrote:
>>>> Ian Collins <ian-...@hotmail.com> writes:
>>>>
>>>>> On 08/20/16 09:22 AM, Rick C. Hodgin wrote:
>>>>>> I tried to do this today, and the compiler balked. Is there a syntax
>>>>>> where this idea is legal (line 06 and the [c = "hello"] part)?
>>>>>>
>>>>>> 01: void myfunc(int value, char* text)
>>>>>> 02: {
>>>>>> 03: printf("%d %s\n", value, text);
>>>>>> 04: }
>>>>>> 05:
>>>>>> 06: #define abc(a, b, c = "hello") a(b, c)
>>>>>> 07:
>>>>>> 08: abc(myfunc, 2);
>>>>>
>>>>> If you want C++, you know where to find it!
>>>>
>>>> The irony is that he uses C++ despite posting in c.l.c.
>>>
>>> How did you know that homegirl was using C++?
>>
>> I am not sure what you mean with the "homegirl" reference - perhaps it
>> is some American slang that I don't get.
>
> Yep. It's Chad's derogatory name calling directed at me.
>

I guessed as much, but I still don't get the point. I suppose I never
really understand the point of such name-calling. There is no need to
explain it, of course.

>> But we know that Rick programs basically in C, but using a C++ compiler,
>> because he has told us. He believes that full C++ is too complicated a
>> language, but takes advantage of a few C++ features in his basically C
>> programming.
>
> I think it's both too complex and an incredibly messy syntax, though I
> think there are some truly brilliant aspects of its extension of C.

At least some of its messiness is because of its C ancestry - in order
to keep compatibility, it was unable to change some of the less
appealing parts of C. But yes, it has plenty of complications and messy
syntax - although I don't know any language that is powerful and
feature-rich, and does not have a messy syntax or complications. (I
realise you will have different opinions, and I am not trying to provoke
an argument - but I think your CAlive syntax is very much messier and
more complex than C++. Sometimes these things are rather subjective.)

>
>> And his (currently) preferred development environment is
>> MSVC, which has very good C++ support but quite poor C support (maybe
>> that's different now that they are using clang).
>
> Is that true that Microsoft is using clang now? Does that mean that clang
> is an optional compiler? Or that Microsoft is using clang as part of its
> VC++ compilers (as your reply seems to indicate)?
>

I think the idea is to use clang as a front-end for C, but stick to VC's
back end. But I am not an MSVC user, and I don't know the details.

> If so, despite its many improvements I shall abandon all new aspects of
> Visual Studio 2015 and later, as I want nothing to do with Apple. Clang's
> logo is a fire breathing dragon, for crying out loud:
>
> http://llvm.org/Logo.html
> https://en.wikipedia.org/wiki/Wyvern
>

First, clang and llvm are not Apple products. Apple is the major
sponsor and contributor, but so are Google, Mozilla, Intel, MIPS,
various academic institutes, and lots of individuals and smaller
companies. Apple pays people to contribute because they want to use the
tools - they don't own anything.

Secondly, while I am not a fan of some of Apple's business practices, it
does not have a record of legal convictions close to that of Microsoft -
yet you use their products. Frankly, if you consider Apple to be so
morally unacceptable that you won't touch anything they work with, then
it is hypocrisy to use products of Google, Microsoft, Intel, etc. -
along with anything made from oil, meat, and practically everything else
you come across in daily life in Western society. I know you have a lot
against many modern practices - so do many people, although not always
for the same reasons. Just avoid buying iThingies if you don't want to
support Apple.

Thirdly, don't get so worked up about logos. Do you boycott Scotland
because we have a unicorn in our coat of arms? Do you avoid BSD
because of its devil "Beastie" logo? Do you avoid sending emails,
because the programs that pass them on are called "mailer daemons"?



Rick C. Hodgin

Aug 22, 2016, 5:40:03 PM
On Monday, August 22, 2016 at 5:13:35 PM UTC-4, David Brown wrote:
> On 22/08/16 17:21, Rick C. Hodgin wrote:
> > If so, despite its many improvements I shall abandon all new aspects of
> > Visual Studio 2015 and later, as I want nothing to do with Apple. Clang's
> > logo is a fire breathing dragon, for crying out loud:
> >
> > http://llvm.org/Logo.html
> > https://en.wikipedia.org/wiki/Wyvern
>
> Secondly, while I am not a fan of some of Apple's business practices, it
> does not have a record of legal convictions close to that of Microsoft -
> yet you use their products.

Did you read my reply?

-----[ Begin ]-----

All of Apple's products are to be avoided. The only reason I've used
Microsoft's products are because I used them before I was saved, and God
received me in my stead, and there are several cases where I can remember
in my pre-salvation days the hand of some (at the time I thought) force
at work in my life, guiding me, including my original introduction into
Visual Studio 98, and later 2002 and 2003 (before I was saved in 2004).

It's also the fundamental reason why I'm writing my own compiler, even
my own complete hardware and software stack, so that I have my own tools
upon which to build everything, tools founded upon an offering unto the
Lord Jesus Christ.

-----[ End ]-----

I have wrestled many times since being saved about continuing on in software
development using Microsoft products, and many people have come forward
accusing me of being a hypocrite for continuing to use them, while railing
against the company (Microsoft).

I have come to a place where I am comfortable in moving forward because my
goals are CAlive, my own kernel, and my own hardware, all dedicated unto
the Lord. I spelled all of this out in this video back in 2012, and have
become more solidified in my position since then (begins about 30 minutes
in, all the way to the end):

http://www.visual-freepro.org/videos/2012_12_08__01_vvmmc__and_vfrps_relationship_to_christianity.ogv
If you can't see the video, use VLC (http://www.videolan.org).

You've said you don't like to watch any of the videos I post, so I doubt
you'll gain a proper understanding of my position on this matter, and will
continue to shoot from the hip of ignorance. As such, this will likely be
my last reply to you on this matter.

David Brown

Aug 23, 2016, 3:14:28 AM
An online compiler is marvellous for such testing:

<http://gcc.godbolt.org/#>

For the function f (the interesting one), clang 3.8 with -O3 gives:

union U {long long l; int64_t ll;};

long long f(union U *p1, union U *p2)
{
p2->l = 42;
p1->ll = 0;
return p2->l;
}



f(U*, U*): # @f(U*, U*)
movq $42, (%rsi)
movq $0, (%rdi)
movl $42, %eax
retq


gcc 6.1 gives:

f(U*, U*):
movq $42, (%rsi)
movq $0, (%rdi)
movq (%rsi), %rax
ret

icc 13.01 gives the same code as gcc.


clang gives the same code as gcc if the union type is changed to have a
"long l" rather than "long long l". Note that this is tested on x86-64
Linux, where "long" and "long long" are both 64 bits - on clang at
least, int64_t appears to be typedef'ed to "long" rather than "long long".


gcc specifically says that "type punning" through unions is allowed:

<https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#Type-punning>

I don't know if that is because the standards do not require it, or
because it is not entirely clear if the standards require it.

To me, this looks like a bug in clang. But I can't say my C standard
legalese is good enough to be sure.

David Brown

Aug 23, 2016, 3:32:46 AM
On 22/08/16 23:39, Rick C. Hodgin wrote:
> On Monday, August 22, 2016 at 5:13:35 PM UTC-4, David Brown wrote:
>> On 22/08/16 17:21, Rick C. Hodgin wrote:
>>> If so, despite its many improvements I shall abandon all new aspects of
>>> Visual Studio 2015 and later, as I want nothing to do with Apple. Clang's
>>> logo is a fire breathing dragon, for crying out loud:
>>>
>>> http://llvm.org/Logo.html
>>> https://en.wikipedia.org/wiki/Wyvern
>>
>> Secondly, while I am not a fan of some of Apple's business practices, it
>> does not have a record of legal convictions close to that of Microsoft -
>> yet you use their products.
>
> Did you read my reply?
>

Yes.

And I understand why you use Microsoft software despite not liking the
company, and why you want to write your own replacements.

But that does not provide a rational, consistent or logical reason to
avoid a piece of software simply because Apple is involved with its
development. Picking on Apple as a company to be avoided at all costs
is prejudice - they are no worse than any other major IT company, or
pretty much any major company the world over. That is, unless you have
some sort of personal grievance against the company other than the usual
corporate immorality (exploiting workers, abusing monopoly positions,
bullying other companies, over-charging customers, etc.).

>
> You've said you don't like to watch any of the videos I post, so I doubt
> you'll gain a proper understanding of my position on this matter, and will
> continue to shoot from the hip of ignorance. As such, this will likely be
> my last reply to you on this matter.
>

No, I believe I understand your position reasonably well. I haven't
watched your videos - and as you know, I don't intend to do so. But I
have read enough of your posts over the years to understand what drives
you (I don't have to agree with you to understand your motives). Still,
it always bugs me when someone goes out of their way to cause themselves
inconvenience, for no apparent sensible reason.


Clearly, this is all up to you - I'm not going to argue more about it.

Rick C. Hodgin

Aug 23, 2016, 8:24:03 AM
On Tuesday, August 23, 2016 at 3:14:28 AM UTC-4, David Brown wrote:
> An online compiler is marvellous for such testing:
> <http://gcc.godbolt.org/#>

"God Bolt"? You don't know me at all, David, do you? :-)

> f(U*, U*): # @f(U*, U*)
> movq $42, (%rsi)
> movq $0, (%rdi)
> movl $42, %eax
> retq

And if I may just say this one thing: it's flatly wrong to use AT&T
syntax when Intel syntax exists. AT&T syntax is like trying to read
what should be simplicity in a sea of chaos.

Rick C. Hodgin

Aug 23, 2016, 8:33:57 AM
On Tuesday, August 23, 2016 at 3:32:46 AM UTC-4, David Brown wrote:
> ...I have read enough of your posts over the years to understand what drives
> you (I don't have to agree with you to understand your motives).

Got me all figured out do you? Well, that makes one of us. :-)

> Still,
> it always bugs me when someone goes out of their way to cause themselves
> inconvenience, for no apparent sensible reason.

I seek to serve the Lord and have Him as a foundation in all I do. I see
utility in having computers. I do not see utility in having them given
over to money-making endeavors, but rather to reach out to mankind and say,
"Here is our [the hardware and software designers] best offering. Here is
all of the source code. If you can improve it, we will all benefit."

What we are able to offer up should be a service given to one another.
The resources came from God, put here in creation, and we are able to
harness and wield them into the multitude of creations we are able. But
they should not be done for money. They should be done for the love of
God living in each one of us, which means also the love for one another
being paramount within our Earthly goals toward one another (helping,
loving, caring, teaching, guiding, and so on).

Love is the goal, and then an application of that love, and without
artificial barriers and binding forces preventing progress for selfish
needs (people imposing upon other people limitations which should not
otherwise exist). There should be no pigeonholing of technologies, or
preventing people from utilizing that which was created by someone else.
We are all here together, and we are far stronger when we are all using
the fullness of what we've been given to help one another.

If you would listen to that video I posted, you'd hear a lot of that
kind of guidance. It addresses what is missing in this world, what I
am trying to do with my product, and trying to encourage other people
to do with the interests in their lives.

mark.b...@gmail.com

Aug 23, 2016, 8:42:49 AM
On Tuesday, 23 August 2016 13:24:03 UTC+1, Rick C. Hodgin wrote:
> On Tuesday, August 23, 2016 at 3:14:28 AM UTC-4, David Brown wrote:
> > An online compiler is marvellous for such testing:
> > <http://gcc.godbolt.org/#>
>
> "God Bolt"? You don't know me at all, David, do you? :-)

I don't think Matt Godbolt will change his name to pander to your
frankly bizarre outlook.

If you choose not to take advantage of the work he's shared for free,
that's your loss not his.

Do you apply this degree of warped thinking to any name that has the
slightest potential for misinterpretation?

Rick C. Hodgin

Aug 23, 2016, 8:52:25 AM
Well, it's my mistake then. Matt is the first "Godbolt" I've ever heard
of in my 47 years. I had concluded based on the two names added together
that it was something demeaning. My apologies to Matt and his family.

To answer your question: When people take something and profane it
against the God of the Bible, against Jesus CHrist, and the intents
and purposes He had for us in the things He's given us, yes I will
shun it, and I will speak out against it. That is what God commands
us (believers) to do. We are to shun evil, and to hold fast to that
which is good.

What I know of GCC is that it stems from the GNU project, and that GNU
stems from Richard Stallman, and that the original GNU C Compiler was
even written by Stallman alone. That history exists, and the GNU project
and FSF has demonstrated they are governed by people who are operating
completely contrary to God and the teachings of Jesus Christ. Stallman,
for example, believes several heinous things should be legal including
pedophilia ().

When I learned of that, I resolved to never contribute to the GNU project
or FSF with an offering, and instead went out and created the Liberty
Software Foundation as a God-fearing alternative to the FSF.

Our foundation will be upon Jesus Christ, and upon honoring God with all
we do.

Again, my apologies to Matt.

mark.b...@gmail.com

Aug 23, 2016, 9:01:29 AM
On Tuesday, 23 August 2016 13:52:25 UTC+1, Rick C. Hodgin wrote:
> On Tuesday, August 23, 2016 at 8:42:49 AM UTC-4, mark.b...@gmail.com wrote:
>
> > Do you apply this degree of warped thinking to any name that has the
> > slightest potential for misinterpretation?
>
> Well, it's my mistake then. Matt is the first "Godbolt" I've ever heard
> of in my 47 years. I had concluded based on the two names added together
> that it was something demeaning. My apologies to Matt and his family.

So the answer is "yes, I'd prefer to be offended by reflex, than to spend a
moment in finding out whether there is any offense to be taken". Thank you
for clarifying the matter.

Rick C. Hodgin

Aug 23, 2016, 9:02:09 AM
On Tuesday, August 23, 2016 at 8:52:25 AM UTC-4, Rick C. Hodgin wrote:
> ...Stallman,
> for example, believes several heinous things should be legal including
> pedophilia ().

Forgot the link:

https://stallman.org/notes/2006-may-aug.html

"I am skeptical of the claim that voluntarily pedophilia harms children.
The arguments that it causes harm seem to be based on cases which aren't
voluntary, which are then stretched by parents who are horrified by the
idea that their little baby is maturing."

He also believes incest, bestiality, child pornography, and necrophilia
should be legal:

http://www.stallman.org/archives/2003-mar-jun.html

Such a person is not to be the leader of things which are given over to
God. The foundation must come upon our best efforts in seeking after
God, learning of Jesus Christ, living and teaching the same, and offering
ongoing encouragement to others to do the same. It is a very difficult
thing to do in this world because the entirety of the world, to some
degree or other, that is not born again, not actively seeking after Jesus
Christ, thinks all of these ideas are "frankly bizarre" and "warped" and
so on.

Our foundation must be upon Jesus Christ, to then move forward from that
place with all we do. We take our offering unto God, and ask Him to bless
and sanctify it, so that it is clean before Him, us, and all of mankind.
It is a real effort given for real purposes, and of the kind that are
actually far more real than we are able to see here in this world through
our Earth-only-focused eyes. You must consider eternity to realize the
true and full significance of what it means to offer up things here unto
the Lord Jesus Christ. Only then will you have a more proper appreciation.

Rick C. Hodgin

Aug 23, 2016, 9:06:29 AM
I'll take the hit for being wrong in this case. I did visit the site, and
I saw it had a large list of compilers. I did not see the About link in
the upper-right until I explicitly searched for it just now based on this
exchange.

I was wrong. It was my mistake. And I apologize. This is the second time
such a mistake has been made with a person's name. The first was in a
FoxPro forum where someone came in named "irambutts". I replied to the
name and several people corrected my error there as well.

I will make additional effort in the future, but will maintain my erring
on the side of caution in all such cases because for every valid Iram Butts
and Matt Godbolt I find, there are two dozen similar forms which are not
valid, and are poking fun at God and men.

Malcolm McLean

Aug 23, 2016, 9:13:55 AM
On Tuesday, August 23, 2016 at 8:32:46 AM UTC+1, David Brown wrote:
>
> Picking on Apple as a company to be avoided at all costs
> is prejudice - they are no worse than any other major IT company, or
> pretty much any major company the world over. That is, unless you have
> some sort of personal grievance against the company other than the usual
> corporate immorality (exploiting workers, abusing monopoly positions,
> bullying other companies, over-charging customers, etc.).
>
There was a period when the Apple Mac had very much a niche market. The
desktop publishing software was good, and lots of small magazines and
other media organisations used it. It genuinely was for people who were
creative types and a bit different.
But they tried to keep the image going after the company's fortunes had
been restored and they were once more pumping out devices for the mass
consumer market. So for a long time, their advertising was very deceptive
and manipulative, encouraging customers to define themselves by their
consumption. They were called out on it by the Lord Chief Rabbi (for non-Brits,
he writes a column in The Times, mainly read by non-Jews).

It's fairly rare for a British religious leader to single out a company for criticism
in that way.

Rick C. Hodgin

Aug 23, 2016, 9:22:48 AM
To be clear, in MY case, it is not just about Apple. It is about all
large corporations. I see the same behavior in Oracle, Microsoft,
Google, Intel, and many others.

The Bible teaches that the love of money is the root of all evil, and
that in pursuing it many have pierced themselves through with many
sorrows. And that teaching is correct.

There are people in need, people without food, clothing, medicine, even
clean water, and these companies have 100s of billions in cash on hand.

The efforts we (as the living entities in this world) need to occupy,
are those which are focused on loving and helping one another. Perhaps
the greatest mathematician who would've ever lived died today at age 4
because of starvation. Or a disease that could've been prevented with
clean water and better housing conditions.

Eben Moglen once said (paraphrased from memory): "How many Einsteins
do you want to throw away today? How many Picassos?"

It's in one of these two videos from 2012. I haven't watched them in
years, and do not remember which one it was in:

"Why Freedom of Thought Requires Free Media and Free/Libre Software"
http://www.youtube.com/watch?v=sKOk4Y4inVY

"Innovation Under Austerity"
http://www.youtube.com/watch?v=G2VHf5vpBy8

We owe it to the living, breathing, people made of God in this world to
do our best to serve them and their needs. We are stronger together,
when our focus is on people, than we are individually when our "leaders"
focus on isolating us, dividing us, separating "us" from "them" with
contrived organizations like "Black lives matter" and the like.

This world is moving into wholly evil pursuits, from its governments, to
its corporations, to those forces behind the scenes running both. It is
time for people to turn to God, and seek and learn from Him, and then to
go out into the world and teach the people what the enemy of this world
will not teach them: real love, beginning and ending in Jesus Christ,
and applied out from Him into each of us, out to all of us, through our
hands, our hearts, our minds, our possessions.

We are stronger together than we are separate. And the way the Bible
teaches us to live is thusly: serve everyone else, and in so doing
you'll find that while you are looking to make the people's lives in
your life better, all of the people you encounter are also working to
make your life better. You give out tremendously, but you also receive
far more so. And because everyone's giving, the gain is internal, not
external, and it endures unto life, as all such giving is of love, and
love is of Jesus Christ.

David Brown

Aug 23, 2016, 9:31:20 AM
On 23/08/16 14:23, Rick C. Hodgin wrote:
> On Tuesday, August 23, 2016 at 3:14:28 AM UTC-4, David Brown wrote:
>> An online compiler is marvellous for such testing:
>> <http://gcc.godbolt.org/#>
>
> "God Bolt"? You don't know me at all, David, do you? :-)
>

It never occurred to me to notice - and even if I had, I would not have
thought you would be bothered by some guy's name. It seems a big jump
to assume that the domain name here is picked with some religious or
anti-religious meaning just because it has the three letters g, o and d
in sequence. For all you know, it could be from a different language -
the web is not just made by "merkins" - in Norwegian, the word "god"
simply means "good".


>> f(U*, U*): # @f(U*, U*)
>> movq $42, (%rsi)
>> movq $0, (%rdi)
>> movl $42, %eax
>> retq
>
> And if I may just say this one thing: it's flatly wrong to use AT&T
> syntax when Intel syntax exists. AT&T syntax is like trying to read
> what should be simplicity in a sea of chaos.
>

It seems perfectly obvious to me. I've worked with dozens of different
assembly languages over the years, and find no problems with this. Of
course, if you are used to writing x86 assembly code (I am not - it's
about the one processor I /haven't/ programmed in assembly), then extra
familiarity with the Intel syntax will make the AT&T syntax harder.

But for code like this, it should be clear - obviously, the order here
is "source -> dest" rather than Intel's "dest <- source".

As for the difference between the syntaxes, AT&T is common in the *nix
world, and Intel is common in DOS + Windows world. I think the AT&T
syntax is actually older - it is basically the same style as for other
processors, and they had an assembler for the x86 before Intel had
standardised on a syntax. Both work fine for most instructions, and
both are ugly for complicated addressing modes. Intel syntax is nicer
for some things, but AT&T is more compact for code like this, rather than:

f(U*, U*): # @f(U*, U*)
mov qword ptr [rsi], 42
mov qword ptr [rdi], 0
mov eax, 42
ret

f(U*, U*):
mov QWORD PTR [rsi], 42
mov QWORD PTR [rdi], 0
mov rax, QWORD PTR [rsi]
ret


Both gcc and clang will give you Intel syntax with the switch "-masm=intel".


But in both cases, it is clearer and simpler to write in C and leave the
x86 assembly to the compiler writers :-)

Rick C. Hodgin

Aug 23, 2016, 9:43:41 AM
On Tuesday, August 23, 2016 at 9:31:20 AM UTC-4, David Brown wrote:
> On 23/08/16 14:23, Rick C. Hodgin wrote:
> > And if I may just say this one thing: it's flatly wrong to use AT&T
> > syntax when Intel syntax exists. AT&T syntax is like trying to read
> > what should be simplicity in a sea of chaos.
>
> Both gcc and clang will give you Intel syntax with the switch "-masm=intel".

I am well aware of that. It's why I said, "...when Intel syntax exists."

Along with AT&T syntax, I can also read Olde English text. I just prefer
not to when modern English is available, and is so much easier to read.

I think it's also presumptuous of AT&T to conclude a syntax having far
more characters and symbols is a better choice than Intel's own syntax.
Intel's syntax is much cleaner, and is the one they created for use with
their own hardware. I have never understood AT&T's reasoning in creating
what they did with x86 syntax. And, I have never understood why the
people working on GCC chose that syntax over Intel's. And I think it
speaks volumes that some developers went out of their way to rewrite the
assembly generation algorithms to also allow Intel syntax, when it is a
great deal of effort to do so, and did not exist for years (introduced
only in the mid-2000s).

As for CAlive, and all LibSF x86-based products, they will always and
likely only support Intel's syntax.

ma...@godbolt.org

Aug 23, 2016, 9:47:49 AM
Lots of edits below - just dipping in to this conversation :)

> On Tuesday, August 23, 2016 at 8:42:49 AM UTC-4, mark.b...@gmail.com wrote:
> > I don't think Matt Godbolt will change his name to pander to your
> > frankly bizarre outlook.
On Tuesday, August 23, 2016 at 7:52:25 AM UTC-5, Rick C. Hodgin wrote:
> Well, it's my mistake then. Matt is the first "Godbolt" I've ever heard
> of in my 47 years. I had concluded based on the two names added together
> that it was something demeaning. My apologies to Matt and his family.

Rick emailed me off-list and apologised, there's honestly no offence taken. I have an unusual name :) It's weirdly become synonymous with Compiler Explorer: I hear people saying "Let's check that code on godbolt" -- clearly many people don't realise it's my family name!

As a friend said when I mentioned this "Talk about being religious about languages" :)

Regards to all, Matt

Rick C. Hodgin

Aug 23, 2016, 9:50:09 AM
On Tuesday, August 23, 2016 at 9:31:20 AM UTC-4, David Brown wrote:
> On 23/08/16 14:23, Rick C. Hodgin wrote:
> > On Tuesday, August 23, 2016 at 3:14:28 AM UTC-4, David Brown wrote:
> >> An online compiler is marvellous for such testing:
> >> <http://gcc.godbolt.org/#>
> >
> > "God Bolt"? You don't know me at all, David, do you? :-)
> >
>
> It never occurred to me to notice - and even if I had, I would not have
> thought you would be bothered by some guy's name. It seems a big jump
> to assume that the domain name here is picked with some religious or
> anti-religious meaning just because it has the three letters g, o and d
> in sequence. For all you know, it could be from a different language -
> the web is not just made by "merkins" - in Norwegian, the word "god"
> simply means "good".

I contacted Matt to personally apologize, and he conveyed the same thing
in reply. He said the last name is likely a misspelling of "good bolt"
as in "good in archery" or "good in making crossbow bolts", etc.

I had no idea. But, it's interesting.

-----
Well, I definitely need to revisit my "quick assessment" of people's names
given the global nature of our communication these days, and the multitude
of languages, names, and cultures I have insufficient knowledge of.

Lesson learned. :-)

David Brown

Aug 23, 2016, 9:56:36 AM
On 23/08/16 15:01, Rick C. Hodgin wrote:
> On Tuesday, August 23, 2016 at 8:52:25 AM UTC-4, Rick C. Hodgin wrote:
>> ...Stallman,
>> for example, believes several heinous things should be legal including
>> pedophelia ().
>

We don't need the link - people who are interested can look it up for
themselves. RMS is not shy about saying what he thinks.

He basically believes in "no harm, no foul". He is completely against
/harming/ children. But he is also against the idea that a particular
type of action /always/ causes harm, and therefore /always/ should be
treated the same way.

Norms change. What we consider "heinous" changes - over time, from
place to place, from culture to culture, and amongst groups and
individuals. Why is it that consensual sex with a 16 year old on their
birthday (in most countries) is perfectly legal - but the same act the
day before is the "heinous" crime of paedophilia, carrying many years
jail sentence, even if the other party is just a couple of days older?
And you don't have to go too far back in history to the point where a 16
year old would often be married for several years, and have kids of
their own.

Of course, you believe in certain "absolute" rules because of your
faith, but many people prefer to be more concerned about the actual
effect of actions rather than drawing lines that must not be crossed.
RMS is one such person - and he has a fondness for expressing himself in
the most dramatic and provocative way he can.


Most people are able to separate a person's opinions and comments from
their work. RMS has some far-out opinions, but that does not mean that
the FSF, the gnu developers, or gnu users are a den of iniquity by
association.


David Brown

Aug 23, 2016, 9:57:58 AM
Since you are online here, let me take the chance to thank you for that
website. It is /really/ useful!

Rick C. Hodgin

Aug 23, 2016, 9:59:33 AM
On Tuesday, August 23, 2016 at 9:56:36 AM UTC-4, David Brown wrote:
> [snip]

Basically, it comes from placing value on God, and God's teachings. By
submitting to Him, you have absolute authority to return to. When you
leave God and God's teachings out of things, you are subject to your own
personal whims, personal interpretations. It's what we see in this world
with the "changing norms":

http://biblehub.com/kjv/judges/21-25.htm
"In those days there was no king in Israel: every man did that which
was right in his own eyes."

It is the same today, as people have abandoned their true King: Jesus.

David Brown

Aug 23, 2016, 10:06:10 AM
On 23/08/16 15:43, Rick C. Hodgin wrote:
> On Tuesday, August 23, 2016 at 9:31:20 AM UTC-4, David Brown wrote:
>> On 23/08/16 14:23, Rick C. Hodgin wrote:
>>> And if I may just say this one thing: it's flatly wrong to use AT&T
>>> syntax when Intel syntax exists. AT&T syntax is like trying to read
>>> what should be simplicity in a sea of chaos.
>>
>> Both gcc and clang will give you Intel syntax with the switch "-masm=intel".
>
> I am well aware of that. It's why I said, "...when Intel syntax exists."
>
> Along with AT&T syntax, I can also read Olde English text. I just prefer
> not to when modern English is available, and is so much easier to read.
>
> I think it's also presumptuous of AT&T to conclude a syntax having far
> more characters and symbols is a better choice than Intel's own syntax.
> Intel's syntax is much cleaner, and is the one they created for use with
> their own hardware. I have never understood AT&T's reasoning in creating
> what they did with x86 syntax.

As I noted, AT&T's syntax is older than Intel's. The question is, why
did Intel bother making their own syntax when AT&T's existed and was
familiar for many assembly programmers?

As for "cleaner", it's a matter of opinion. Google a bit, and you will
see both opinions expressed as though it was obvious and you'd have to
be mad to think the other way. From what I have seen, some things are
clearer in one syntax, others are clearer in the other syntax. Both
leave much to be desired.

> And, I have never understood why the
> people working on GCC chose that syntax over Intel's.

gcc and gas already supported the AT&T syntax for other processors when
they started working on x86. Details vary a little between processors,
but the aim of AT&T was to get a syntax that was as close to portable as
possible for assembly.

So while Intel syntax had appeal for those that think the world revolves
around x86, AT&T has always been popular with people who like a variety
of processors.

> And I think it
> speaks volumes that some developers went out of their way to rewrite the
> assembly generation algorithms to also allow Intel syntax, when it is a
> great deal of effort to do so, and did not exist for years (introduced
> only in the mid-2000s).

They added support for Intel syntax because gcc was beginning to be
popular on Windows rather than just *nix, and users were asking for
support. First, support for parsing the syntax in gas (the gnu
assembler) was needed - support for generating it in gcc was probably a
small task.

>
> As for CAlive, and all LibSF x86-based products, they will always and
> likely only support Intel's syntax.
>

Your choice.


David Brown

Aug 23, 2016, 10:13:42 AM
On 23/08/16 15:22, Rick C. Hodgin wrote:
> On Tuesday, August 23, 2016 at 9:13:55 AM UTC-4, Malcolm McLean wrote:
>> On Tuesday, August 23, 2016 at 8:32:46 AM UTC+1, David Brown wrote:
>>>
>>> Picking on Apple as a company to be avoided at all costs
>>> is prejudice - they are no worse than any other major IT company, or
>>> pretty much any major company the world over. That is, unless you have
>>> some sort of personal grievance against the company other than the usual
>>> corporate immorality (exploiting workers, abusing monopoly positions,
>>> bullying other companies, over-charging customers, etc.).
>>>
>> There was a period when the Apple Mac had very much a niche market. The
>> desktop publishing software was good, and lots of small magazines and
>> other media organisations used it. It genuinely was for people who were
>> creative types and a bit different.
>> But they tried to keep the image going after the company's fortunes had
>> been restored and they were once more pumping out devices for the mass
>> consumer market. So for a long time, their advertising was very deceptive
>> and manipulative, encouraging customers to define themselves by their
>> consumption. They were called out on it by the Lord Chief Rabbi (for non-Brits,
>> he writes a column in The Times, mainly read by non-Jews).
>>
>> It's fairly rare for a British religious leader to single out a company for criticism
>> in that way.
>
> To be clear, in MY case, it is not just about Apple. It is about all
> large corporations. I see the same behavior in Oracle, Microsoft,
> Google, Intel, and many others.

That is precisely my point. I fully appreciate that you don't like
corporate greed, and see money (or the pursuit of money) as the root of
all evil. You don't have to be religious to agree with that.

All I object to is the singling out of Apple - you refuse to use clang,
simply because Apple has sponsored much of its development. Yet you
will use browsers paid for by Google, development tools written by MS,
chips made by Intel, and so on. If your dreams come true, you will
gradually be able to replace those companies' products - but until then,
you use them. So why does Apple deserve a special position in your
naughty list?


Rick C. Hodgin

unread,
Aug 23, 2016, 10:15:23 AM8/23/16
to
On Tuesday, August 23, 2016 at 10:06:10 AM UTC-4, David Brown wrote:
> On 23/08/16 15:43, Rick C. Hodgin wrote:
> > I think it's also presumptuous of AT&T to conclude a syntax having far
> > more characters and symbols is a better choice than Intel's own syntax.
> > Intel's syntax is much cleaner, and is the one they created for use with
> > their own hardware. I have never understood AT&T's reasoning in creating
> > what they did with x86 syntax.
>
> As I noted, AT&T's syntax is older than Intel's. The question is, why
> did Intel bother making their own syntax when AT&T's existed and was
> familiar for many assembly programmers?

Well, I think it's obvious ... they were trying to save developers' eyes.
They offered an alternative CPU and an alternative assembler syntax; both
were better, and both were attempts to make the world a better place.
And might I just say that's definitely one area where Intel got it right!

:-)

AT&T's syntax being older than Intel's ... people used to write in other languages,
including Olde English. Things change over time because improvements come
along ... like Intel's syntax.

Rick C. Hodgin

unread,
Aug 23, 2016, 10:35:33 AM8/23/16
to
It's not about religion with Jesus Christ, with Christianity. It's about
a personal relationship. Religion has almost nothing to do with it.

People can do all manner of things without knowing Jesus Christ. They
will just only be done here on this Earth, and then it will be the end
of those things and the people themselves because they were only out for
a pursuit of self. It's what the Bible means when it says:

http://biblehub.com/kjv/matthew/24-37.htm
37 But as the days of Noe were, so shall also the coming of the Son
of man be.
38 For as in the days that were before the flood they were eating
and drinking, marrying and giving in marriage, until the day that
Noe entered into the ark,
39 And knew not until the flood came, and took them all away; so shall
also the coming of the Son of man be.

God's blessings are real:

http://biblehub.com/kjv/matthew/5-45.htm
"That ye may be the children of your Father which is in heaven: for
he maketh his sun to rise on the evil and on the good, and sendeth
rain on the just and on the unjust."

He gives everyone the chance and opportunity time and time and time again
to come to Him, acknowledge their sin, ask forgiveness, and be saved. But
because of sin, and the lure of sin, and the desire for each person to be
their own personal god subject to none other, people do not want God.
"Everybody wants to rule the world."

Right and proper things tie back to Jesus Christ because He is right and
proper and has taught us those things which are right and proper, not
just because man feels so, but because God has declared.

The entirety of creation speaks of Jesus as the Creator, and us as the
created, and us needing Him, and Him providing for us. And the more we
learn about everything, the more we realize the extent to which He is
actively, continually, and in all ways, involved in everything in our
lives.

> All I object to is the singling out of Apple - you refuse to use clang,
> simply because Apple has sponsored much of its development. Yet you
> will use browsers paid for by Google, development tools written by MS,
> chips made by Intel, and so on. If your dreams come true, you will
> gradually be able to replace those companies' products - but until then,
> you use them. So why does Apple deserve a special position in your
> naughty list?

They have a particular spirit about them. And, FWIW, I felt that spirit
even back to the days of the Macintosh. IBM and Microsoft have done bad
things, but there has been a different spirit about them. That changed
with Microsoft in 2007/2008, by the way, with the announcement of Azure.
I don't know what exactly changed in them, but something changed that
very day, and it has persisted since.

It's okay if you don't understand or believe me. I can only point you
to what I know. Because I discern that spirit, I avoid it. And there
is outward evidence of that spirit at work in the way they operate:

Apple has a "I'm proud to be gay" homosexual CEO, and is hoarding billions
while people world-wide starve, and people in this nation go homeless.

Microsoft has been modifying their post-Win 7 operating systems to spy on
every user, even making it quite difficult to turn off that spying. And
on my wife's computer, she rebooted recently and it installed an update
for nearly two hours without asking her. Had she had some need to actually
use her computer in that time, she would've been harmed by that policy.

I don't expect you to understand, believe, or place any value on any of
these discernments of mine, David. I expect you will think it's all some
form of hogwash, some mental issue I possess, etc. And I will not reply
to your response to this post because of it. But, there it is for you to
have knowledge of. And, there's always hope. Perhaps someday it will
make sense to you.

One other point: I've shared these things with many Christians I've
gotten close to over the years, and about 1 in 200 has been able to
discern the same things I am. They have all said the same things, that
there is some kind of spirit at work in people and companies that they
are able to discern, and they avoid it/them for the same reason.

Starbuck's is a good example. There's a reason why it's so popular, and
it's not for the $6 coffee. There is a spirit about that company, and
it is to be avoided for the same reason as Apple.

Keith Thompson

unread,
Aug 23, 2016, 11:28:32 AM8/23/16
to
David Brown <david...@hesbynett.no> writes:
[...]
> Most people are able to separate a person's opinions and comments from
> their work. RMS has some far-out opinions, but that does not mean that
> the FSF, the gnu developers, or gnu users are a den of iniquity by
> association.

Most people are able to separate things that are topical on comp.lang.c
from things that are not. David, you seem to lack that ability.

I ask you one more time, please stop feeding the troll.

Gareth Owen

unread,
Aug 23, 2016, 2:08:39 PM8/23/16
to
David Brown <david...@hesbynett.no> writes:

> But that does not provide a rational, consistent or logical reason to
> avoid a piece of software simply because Apple is involved with its
> development

Maybe its an Apple from the Tree of Knowledge?

supe...@casperkitty.com

unread,
Aug 23, 2016, 2:54:17 PM8/23/16
to
On Monday, August 22, 2016 at 3:39:43 PM UTC-5, Keith Thompson wrote:
> Not everything is about the vast conspiracy to produce deliberately
> useless compilers.

Compiler writers don't sit down and say "Hey, let's write a useless
compiler". They do, however, fail to recognize what has traditionally
made C useful for many applications.

If general-purpose compilers for platforms with some common feature
have for decades made that feature available to the programmer, and
programmers have benefited from being able to use it, code which makes
use of that feature shouldn't be considered "defective". Instead, the
feature should be viewed as part of what makes C useful on such
platforms. If it is necessary to deprecate support for a feature, it
should be done via proper deprecation process, not by a compiler writer
claiming it was never defined (ignoring years of precedent showing both
support and usage).


Gareth Owen

unread,
Aug 23, 2016, 3:07:36 PM8/23/16
to
supe...@casperkitty.com writes:

> On Monday, August 22, 2016 at 3:39:43 PM UTC-5, Keith Thompson wrote:
>> Not everything is about the vast conspiracy to produce deliberately
>> useless compilers.
>
> Compiler writers don't sit down and say "Hey, let's write a useless
> compiler". They do, however, fail to recognize what has traditionally
> made C useful for many applications.

Of the uncountable stupid things you've said, this may be the stupidest.

supe...@casperkitty.com

unread,
Aug 23, 2016, 5:31:12 PM8/23/16
to
On Tuesday, August 23, 2016 at 2:07:36 PM UTC-5, gwowen wrote:
> supercat writes:
> > Compiler writers don't sit down and say "Hey, let's write a useless
> > compiler". They do, however, fail to recognize what has traditionally
> > made C useful for many applications.
>
> Of the uncountable stupid things you've said, this may be the stupidest.

There are many applications (including 99.99999% of embedded programs)
for which C's usefulness is a result of widespread support for features
beyond what the Standard mandates. Do you disagree with that?

At least some compiler writers have taken the attitude that code which
relies upon such features has no real meaning, and works only by
"happenstance", no matter how many earlier compilers have interpreted it
consistently. Do you disagree with that?

Such dismissal may reasonably be interpreted as a refusal to recognize the
existence and usefulness of such features. Do you disagree with that?

With what, then, are you disagreeing?

Gareth Owen

unread,
Aug 24, 2016, 1:25:51 AM8/24/16
to
supe...@casperkitty.com writes:

> On Tuesday, August 23, 2016 at 2:07:36 PM UTC-5, gwowen wrote:
>> supercat writes:
>> > Compiler writers don't sit down and say "Hey, let's write a useless
>> > compiler". They do, however, fail to recognize what has traditionally
>> > made C useful for many applications.
>>
>> Of the uncountable stupid things you've said, this may be the stupidest.
>
> There are many applications (including 99.99999% of embedded programs)
> for which C's usefulness is a result of widespread support for features
> beyond what the Standard mandates. Do you disagree with that?

No.

> At least some compiler writers have taken the attitude that code which
> relies upon such features has no real meaning, and works only by
> "happenstance", no matter how many earlier compilers have interpreted it
> consistently. Do you disagree with that?

Yes. Completely.

Rosario19

unread,
Aug 24, 2016, 3:05:50 AM8/24/16
to
On Mon, 22 Aug 2016 08:21:24 -0700 (PDT), "Rick C. Hodgin" wrote:

>On Monday, August 22, 2016 at 11:09:18 AM UTC-4, David Brown wrote:
>> On 22/08/16 16:56, Chad wrote:
>> > On Friday, August 19, 2016 at 2:48:24 PM UTC-7, Ben Bacarisse wrote:
>> >> Ian Collins <ian-...@hotmail.com> writes:
>> >>
>> >>> On 08/20/16 09:22 AM, Rick C. Hodgin wrote:
>> >>>> I tried to do this today, and the compiler balked. Is there a syntax
>> >>>> where this idea is legal (line 06 and the [c = "hello"] part)?
>> >>>>
>> >>>> 01: void myfunc(int value, char* text)
>> >>>> 02: {
>> >>>> 03: printf("%d %s\n", value, text);
>> >>>> 04: }
>> >>>> 05:
>> >>>> 06: #define abc(a, b, c = "hello") a(b, c)
>> >>>> 07:
>> >>>> 08: abc(myfunc, 2);
>> >>>
>> >>> If you want C++, you know where to find it!
>> >>
>> >> The irony is that he uses C++ despite posting in c.l.c.
>> >
>> > How did you know that homegirl was using C++?
>>
>> I am not sure what you mean with the "homegirl" reference - perhaps it
>> is some American slang that I don't get.
>
>Yep. It's Chad's derogatory name calling directed at me.
>
>> But we know that Rick programs basically in C, but using a C++ compiler,
>> because he has told us. He believes that full C++ is too complicated a
>> language, but takes advantage of a few C++ features in his basically C
>> programming.
>
>I think it's both too complex and an incredibly messy syntax, though I
>think there are some truly brilliant aspects of its extension of C.
>
>> And his (currently) preferred development environment is
>> MSVC, which has very good C++ support but quite poor C support (maybe
>> that's different now that they are using clang).
>
>Is that true that Microsoft is using clang now? Does that mean that clang
>is an optional compiler? Or that Microsoft is using clang as part of its
>VC++ compilers (as your reply seems to indicate)?
>
>If so, despite its many improvements I shall abandon all new aspects of
>Visual Studio 2015 and later, as I want nothing to do with Apple. Clang's
>logo is a fire breathing dragon, for crying out loud:
>
> http://llvm.org/Logo.html
> https://en.wikipedia.org/wiki/Wyvern
>
>All of Apple's products are to be avoided. The only reason I've used
>Microsoft's products are because I used them before I was saved, and God
>received me in my stead, and there are several cases where I can remember
>in my pre-salvation days the hand of some (at the time I thought) force
>at work in my life, guiding me, including my original introduction into
>Visual Studio 98, and later 2002 and 2003 (before I was saved in 2004).
>
>It's also the fundamental reason why I'm writing my own compiler, even
>my own complete hardware and software stack, so that I have my own tools
>upon which to build everything, tools founded upon an offering unto the
>Lord Jesus Christ.
>
>[My original attempt at posting failed. I apologize if this posts twice.]
>
>Best regards,
>Rick C. Hodgin

nobody can be sure "to be saved"

mark.b...@gmail.com

unread,
Aug 24, 2016, 3:49:56 AM8/24/16
to
On Wednesday, 24 August 2016 08:05:50 UTC+1, Rosario19 wrote:
...
> nobody can be sure "to be saved"

I can. My processor backs up to the cloud every

<reset>

What were we talking about?

David Brown

unread,
Aug 24, 2016, 4:34:11 AM8/24/16
to
On 23/08/16 23:30, supe...@casperkitty.com wrote:
> On Tuesday, August 23, 2016 at 2:07:36 PM UTC-5, gwowen wrote:
>> supercat writes:
>>> Compiler writers don't sit down and say "Hey, let's write a useless
>>> compiler". They do, however, fail to recognize what has traditionally
>>> made C useful for many applications.
>>
>> Of the uncountable stupid things you've said, this may be the stupidest.
>
> There are many applications (including 99.99999% of embedded programs)
> for which C's usefulness is a result of widespread support for features
> beyond what the Standard mandates. Do you disagree with that?

Most of the code in most embedded programs that are written in C can be
written using only standard-mandated features. And most of the rest can
all be done using implementation-dependent behaviour, as allowed by the
standards.

That is not to say that most embedded programs are written in a way that
does not depend on undefined or unspecified behaviour acting in a
particular way. And sometimes code can be smaller or faster while
relying on what one might call "undocumented implementation-dependent
behaviour" from compilers.

People who take embedded programming seriously - like other serious
programmers - strive to avoid undefined behaviour in their code. But
they are often much happier to rely on implementation-dependent
behaviour than "mainstream" C programmers, because they often know
exactly which compiler and target processor their code will run on.

But since serious embedded programmers know that they are fallible, and
they know that compiler writers are fallible, they will write code that
/should/ be independent of details of the compiler, flags, etc., and
then lock down exact versions of the tools and flags just in case they
have accidentally relied on "undocumented implementation-defined behaviour".

>
> At least some compiler writers have taken the attitude that code which
> relies upon such features has no real meaning, and works only by
> "happenstance", no matter how many earlier compilers have interpreted it
> consistently. Do you disagree with that?

I disagree with that. Certainly compiler writers will not bother unduly
with source code that has no real meaning - but if many compilers have
treated code in one way before, then they will take that into account.
But they will not artificially limit the efficiency of code written by
people who understand C programming, in order to placate those who don't
understand.

Let us take an example, and see how gcc handles it. In C, signed
integer overflow is undefined behaviour. But in the underlying
processor, in most cases, signed arithmetic overflow wraps as two's
complement arithmetic in a defined manner. Some people may have written
code that relies on this hardware behaviour - their code perhaps worked
on some compilers during testing. Other people may have written code
which can result in more efficient object code if the compiler assumes
integer overflow does not occur.

Should the compiler limit the efficiency of the good programmer's code,
just because the bad programmer does not understand the way C works
here? On the other hand (the one you favour), should the compiler break
code that worked fine previously, simply in order to slightly speed up
someone else's code?

gcc lets you answer "no" to both points - it gives /you/ the choice.
The compiler can optimise on the assumption that signed overflow will
not occur - but /only/ if the optimisation option is enabled (with
"-fstrict-overflow", or -O2). You can keep the old code working by
limiting optimisation to -O1 or -O0 (with -O0 being the default), or by
explicitly disabling that optimisation with "-fno-strict-overflow".

You can also tell gcc that you want integer overflow to work as two's
complement wraparound in all circumstances, and that the compiler can
optimise using that knowledge, with the "-fwrapv" option.

What if you are worried that you might accidentally have signed
overflow? gcc supports "-ftrapv" on some targets to catch overflows at
run-time. The newer "-fsanitize=signed-integer-overflow" also makes
checks, and is probably better for future use.

Then there are compile-time warnings, the "-Wstrict-overflow" flag that
will warn when strict overflow optimisations affect code. It is
configurable to different levels. Warnings on overflows of constant
calculations need to be explicitly disabled if you don't want them.
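
To make this concrete, here is a small illustrative example (the function
name is mine, purely for illustration) of the kind of code these options
affect:

int next_overflows(int x)
{
    /* An "after the fact" overflow check. Signed overflow is undefined
       behaviour, so with -fstrict-overflow (implied by -O2) the compiler
       may assume x + 1 > x always holds and fold this to "return 0".
       With -fwrapv the addition wraps as two's complement, and the
       function returns 1 when x == INT_MAX. */
    return x + 1 < x;
}

Whether the -O2 result is the compiler's fault or the programmer's is
precisely the disagreement here.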


gcc is the compiler that you apparently consider most "evil" regarding
optimisations, with clang as a close competitor (clang supports the same
options here, I believe). Yet this shows just how much effort the gcc
folks put in to helping people work with legacy, incorrect code that
happened to work before - or people who would rather program in a
language slightly different from C in which integer overflow always
wraps. Other cases such as type-based alias analysis are similar, with
a similar set of flags and options.

And note that in the embedded world, far and away the biggest single
compiler used is gcc. (I don't mean most embedded programs are compiled
with gcc - but more are compiled with gcc than with any other compiler.)
It turns out that in real life, people seem quite capable of using the
"evil" gcc compiler to make working embedded systems. And the gcc
developers are perfectly aware that this is a major market for their
compiler, especially for C (as distinct from C++, Go, or other languages
supported by gcc).

>
> Such dismissal may reasonably be interpreted as a refusal to recognize the
> existence and usefulness of such features. Do you disagree with that?
>

See above.

> With what, then, are you disagreeing?
>

In a nutshell, I disagree with your ignorant paranoia and conspiracy
theories.

You have made good points in the past regarding surprising or hidden
undefined behaviour in innocent-looking code. But your ideas about
compiler optimisations, and the motivations and priorities of compiler
developers, are completely and utterly wrong.


Rick C. Hodgin

unread,
Aug 24, 2016, 8:41:05 AM8/24/16
to
On Wednesday, August 24, 2016 at 3:05:50 AM UTC-4, Rosario19 wrote:
> nobody can be sure "to be saved"

We can. The born again nature gives us the eyes to see the things our
flesh cannot. It's like a blind person saying that we cannot know how
to cross an unknown room, for you must feel your way to get across, and
you cannot know with certainty the way. But to a person with eyes, a
single glance and the safe path across a room is known.

The born again nature gives people the assurance their flesh cannot
possess. It comes to all people who seek the truth, and ask forgiveness
of their sin from Jesus Christ:

http://biblehub.com/kjv/hebrews/11-1.htm
"Now faith is the substance of things hoped for, the evidence of
things not seen."

Faith gives substance to things hoped for, and evidence of things not
seen (by our eyes).

http://biblehub.com/kjv/2_corinthians/4-18.htm
"While we look not at the things which are seen, but at the things
which are not seen: for the things which are seen are temporal; but
the things which are not seen are eternal."

Our flesh-based eyes and flesh-based mind deceive us, for they only see
the things of this world. But the spirit of God living within us gives
us sight and insight unto those things which are eternal, are of God, and
are alive (truth, love, and charity in action).

For all who seek the truth, they will come to Jesus Christ, ask to be
forgiven of their sin, and they will be forgiven, and in that selfsame
instant they will be reborn of the spirit, alive again in eternity,
passing from judgment into life, no longer under condemnation from sin,
but alive forevermore, able to know truth, know the things of God, and
to bear witness to them here in this world as teachers, being servants
and ambassadors of God.

It takes a great deal of effort. And everyone who is saved and pursues
God in this way will tell you that we (as believers) still fail Him almost
continually. Yet, by His Grace we persist, and by His Grace we continue
on. We are unwavering and undaunted by naysayers, but our prayers go out
to all who do not believe that they will come to seek the truth and come
to faith in Jesus Christ, that they too might be saved, and might become
even mightier servants of God than we could ever hope to be.

Our goals are in others outpacing us through zealousness and a true inner
love for God manifested into action in this world, while at the same time
we pray to increase in being zealous in all of our efforts, honing and
forging our way unto Him through a purposefully constructed work over time.

It is a glorious thing to serve the Lord of the universe. And I pray each
of you come to know this ... and sooner rather than later.

Richard Damon

unread,
Aug 24, 2016, 8:50:14 AM8/24/16
to
On 8/24/16 4:34 AM, David Brown wrote:
> On 23/08/16 23:30, supe...@casperkitty.com wrote:
>> On Tuesday, August 23, 2016 at 2:07:36 PM UTC-5, gwowen wrote:
>>> supercat writes:
>>>> Compiler writers don't sit down and say "Hey, let's write a useless
>>>> compiler". They do, however, fail to recognize what has traditionally
>>>> made C useful for many applications.
>>>
>>> Of the uncountable stupid things you've said, this may be the stupidest.
>>
>> There are many applications (including 99.99999% of embedded programs)
>> for which C's usefulness is a result of widespread support for features
>> beyond what the Standard mandates. Do you disagree with that?
>
> Most of the code in most embedded programs that are written in C can be
> written using only standard-mandated features. And most of the rest can
> all be done using implementation-dependent behaviour, as allowed by the
> standards.
>
> That is not to say that most embedded programs are written in a way that
> does not depend on undefined or unspecified behaviour acting in a
> particular way. And sometimes code can be smaller or faster while
> relying on what one might call "undocumented implementation-dependent
> behaviour" from compilers.
>

In my experience, it is impossible to write a useful embedded program
(maybe you can even drop the embedded) without using something beyond
'standard mandated'. In the embedded case, you virtually always are
going to need to talk to some hardware, which will require you to
normally include a header outside the list mandated by the standard, or
write some statements yourself that either use an extension or invoke
the technical undefined behavior of casting an integer value (not gotten
from casting a pointer) to a pointer and referencing it. These are
normally well documented implementation defined behaviors.
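
For example, a typical memory-mapped register access looks something like
this (the register name and address below are made up for illustration):

#include <stdint.h>

/* Hypothetical peripheral status register at a fixed address. The
   integer-to-pointer cast is the construct described above; what it
   means is implementation defined, and toolchains for such hardware
   document it to do the obvious thing. */
#define UART_STATUS (*(volatile uint32_t *)0x40001000u)

int uart_tx_ready(void)
{
    return (UART_STATUS & 0x01u) != 0;
}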

supercat's 'problem' is that he wants behavior that was never (or maybe
rarely, back then) promised by implementations, but which happened to
work on simple implementations, to be required by the standard to
always work that way. The key issue he misses is that it is basically
impossible to write in the standard such a requirement without breaking
some of the fundamental philosophies of the standard (that a C
implementation that is reasonably efficient be possible on virtually all
processors that meet some very basic minimum requirements). What he
misses is that the writers of the standard EXPECT that most programs are
going to rely on some features beyond the minimum the standard requires,
using implementation defined behavior, but hopefully do so in a
deliberate manner where they can document what behavior they need.

He also misses that most of the systems he rails against DO provide the
promises he want IF he uses the right set of options.

He sometimes proposes a method of letting the program 'make requests'
for certain possible implementation defined behaviors (often of things
the standard for simplicity just leaves undefined). Normally his
proposal is just a bit of a snippet of a sample way to do this. Perhaps
if he really wants to be useful (or to understand why it hasn't been
done) he should take the effort to formally develop it to be closer to a
real proposal to the committee. Make a REAL listing of the vast majority
of behaviors for which having, documenting, and being able to
programmatically test support would be useful. Make it broad enough that
other programmers who may want different assumptions than the ones he wants
can find it useful.

The alternative is to define an auxiliary standard (in a similar manner
to POSIX) that adds the defined behavior that he wants, so that just
invoking the implementation in the mode documented to be
compliant to this extended standard gets what he wants. This leaves the
basic standard clean, and possible to use on the oddball machines as has
been the policy of the committee, and still lets people like him get
what they want. The fact that this hasn't already been done makes me
think that the existing implementations do well enough at this with the
right options that it isn't really that highly demanded as a formal (or
informal) standard.



David Brown

unread,
Aug 24, 2016, 9:37:52 AM8/24/16
to
Exactly - such behaviour is implementation-dependent behaviour, not
undefined behaviour. Sometimes the documentation is not too great, and
there is a grey area around "implementation-dependent behaviour that the
vendor forgot to document" and "undefined or unspecified behaviour that
works one way now, but may change in future versions of the compiler".

>
> supercat's 'problem' is that he want behavior that was never (or maybe
> rarely back then) promised by implementations, but which happened to
> have worked on simple implementations to be required by the standard to
> always work that way.

Yes.

> The key issue he misses is that it is basically
> impossible to write in the standard such a requirement without breaking
> some of the fundamental philosophies of the standard (that a C
> implementation that is reasonably efficient be possible on virtually all
> processors that meet some very basic minimum requirements). What he
> misses is that the writers of the standard EXPECT that most programs are
> going to rely on some features beyond the minimum the standard requires,
> using implementation defined behavior, but hopefully do so in a
> deliberate manner where they can document what behavior they need.

Yes.

>
> He also misses that most of the systems he rails against DO provide the
> promises he want IF he uses the right set of options.
>

Yes.

> He sometimes proposes a method of letting the program 'make requests'
> for certain possible implementation defined behaviors (often of things
> the standard for simplicity just leaves undefined). Normally his
> proposal is just a bit of a snippet of a sample way to do this. Perhaps
> if he really wants to be useful (or to understand why it hasn't been
> done) he should take the effort to formally develop it to be closer to a
> real proposal to the committee. Make a REAL listing of the vast majority
> of behaviors that having, documenting, and being able to
> programmatically test would be useful. Make it broad enough that so that
> other programmers who may want different assumptions than what he wants
> can find it useful.
>
> The alternative, is to define an auxiliary standard (in sort of a
> similar manner as POSIX) that adds the defined behavior that he wants so
> that just by invoking the implement in the mode documented to be
> compliant to this extended standard gets what he wants. This leaves the
> basic standard clean, and possible to use on the oddball machines as has
> been the policy of the committee, and still lets people like him get
> what they want. The fact that this hasn't already been done makes me
> think that the existing implementations do well enough at this with the
> right options that it isn't really that highly demanded as a formal (or
> informal) standard.
>

Agreed.

Sometimes I think that such an auxiliary standard could be useful, and
define "normal C systems". It could say things like "CHAR_BIT is always
either 8, 16 or 32," "signed ints are always two's complement," and
"integer types are always either little-endian or big-endian, with no
mixtures, padding bits, or other awkwardness". Exactly what goes into
the standard is debatable - people will have different opinions. But it
would mean that at least some types of code could be more easily written
in a way that is portable across a range of systems - basically it's
just a minimum set of implementation-dependent behaviour.

But as you say, no such standard exists. That is likely to be because
people do well enough with the normal C standard, the posix standard,
the defacto "standard" of gcc behaviour (which is followed to a fair
degree by clang, icc, and several other compilers), and the
implementation-dependent behaviour of the target they are actually using.
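
One can at least pin such assumptions down in code at compile time - a
rough sketch, assuming a C11 compiler:

#include <limits.h>
#include <assert.h>

/* Fail the build if this target is not a "normal C system" in the
   sense sketched above. */
static_assert(CHAR_BIT == 8, "expected 8-bit chars");
static_assert((-1 & 3) == 3, "expected two's complement integers");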


supe...@casperkitty.com

unread,
Aug 24, 2016, 2:58:25 PM8/24/16
to
On Wednesday, August 24, 2016 at 8:37:52 AM UTC-5, David Brown wrote:
> Exactly - such behaviour is implementation-dependent behaviour, not
> undefined behaviour. Sometimes the documentation is not too great, and
> there is a grey area around "implementation-dependent behaviour that the
> vendor forgot to document" and "undefined or unspecified behaviour that
> works one way now, but may change in future versions of the compiler".

The vast majority of documentation for products of all kinds relies upon
the reader to make assumptions beyond what is stated; documentation that
says something behaves exactly as any reasonable person would expect it
to behave is often viewed as redundant.

When reading any kind of old documentation, it is thus important to keep
in mind the expected mindset of the target audience. Do you think that
many compiler writers or programmers in 1988 would have imagined that
a two's-complement implementation on silent-wraparound hardware would
have any reason not to interpret "uint1x = usmall1 * usmall2;" as being
equivalent to "uint1x = (unsigned)usmall1 * usmall2;", 100% of the time?
If there was no reason to imagine that a compiler might do such a thing,
what would be the benefit of promising that it wouldn't?

Also, what terminology do you think that the Standard would use to describe
situations in which 90%+ of implementations behaved in the same consistent
fashion, but mandating that the remaining platforms document a consistent
behavior would impose a significant runtime cost? Do you believe that the
authors of the Standard intended that it be interpreted as doing anything
other than maintaining the status quo in such cases?

It would have been helpful if the Standard had indicated that compilers
that offer features and guarantees beyond the Standard should document them
if they expect programmers to use them, and had included a list of common
guarantees for which implementations should document such support when
applicable. The Standard, however, implies that such features are not to
be considered "extensions", since the it lists extensions which were
supported by a minority of implementations, but fails to list behavioral
guarantees which, according to the authors, were applicable to a majority.
If the authors of the Standard wanted to improve the state of documentation
they should have set a better example.

> > The alternative, is to define an auxiliary standard (in sort of a
> > similar manner as POSIX) that adds the defined behavior that he wants so
> > that just by invoking the implement in the mode documented to be
> > compliant to this extended standard gets what he wants. This leaves the
> > basic standard clean, and possible to use on the oddball machines as has
> > been the policy of the committee, and still lets people like him get
> > what they want. The fact that this hasn't already been done makes me
> > think that the existing implementations do well enough at this with the
> > right options that it isn't really that highly demanded as a formal (or
> > informal) standard.
>
> Agreed.

Embedded compilers have to date generally refrained from the kinds of
aggressive optimizations that seem to be becoming fashionable elsewhere.
Unfortunately, their documentation tends to be rather sloppy with regard
to such issues. What should one assume about future versions of an
embedded compiler whose documentation says nothing about type-based
aliasing because it ensures all variables' states are kept in memory
any time a non-restrict-qualified pointer is accessed?

> Sometimes I think that such an auxiliary standard could be useful, and
> define "normal C systems". It could say things like "CHAR_BIT is always
> either 8, 16 or 32," "signed ints are always two's complement," and
> "integer types are always either little-endian or big-endian, with no
> mixtures, padding bits, or other awkwardness". Exactly what goes into
> the standard is debatable - people will have different opinions. But it
> would mean that at least some types of code could be more easily written
> in a way that is portable across a range of systems - basically it's
> just a minimum set of implementation-dependent behaviour.

POSIX defines some such issues, but also mandates a lot of other things
which would not be meaningful on many free-standing implementations.

> But as you say, no such standard exists. That is likely to be because
> people do well enough with the normal C standard, the posix standard,
> the defacto "standard" of gcc behaviour (which is followed to a fair
> degree by clang, icc, and several other compilers), and the
> implementation-dependent behaviour of the target they are actually using.

Is there any logical reason why "implementation-dependent" behavior
should be limited to actions for which every platform is capable of
defining a fully-predictable behavior? Is a programmer any more entitled
to expect that a silent-wraparound two's-complement implementation will
regard "(unsigned)(int)x" as equivalent to "(unsigned)x" than to expect
that "-1<<x" will be equivalent to "-(1<<x)" for values of x from 0 to 14?

David Brown

unread,
Aug 24, 2016, 7:28:07 PM8/24/16
to
On 24/08/16 20:58, supe...@casperkitty.com wrote:

> Embedded compilers have to date generally refrained from the kinds of
> aggressive optimizations that seem to be becoming fashionable elsewhere.
> Unfortunately, their documentation tends to be rather sloppy with regard
> to such issues. What should one assume about future versions of an
> embedded compiler whose documentation says nothing about type-based
> aliasing because it ensures all variables' states are kept in memory
> any time a non-restrict-qualified pointer is accessed?
>

I don't know what embedded compilers /you/ have been using, but I have
been using ones that do "aggressive optimisations" such as type-based
alias analysis for 20+ years. I have used a few that have very little
in the way of optimisation (effectively treating all variables as
"volatile", for example). But many embedded compilers are far more
sophisticated and advanced than you seem to think.

Documentation quality does vary, though - from very good to very poor,
with little regard for price of the product.

supe...@casperkitty.com

unread,
Aug 25, 2016, 1:28:27 AM8/25/16
to
On Wednesday, August 24, 2016 at 6:28:07 PM UTC-5, David Brown wrote:
> I don't know what embedded compilers /you/ have been using, but I have
> been using ones that do "aggressive optimisations" such as type-based
> alias analysis for 20+ years.

Got some names? How aggressively did they do type-based analysis, and to
what extent did they try to recognize indications that aliasing was likely?
Given a construct like:

void set_lower_half_to_0x8000(uint32_t *n)
{
((uint16_t*)n)[IS_BIG_ENDIAN] = 0x8000;
}

would the compilers of 20 years ago assume that the method would only be
called with the addresses of uint16_t objects, or would they recognize
that the code is modifying something that has been identified with a
pointer of type uint32_t, and thus is likely modifying something of type
uint32_t?

How do the compilers you use regard the relative sequencing of volatile-
qualified and non-volatile-qualified operations? If volatile accesses
aren't treated as memory barriers, how would you handle something like
copying arbitrary-type data to a buffer and then starting a DMA transfer?
There is no standard library version of "memcpy" which uses volatile-
qualified pointers, and copying data one byte at a time is apt to be
sufficiently slow as to negate much of the speed benefit from using DMA
in the first place.

Malcolm McLean

unread,
Aug 25, 2016, 2:38:28 AM8/25/16
to
On Thursday, August 25, 2016 at 6:28:27 AM UTC+1, supe...@casperkitty.com wrote:
>
> How do the compilers you use regard the relative sequencing of volatile-
> qualified and non-volatile-qualified operations? If volatile accesses
> aren't treated as memory barriers, how would you handle something like
> copying arbitrary-type data to a buffer and then starting a DMA transfer?
> There is no standard library version of "memcpy" which uses volatile-
> qualified pointers, and copying data one byte at a time is apt to be
> sufficiently slow as to negate much of the speed benefit from using DMA
> in the first place.
>
There's really no answer to that one.
"volatile" means that the object can change at any time, except when
the program is actually in the process of writing to it. So if
doubles are implemented in software on an 8 bit machine, the
object has to be locked for 8 bytes of read / write, then unlocked.

But what does it mean to declare a buffer volatile? You can't just
read or write from it, you're going to need some, probably trivial
index operation. Really you need to lock the buffer, perform your
operation, then unlock it, just as with the volatile double on an
8 bit machine, assuming that the buffer needs to be in a coherent
state.
Once you've locked the buffer, it is of course no longer volatile.

David Brown

unread,
Aug 25, 2016, 4:32:07 AM8/25/16
to
On 25/08/16 07:28, supe...@casperkitty.com wrote:
> On Wednesday, August 24, 2016 at 6:28:07 PM UTC-5, David Brown wrote:
>> I don't know what embedded compilers /you/ have been using, but I have
>> been using ones that do "aggressive optimisations" such as type-based
>> alias analysis for 20+ years.
>
> Got some names? How aggressively did they do type-based analysis, and to
> what extent did they try to recognize indications that aliasing was likely?

Diab Data was one I remember from that age (it's now owned by Wind
River, which is in turn owned by Intel).

> Given a construct like:
>
> void set_lower_half_to_0x8000(uint32_t *n)
> {
> ((uint16_t*)n)[IS_BIG_ENDIAN] = 0x8000;
> }

I would not write such code - not even 20 years ago - so I can't say for
sure.

>
> would the compilers of 20 years ago assume that the method would only be
> called with the addresses of uint16_t objects, or would they recognize
> that the code is modifying something that has been identified with a
> pointer of type uint32_t, and thus is likely modifying something of type
> uint32_t?
>
> How do the compilers you use regard the relative sequencing of volatile-
> qualified and non-volatile-qualified operations? If volatile accesses
> aren't treated as memory barriers, how would you handle something like
> copying arbitrary-type data to a buffer and then starting a DMA transfer?

Volatile access are not, and never have been, memory barriers. If you
think compilers sequence non-volatile and volatile operations, then you
are simply wrong. You are not alone in this - it's a mistake many
people have made - but you are wrong. And if you think older compilers
guaranteed something different, then you are /very/ wrong.

How do you make sure your buffer transfers are sequenced correctly with
respect to the volatile transfers? You have three options:

1. Use volatile accesses when doing the buffer transfers. Since you
will be copying the data anyway, you are not going to lose efficiency.
Note that "what constitutes a volatile access is implementation defined"
- but most (probably all) compilers define simple and direct volatile
reads and writes in the obvious way.

2. Use a memory barrier. This has traditionally been
implementation-dependent, such as "asm volatile("" ::: "memory")" for
gcc. With C11, "atomic_thread_fence(memory_order_seq_cst)" should do
the job.

3. Use a more limited compiler, or artificially limit your compiler by
using low optimisation options or other flags.
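
A rough sketch combining options 1 and 2 - the buffer, register name and
address here are invented purely for the example:

#include <stdint.h>
#include <string.h>

extern uint8_t dma_buffer[256];

/* Hypothetical register that starts the transfer when written. */
#define DMA_START (*(volatile uint32_t *)0x40002000u)

void send_block(const uint8_t *src, size_t len)
{
    memcpy(dma_buffer, src, len);

    /* Compiler-level barrier (gcc syntax) so the memcpy cannot be
       reordered past the volatile write that starts the DMA. */
    asm volatile("" ::: "memory");

    DMA_START = (uint32_t)len;
}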

> There is no standard library version of "memcpy" which uses volatile-
> qualified pointers, and copying data one byte at a time is apt to be
> sufficiently slow as to negate much of the speed benefit from using DMA
> in the first place.
>

memcpy is not volatile - that is certainly true. But often it does not
copy bytes at a time - a good compiler will inline faster copying loops
if it can see the size of the copy and it knows the alignments of the
source and destination. Library versions will also often try to copy in
larger sizes than single bytes.

David Brown

unread,
Aug 25, 2016, 4:44:57 AM8/25/16
to
On 25/08/16 08:38, Malcolm McLean wrote:
> On Thursday, August 25, 2016 at 6:28:27 AM UTC+1, supe...@casperkitty.com wrote:
>>
>> How do the compilers you use regard the relative sequencing of volatile-
>> qualified and non-volatile-qualified operations? If volatile accesses
>> aren't treated as memory barriers, how would you handle something like
>> copying arbitrary-type data to a buffer and then starting a DMA transfer?
>> There is no standard library version of "memcpy" which uses volatile-
>> qualified pointers, and copying data one byte at a time is apt to be
>> sufficiently slow as to negate much of the speed benefit from using DMA
>> in the first place.
>>
> There's really no answer to that one.
> "volatile" means that the object can change at any time, except when
> the program is actually in the process of writing to it. So if
> doubles are implemented in software on an 8 bit machine, the
> object has to be locked for 8 bytes of read / write, then unlocked.

You are confusing volatile accesses with atomic accesses. They are
significantly different things.

Volatile accesses are used when you want to read or write something in
memory in a controlled manner - no more and no fewer accesses than the
source code says, and in the given order with respect to other volatile
accesses (and other observable behaviour).

Atomic accesses are used when code (especially other threads or signal
handlers) must see a consistent view of an object - either the complete
old value, or the complete new value, but never a mixture of the two.

Often the concepts are used together, and you are not alone in confusing
them - but they are different, and they can be used independently.

>
> But what does it mean to declare a buffer volatile? You can't just
> read or write from it, you're going to need some, probably trivial
> index operation.

Yes, you just read or write to it.

volatile uint8_t buffer[128];

void copyToBuffer(const uint8_t *pSrc, int count) {
    int n = 0;
    while (count--) {
        buffer[n++] = *pSrc++;
    }
}

Problem solved.


You can also use a buffer that is not declared volatile, and make the
accesses volatile:

uint8_t buffer[128];

void copyToBuffer(const uint8_t *pSrc, int count) {
    int n = 0;
    while (count--) {
        *((volatile uint8_t*) (&(buffer[n++]))) = *pSrc++;
    }
}

(I know there are unnecessary extra brackets there - it's often best to
do this sort of thing with macros such as Linux's ACCESS_ONCE, or
alternatively to use a "volatile uint8_t *" pointer rather than an
explicit cast.)


> Really you need to lock the buffer, perform your
> operation, then unlock it, just as with the volatile double on an
> 8 bit machine, assuming that the buffer needs to be in a coherent
> state.

No, really you don't need to lock the buffer. You just need to
understand what you are doing.

> Once you've locked the buffer, it is of course no longer volatile.
>

As long as your lock methods have appropriate memory barriers, then
locking the buffer will work. But it is completely unnecessary in most
cases.


supe...@casperkitty.com

unread,
Aug 25, 2016, 10:29:43 AM8/25/16
to
On Thursday, August 25, 2016 at 1:38:28 AM UTC-5, Malcolm McLean wrote:
> Once you've locked the buffer, it is of course no longer volatile.

Since many freestanding implementations of C generally won't support the
C11 threading extensions (since that would require that the implementation
be much heavier weight than one which lets programmers operate on bare
metal), by what standard means would you suggest that programmers tell the
compiler that something is locked?

One of the things that made C useful for embedded systems was that a
programmer who was familiar with a hardware platform could pretty well
guess what needed to be written in C to access it. If a platform has
a uniform linear addressing space and one wants to write a 32-bit word
0xCAFEBABE to address 0xABCD1234, a non-oddball implementation for that
platform will allow that to be accomplished via

* ((uint32_t volatile*)0xABCD1234) = 0xCAFEBABE;

If a compiler treats volatile variables as memory barriers, then code
which ensures that compiler-produced accesses to a variable will be
separated from other accesses by volatile I/O can work correctly without
having to use any further language extensions.

Rick C. Hodgin

unread,
Aug 25, 2016, 10:32:49 AM8/25/16
to
On Thursday, August 25, 2016 at 10:29:43 AM UTC-4, supe...@casperkitty.com
> One of the things that made C useful for embedded systems was that a
> programmer who was familiar with a hardware platform could pretty well
> guess what needed to be written in C to access it. If a platform has
> a uniform linear addressing space and one wants to write a 32-bit word
> 0xCAFEBABE to address 0xABCD1234, a non-oddball implementation for that
> platform will allow that to be accomplished via
>
> * ((uint32_t volatile*)0xABCD1234) = 0xCAFEBABE;
>
> If a compiler treats volatile variables as memory barriers, then code
> which ensures that compiler-produced accesses to a variable will be
> separated from other accesses by volatile I/O can work correctly without
> having to use any further language extensions.

Does the C Standard allow for volatile casts on non-volatile variables?
And to explicitly cast hard-coded values like that as pointers?

Malcolm McLean

unread,
Aug 25, 2016, 11:28:20 AM8/25/16
to
On Thursday, August 25, 2016 at 3:29:43 PM UTC+1, supe...@casperkitty.com wrote:
> On Thursday, August 25, 2016 at 1:38:28 AM UTC-5, Malcolm McLean wrote:
> > Once you've locked the buffer, it is of course no longer volatile.
>
> Since many freestanding implementations of C generally won't support the
> C11 threading extensions (since that would require that the implementation
> be much heavier weight than one which lets programmers operate on bare
> metal), by what standard means would you suggest that programmers tell the
> compiler that something is locked?
>
A flag will do.

The difficulty isn't in setting a flag, assuming that writes of a single byte are atomic
and can't be interrupted. It's what to do if the resource is locked. Sometimes you
can yield until it comes free, sometimes you can skip the operation and hope to
pick it up on the next pass, sometimes you can use double buffering.


supe...@casperkitty.com

unread,
Aug 25, 2016, 1:01:30 PM8/25/16
to
On Thursday, August 25, 2016 at 3:32:07 AM UTC-5, David Brown wrote:
> Diab Data was one I remember from that age (it's now owned by Wind
> River, which is in turn owned by Intel).

Never heard of it, but I'd be interested to know more. What was the
target platform?

> > Given a construct like:
> >
> > void set_lower_half_to_0x8000(uint32_t *n)
> > {
> > ((uint16_t*)n)[IS_BIG_ENDIAN] = 0x8000;
> > }
>
> I would not write such code - not even 20 years ago - so I can't say for
> sure.

The particular example was contrived to ask what a compiler would do with
it, but the need to process bunches of bits in various size groups comes
up a lot in some kinds of code (e.g. bitmap graphics). If you have e.g.
a bitmap screen upon which it's necessary to draw 16-bit-wide tiles on
16-bit boundaries and 32-bit-wide tiles on 32-bit boundaries, how would
you do that efficiently in a fashion consistent with C's rules?

IMHO, the Standard really needs to add "compiler" memory barriers; any
implementation should be able to uphold the guarantees they would imply,
and almost any implementation would have to be able to in order to uphold
the semantics of something like:

unsigned char barrier_dummy;
unsigned char volatile * volatile barrier_dummy_ptr = &barrier_dummy;
void volatile * volatile barrier_dummy_ptr2;
void memory_barrier(void *x)
{
    barrier_dummy_ptr2 = x;
    *barrier_dummy_ptr += 1;
}

Unless an implementation could know everything about the volatile
variables, it would have to presume that storing any address to
barrier_dummy_ptr2 could cause barrier_dummy_ptr to hold any address
which differed by a fixed displacement; if that occurred, the
aliasing rules would require that a compiler recognize the aliasing
between *barrier_dummy_ptr and the object whose address appeared
there. A compiler could in theory generate code to test whether
barrier_dummy_ptr had changed and skip the memory barrier if it hadn't,
but it seems doubtful that such behavior would improve efficiency.

Of course, generating all the code for the above when a memory barrier
is required would be horrendously inefficient, but any implementation
which has any volatile locations outside its control would need to be
able to handle memory barriers at the compiler level (even if they
simply had no effect), so I see nothing to be gained by not having
a standard means by which code could forbid the compiler from moving
accesses to any lvalues associated with a pointer across a certain point
in the code.

> How do you make sure your buffer transfers are sequenced correctly with
> respect to the volatile transfers? You have three options:
>
> 1. Use volatile accesses when doing the buffer transfers. Since you
> will be copying the data anyway, you are not going to lose efficiency.
> Note that "what constitutes a volatile access is implementation defined"
> - but most (probably all) compilers define simple and direct volatile
> reads and writes in the obvious way.

If the buffer may contain data of arbitrary types, do you do the transfers
a byte at a time? If the source pointer will usually be word aligned, that
would seem to imply at least a 4x performance hit.

> 2. Use a memory barrier. This has traditionally been
> implementation-dependent, such as "asm volatile("" ::: "memory")" for
> gcc. With C11, "atomic_thread_fence(memory_order_seq_cst)" should do
> the job.

By my understanding, C11 can only offer that if it includes aspects of
threading which requires a higher level of OS integration than many
freestanding implementations would support.

> 3. Use a more limited compiler, or artificially limit your compiler by
> using low optimisation options or other flags.

The right approach would be to have standardized, targeted
ways of preventing breaking "optimizations". Otherwise, disabling such
optimizations with a sledge hammer will have some performance cost, but
depending upon the application that cost may be less than the cost of
foregoing efficient techniques that a platform would support but the
Standard doesn't.

> > There is no standard library version of "memcpy" which uses volatile-
> > qualified pointers, and copying data one byte at a time is apt to be
> > sufficiently slow as to negate much of the speed benefit from using DMA
> > in the first place.
> >
>
> memcpy is not volatile - that is certainly true. But often it does not
> copy bytes at a time - a good compiler will inline faster copying loops
> if it can see the size of the copy and it knows the alignments of the
> source and destination. Library versions will also often try to copy in
> larger sizes than single bytes.

Of course memcpy is optimized. The problem is that unless one wants to use
a compiler-specific extension to do something no compiler should have any
problem supporting, one can't use memcpy on buffers which are going to be
accessed asynchronously, even if a volatile action would separate the memcpy
from any time when external accesses could occur.

Chris M. Thomasson

unread,
Aug 25, 2016, 1:02:16 PM8/25/16
to
FWIW, "asm volatile("" ::: "memory")" is a compiler barrier, not a full
memory barrier like "atomic_thread_fence(memory_order_seq_cst)". I think
that a compiler barrier in C11 is "atomic_signal_fence".
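
Roughly, assuming <stdatomic.h> is available on the target:

#include <stdatomic.h>

void fences(void)
{
    /* Compiler-only barrier: forbids the compiler from reordering
       memory accesses across this point, but emits no hardware fence
       instruction. */
    atomic_signal_fence(memory_order_seq_cst);

    /* Full barrier: also orders the accesses with respect to other
       threads, typically emitting a hardware fence on SMP targets. */
    atomic_thread_fence(memory_order_seq_cst);
}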

supe...@casperkitty.com

unread,
Aug 25, 2016, 1:15:00 PM8/25/16
to
On Thursday, August 25, 2016 at 10:28:20 AM UTC-5, Malcolm McLean wrote:
> On Thursday, August 25, 2016 at 3:29:43 PM UTC+1, supercat wrote:
> > Since many freestanding implementations of C generally won't support the
> > C11 threading extensions (since that would require that the implementation
> > be much heavier weight than one which lets programmers operate on bare
> > metal), by what standard means would you suggest that programmers tell the
> > compiler that something is locked?
> >
> A flag will do.
>
> The difficult isn't in setting a flag, assuming that writes of a single byte are atomic
> and can't be interrupted. It's what to do if the resource is locked.

Do you mean a "flag", as in:

unsigned char data_source_1[64], data_source_2[64];
unsigned char data_buff[64];

volatile atomic_byte buffer_busy;

while(buffer_busy)
    ;
memcpy(data_buff, data_source_1, 64);
buffer_busy = 1; // Tell interrupt to send data & clear flag
while(buffer_busy)
    ;
memcpy(data_buff, data_source_2, 64);
buffer_busy = 1;

That's the pattern I'd like to use, but nothing in the Standard would
forbid a compiler from moving the memcpy operation before the "while()"
statement.
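
(A compiler barrier between the wait loop and the memcpy would express the
constraint - for example, where a freestanding implementation happens to
provide <stdatomic.h>, C11's signal fence could be used:

while(buffer_busy)
    ;
atomic_signal_fence(memory_order_seq_cst); /* memcpy may not be hoisted above the wait */
memcpy(data_buff, data_source_1, 64);
atomic_signal_fence(memory_order_seq_cst); /* ...nor sunk below the flag write */
buffer_busy = 1;

but that is exactly the sort of facility one cannot count on from a
bare-metal toolchain.)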

supe...@casperkitty.com

unread,
Aug 25, 2016, 1:27:04 PM8/25/16
to
If an implementation does not define uintptr_t or intptr_t, it may
specify that every pointer-to-integer cast will yield UB; even if an
implementation does so, however, it would still be required to accept
the syntax. IMHO, it would have been better to make such casts an
optional feature, since mandating that a compiler accept a syntax
without mandating that it have a useful behavior seems a bit silly.

I don't think an integer-to-pointer cast is allowed to ever invoke UB
(despite the fact that on some implementations, trapping on invalid
casts could be a useful feature) but if an implementation does not
define intptr_t or uintptr_t, there is no requirement that there exist
any integer values that would not produce trap representations.

Casting addresses to pointers and accessing them in the indicated fashion
is among the useful behaviors which should be part of a "commonplace C"
standard. Someone using a new compiler on a linear-address platform should
not have to consult the documentation to see how it defines integer-to-
pointer conversions in the absence of any particular reason to expect it to
do anything other than yield a pointer to the indicated address.
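
For example, on such a platform one would expect a sketch like the
following to just work; the address and the ready bit are hypothetical
stand-ins for whatever the hardware manual actually specifies:

#include <stdint.h>

#define STATUS_REG (*(volatile uint32_t *)0x40000010u)   /* hypothetical device register */

void wait_until_ready(void)
{
    while ((STATUS_REG & 0x01u) == 0)
        ;   /* spin until the (hypothetical) ready bit is set */
}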

supe...@casperkitty.com

unread,
Aug 25, 2016, 1:33:52 PM8/25/16
to
On Thursday, August 25, 2016 at 12:02:16 PM UTC-5, Chris M. Thomasson wrote:
> FWIW, "asm volatile("" ::: "memory")" is a compiler barrier, not a full
> memory barrier like "atomic_thread_fence(memory_order_seq_cst)". I think
> that a compiler barrier in C11 is "atomic_signal_fence".

Does the C11 standard define any means via which an implementation could
provide things like atomic_signal_fence without being required to provide
things like full memory barriers? In many freestanding implementations,
the programmer will know more than the compiler about the behavior of the
caching controller, and so user code may be able to extend compiler barriers
to full barriers even if the compiler can't, but if there isn't some form
of compiler barrier available user code won't be able to guarantee anything
except through crude and inefficient kludges.

Rick C. Hodgin

unread,
Aug 25, 2016, 2:11:48 PM8/25/16
to
On Thursday, August 25, 2016 at 1:27:04 PM UTC-4, supe...@casperkitty.com wrote:
> On Thursday, August 25, 2016 at 9:32:49 AM UTC-5, Rick C. Hodgin wrote:
> > On Thursday, August 25, 2016 at 10:29:43 AM UTC-4, supe...@casperkitty.com
> > > One of the things that made C useful for embedded systems was that a
> > > programmer who was familiar with a hardware platform could pretty well
> > > guess what needed to be written in C to access it. If a platform has
> > > a uniform linear addressing space and one wants to write a 32-bit word
> > > 0xCAFEBABE to address 0xABCD1234, a non-oddball implementation for that
> > > platform will allow that to be accomplished via
> > >
> > > * ((uint32_t volatile*)0xABCD1234) = 0xCAFEBABE;
> > >
> > > If a compiler treats volatile variables as memory barriers, then code
> > > which ensures that compiler-produced accesses to a variable will be
> > > separated from other accesses by volatile I/O can work correctly without
> > > having to use any further language extensions.
> >
> > Does the C Standard allow for volatile casts on non-volatile variables?
> > And to explicitly cast hard-coded values like that as pointers?
>
> If an implementation does not define uintptr_t or intptr_t, it may
> specify that every pointer-to-integer cast will yield UB; even if an
> implementation does so, however, it would still be required to accept
> the syntax. IMHO, it would have been better to make such casts an
> optional feature, since mandating that a compiler accept a syntax
> without mandating that it have a useful behavior seems a bit silly.

Very silly.

> I don't think an integer-to-pointer cast is allowed to ever invoke UB
> (despite the fact that on some implementations, trapping on invalid
> casts could be a useful feature) but if an implementation does not
> define intptr_t or uintptr_t, there is no requirement that there exist
> any integer values that would not produce trap representations.
>
> Casting addresses to pointers and accessing them in the indicated fashion
> is among the useful behaviors which should be part of a "commonplace C"
> standard. Someone using a new compiler on a linear-address platform should
> not have to consult the documentation to see how it defines integer-to-
> pointer conversions in the absence of any particular reason to expect it to
> do anything other than yield a pointer to the indicated address.

Agreed.

Can you cast an otherwise non-volatile thing to be volatile on an
instance use, or, the other way around, cast a volatile thing to be
non-volatile on an instance use?

I was not aware of that ability (if it's legal in the C Standard).

supe...@casperkitty.com

unread,
Aug 25, 2016, 3:03:51 PM8/25/16
to
On Thursday, August 25, 2016 at 1:11:48 PM UTC-5, Rick C. Hodgin wrote:
> Can you cast an otherwise non-volatile thing to be volatile on an
> instance use, or the other-way around cast a volatile thing to be
> non-volatile on an instance use?

Conversions between pointer types that are identical but for the presence
or absence of a "volatile" qualifier are fully defined. Use of a pointer
that is not volatile-qualified to access an object that is volatile qualified
invokes UB, regardless of whether the variable is ever accessed in any way
the compiler doesn't know about [likely to allow for the possibility that
a system might locate such variables in a region of memory that requires
special instructions to access, but a silly restriction on implementations
where volatile variables are allocated the same as any other]. It's not
clear what rules would apply to storage supplied by "malloc()", since there
are applications that need the ability to create buffers that can be
accessed outside the compiler's control.
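
To make the allowed direction concrete, a minimal sketch (names invented)
of giving a single access volatile semantics even though the object itself
is not volatile-qualified; the reverse direction -- accessing an object
defined volatile through a pointer that is not volatile-qualified -- is the
one described above as UB:

#include <stdint.h>

static uint32_t shared_word;   /* ordinary, non-volatile object */

void publish(uint32_t v)
{
    /* only this particular store is performed through a volatile lvalue */
    *(volatile uint32_t *)&shared_word = v;
}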

Chris M. Thomasson

unread,
Aug 25, 2016, 5:13:08 PM8/25/16
to
On 8/25/2016 10:33 AM, supe...@casperkitty.com wrote:
> On Thursday, August 25, 2016 at 12:02:16 PM UTC-5, Chris M. Thomasson wrote:
>> FWIW, "asm volatile("" ::: "memory")" is a compiler barrier, not a full
>> memory barrier like "atomic_thread_fence(memory_order_seq_cst)". I think
>> that a compiler barrier in C11 is "atomic_signal_fence".
>
> Does the C11 standard define any means via which an implementation could
> provide things like atomic_signal_fence without being required to provide
> things like full memory barriers?

I do not know.

> In many freestanding implementations,
> the programmer will know more than the compiler about the behavior of the
> caching controller, and so user code may be able to extend compiler barriers
> to full barriers even if the compiler can't, but if there isn't some form
> of compiler barrier available user code won't be able to guarantee anything
> except through crude and inefficient kludges.

atomic_signal_fence is the compiler barrier you are looking for. C11 has
them, but many C11 impls do not support any of them. C++11 is a
different story.
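
A sketch of the "extend a compiler barrier to a full barrier yourself"
idea from the question above; flush_write_buffer() is a made-up name for
whatever platform-specific operation (a sync instruction, a cache-controller
register write, ...) the programmer knows is needed, not a standard function:

#include <stdatomic.h>

extern void flush_write_buffer(void);   /* hypothetical platform hook */

static inline void full_barrier(void)
{
    atomic_signal_fence(memory_order_seq_cst);   /* stop compiler reordering */
    flush_write_buffer();                        /* promote it to a hardware barrier */
    atomic_signal_fence(memory_order_seq_cst);
}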

supe...@casperkitty.com

unread,
Aug 25, 2016, 5:22:16 PM8/25/16
to
On Thursday, August 25, 2016 at 4:13:08 PM UTC-5, Chris M. Thomasson wrote:
> atomic_signal_fence is the compiler barrier you are looking for. C11 has
> them, but many C11 impls do not support any of them. C++11 is a
> different story.

By my understanding, threading support is optional, but I don't recall there
being a means via which implementations can promise to support some of the
features without having to support all, even though some features (like
atomic_signal_fence) should be supportable on all implementations. If the
Standard were to specify that any data may be accessed as any type provided
that accesses of disjoint types were separated by such a barrier, that would
enormously improve the semantics of the language, allowing many hot loops to
be sped up by 2x, and in extreme cases increasing by orders of magnitude the
speed with which some APIs could be implemented.

Malcolm McLean

unread,
Aug 26, 2016, 4:00:25 AM8/26/16
to
Exactly.
Details would vary according to the thread mechanism (interrupt
service routines, threads, etc). But the basic idea is that a
single flag says who owns the data and whether it is safe to read
or safe to write. Because you can only read from the buffer when it
is in a coherent state.
If this is not allowed, then it is deeply problematic for anyone
using C for low-level, hardware interacting routines.

Ben Bacarisse

unread,
Aug 26, 2016, 6:17:47 AM8/26/16
to
Maybe you have a broader concept of what's a detail than I do, but
unless there is some magic involved, a single atomic flag won't work
even with two threads using the buffer as above. I think supercat is
thinking that there will only ever be one thread putting data in the
buffer. Then again, you may not be suggesting that this scheme
generalises -- I was not sure exactly what your point was.

> Because you can only read from the buffer when it
> is in a coherent state.
> If this is not allowed, then it is deeply problematic for anyone
> using C for low-level, hardware interacting routines.

C11 has atomic read/modify/write operations and various memory access
ordering primitives, but I think this thread grew out of a desire to
manage things with only minimal language support. If you don't have
these, you pretty much have to rely on other implementation-provided
mechanisms and guarantees. That's obviously what's going on in
supercat's example.

--
Ben.

Malcolm McLean

unread,
Aug 26, 2016, 7:08:35 AM8/26/16
to
On Friday, August 26, 2016 at 11:17:47 AM UTC+1, Ben Bacarisse wrote:
> Malcolm McLean <malcolm...@btinternet.com> writes:
>
> Maybe you have a broader concept of what's a detail than I do, but
> unless there is some magic involved, a single atomic flag won't work
> even with two threads using the buffer as above. I think supercat is
> thinking that there will only ever be one thread putting data in the
> buffer. Then again, you may not be suggesting that this scheme
> generalises -- I was not sure exactly what your point was.
>
The problem is contention for a resource, two things trying to access it
at the same time. So the solution is a flag or token. Only if you hold the
token can you use the resource, and you must give it back as soon as you
are done.
But you can potentially have contention for the token - that's one detail.
However you can use an expensive solution for that, because getting the
token is likely only a couple of machine instructions. We can say "freeze
the entire system except me" for a couple of instructions, but not for a
long buffer write.
The other problem is what to do if you are denied the token. Sometimes
you can just busy-idle until it comes available again, but not always
(e.g. if you hold a token from elsewhere in the program, you can potentially
have a deadlock as two threads both busy-idle waiting for the tokens to be
exchanged).
Then if, as is commonly the situation, data is coming from an external source
and you need to put it in the buffer, what happens if another packet arrives
whilst you are busy idling?
It's not easy, but the basic flag or token concept is sound and is used in
practice.
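
A minimal sketch of the token itself, assuming C11 atomics are available
(on a bare-metal target the test-and-set would typically be interrupt
masking or some platform primitive instead); what the caller does when
try_take_token() fails is exactly the hard part described above:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag buffer_token = ATOMIC_FLAG_INIT;

bool try_take_token(void)
{
    /* true means we now own the resource */
    return !atomic_flag_test_and_set_explicit(&buffer_token, memory_order_acquire);
}

void give_back_token(void)
{
    atomic_flag_clear_explicit(&buffer_token, memory_order_release);
}
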
>
> > Because you can only read from the buffer when it
> > is in a coherent state.
> > If this is not allowed, then it is deeply problematic for anyone
> > using C for low-level, hardware interacting routines.
>
> C11 has atomic read/modify/write operations and various memory access
> ordering primitives, but I think this thread grew out of a desire to
> manage things with only minimal language support. If you don't have
> these, you pretty much have to rely on other implementation-provided
> mechanisms and guarantees. That's obviously what's going on in
> supercat's example.
>
People hooked up processors to IO ports that sent data at unpredictable
intervals long before 2011. You can write a token system in most dialects
of C. But supercat might be right: theoretically an aggressive or perverse
optimiser could wreck the system if it moves the buffer access outside
the token guards.

Ben Bacarisse

unread,
Aug 26, 2016, 8:45:35 AM8/26/16
to
Malcolm McLean <malcolm...@btinternet.com> writes:

> On Friday, August 26, 2016 at 11:17:47 AM UTC+1, Ben Bacarisse wrote:
>> Malcolm McLean <malcolm...@btinternet.com> writes:
>>
>> Maybe you have a broader concept of what's a detail than I do, but
>> unless there is some magic involved, a single atomic flag won't work
>> even with two threads using the buffer as above. I think supercat is
>> thinking that there will only ever be one thread putting data in the
>> buffer. Then again, you may not be suggesting that this scheme
>> generalises -- I was not sure exactly what your point was.
>>
> The problem is contention for a resource, two things trying to access it
> at the same time. So the solution is a flag or token. Only if you hold the
> token can you use the resource, and you must give it back as soon as you
> are done.

Yes, but the issue is how to take and release it. It's a well-understood
problem with well-known solutions; I just got the feeling you
were saying you could just use a shared "atomic" volatile object as the
flag. (I'm sure you understand the problem, it's the impression you
might give others that made me post.)

> But you can potentially have contention for the token - that's one
> detail.

I thought you might be including more than I would in the word "detail"!
It's rather more than a detail in my book, but there's not much point in
arguing over what is or is not a detail.

<snip>
>> C11 has atomic read/modify/write operations and various memory access
>> ordering primitives, but I think this thread grew out of a desire to
>> mange things with only minimal language support. If you don't have
>> these, you pretty much have to rely on other implementation-provided
>> mechanisms and guarantees. That's obviously what's going on in
>> supercat's example.
>>
> People hooked up processors to IO ports that sent data at unpredictable
> intervals long before 2011.

Yes. These rely on implementation-defined guarantees.

<snip>
--
Ben.

Malcolm McLean

unread,
Aug 26, 2016, 9:09:43 AM8/26/16
to
On Friday, August 26, 2016 at 1:45:35 PM UTC+1, Ben Bacarisse wrote:
> Malcolm McLean <malcolm...@btinternet.com> writes:
>

> > The problem is contention for a resource, two things trying to access it
> > at the same time. So the solution is a flag or token. Only if you hold the
> > token can you use the resource, and you must give it back as soon as you
> > are done.
>
> Yes, but the issue is how to take and release it. It's a well
> understood problem with well known solutions, I just got the feeling you
> were saying you could just use a shared "atomic" volatile object as the
> flag. (I'm sure you understand the problem, it's the impression you
> might give others that made me post.)
>
It's not something I do in practice very often.
I was under the impression that if we declare a basic type volatile and,
say, an interrupt service routine also accesses it, the compiler is
guaranteed to emit

disable interrupts
read variable
enable interrupts

if that's the only way of ensuring that you read a whole object, not
one that is partly written.

But I might be wrong on that. As I said, it's not something I actually
do very often.



supe...@casperkitty.com

unread,
Aug 26, 2016, 11:59:20 AM8/26/16
to
On Friday, August 26, 2016 at 6:08:35 AM UTC-5, Malcolm McLean wrote:
> On Friday, August 26, 2016 at 11:17:47 AM UTC+1, Ben Bacarisse wrote:

> > Maybe you have a broader concept of what's a detail than I do, but
> > unless there is some magic involved, a single atomic flag won't work
> > even with two threads using the buffer as above. I think supercat is
> > thinking that there will only ever be one thread putting data in the
> > buffer. Then again, you may not be suggesting that this scheme
> > generalises -- I was not sure exactly what your point was.
> >
> The problem is contention for a resource, two things trying to access it
> at the same time. So the solution is a flag or token. Only if you hold the
> token can you use the resource, and you must give it back as soon as you
> are done.

On a single-core system, if the instructions that write the buffer can be
guaranteed to execute before the later instructions in the same thread that
set the ready flag, and if the instructions that read the buffer can be
guaranteed not to execute until after a test of the ready flag has confirmed
that the buffer is ready, there is no difficulty.

The difficulty is that in C, the only way to guarantee that is to make all
accesses to the buffer "volatile", which then means that it cannot be
written efficiently with things like memcpy. The most efficient approach
using purely features described in the C Standard would likely be to use
memcpy to copy the source data into a uint64_t[] (or maybe uint32_t), copy
that one value at a time into a uint64_t volatile[], and then write a
volatile flag to indicate that the buffer is ready (if memcpy can copy 32
or more bits at a time, and the user-code loop can do likewise, the cost
of copying the data twice that way may be less than the cost of reading it
from the source a byte at a time using `char*`).
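
Roughly, that double-copy approach might look like the following sketch
(64-byte buffer as in the earlier example; the names are invented):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BUF_WORDS (64 / sizeof(uint64_t))

static uint64_t staging[BUF_WORDS];              /* ordinary memory: memcpy may use wide copies */
static volatile uint64_t out_buffer[BUF_WORDS];  /* what the other context actually reads */
static volatile unsigned char buffer_ready;

void post_block(const unsigned char *src)
{
    memcpy(staging, src, sizeof staging);        /* fast, optimizable copy */
    for (size_t i = 0; i < BUF_WORDS; i++)
        out_buffer[i] = staging[i];              /* volatile stores, performed in order */
    buffer_ready = 1;                            /* volatile flag write cannot be reordered before them */
}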

> It's not easy, but the basic flag or token concept is sound and is used in
> practice.

It should be sound on many hardware platforms, and there's no good reason
it should require the use of compiler-specific extensions on such
platforms, but it does. I'm curious what optimizations would be enabled
in real-world use cases that would not be enabled by suitable use of
"restrict" [if one assumes that operations on pointers in "restrict"-
qualified variables are unsequenced relative to operations on anything
*whose address cannot have been derived from such pointers*]. Language
rules that mandate the use of one or two extra copy steps that a compiler
can't optimize out would have a rather high cost, so I would think the
extra optimizations that are enabled by allowing compilers to resequence
non-qualified operations across volatile-qualified ones would have to be
pretty impressive to justify that cost.

> People hooked up processors to IO ports that sent data at unpredictable
> intervals long before 2011. You can write a token system in most dialects
> of C. But supercat might be right, theoretically an aggressive or perverse
> optimiser could wreck the system if it moves the buffer access outside
> the token guards.

My complaint is that it is fashionable for compiler writers to base their
optimizations upon what the Standard allows, without regard for whether
that makes it impossible to achieve good performance without relying
upon compiler-specific extensions. There is also no general way to know
what will work on any given compiler without reading a lot of its
documentation, which makes it impossible to write efficient code that can
be safely ported to compilers the author doesn't know about.

A much more helpful approach would be to say that unless code specifies
otherwise, volatile operations act as compiler memory barriers (though not
necessarily hardware barriers) with regard to anything that might be hit
by "*someVolatileCharPointer += 1;". If achieving a desired level of
performance would require using a directive to waive that requirement,
failure to use the directive may yield sub-optimal performance, but that
is a far less severe problem than having code arbitrarily fail.

Ben Bacarisse

unread,
Aug 26, 2016, 12:11:45 PM8/26/16
to
That's an unlikely translation (I've never seen it), but an
implementation can do pretty much what it likes with volatile accesses
and so may well choose to do that.

My point was about something else altogether: using a flag to
synchronise threads, but that seems to have been buried now.

> But I might be wrong on that. As I said, it's not something I actually
> do very often.

--
Ben.

supe...@casperkitty.com

unread,
Aug 26, 2016, 12:14:57 PM8/26/16
to
On Friday, August 26, 2016 at 7:45:35 AM UTC-5, Ben Bacarisse wrote:
> Yes. These rely on implementation-defined guarantees.

Unfortunately, from what I've seen, compiler documentation seldom bothers
to guarantee particular behaviors in cases where the compiler writers can't
imagine any sane compiler for the target platform doing anything else, and
even more rarely provides any easy way for a programmer to identify
whether a compiler will behave in a fashion that is, or at least used to be,
commonplace.

As such, they are "guaranteed" in the same sense that if someone buys a
can of soda it is "guaranteed" to have an opening mechanism on the top
so it can be drunk without a can opener. If a customer buys a case of
soda cans that lack built-in openers, should the company respond by saying
that it never promised to include opening devices?

Malcolm McLean

unread,
Aug 26, 2016, 12:31:35 PM8/26/16
to
So you are saying that if we have this code

C

volatile int counter = 0;
int prevcounter;

if (counter > prevcounter)
{
    prevcounter = counter;
    do_something();
}

ISR (assembly)

load byte (counter low) into accumulator
increment accumulator
write byte (counter low) from accumulator
if accumulator != 0
    return from interrupt
load byte (counter high) into accumulator
increment accumulator
write byte (counter high) from accumulator
return from interrupt

then every so often we'll hit the point where
the counter is in the state low byte = zero, high byte =
previous high, and fail catastrophically?

Jerry Stuckle

unread,
Aug 26, 2016, 12:48:22 PM8/26/16
to
It is possible, if it takes more than one assembler instruction to
access the two bytes. A small window, to be sure. But it does exist.

And user-space programs cannot disable interrupts.


--
==================
Remove the "x" from my email address
Jerry Stuckle
jstu...@attglobal.net
==================

supe...@casperkitty.com

unread,
Aug 26, 2016, 12:50:38 PM8/26/16
to
On Friday, August 26, 2016 at 11:31:35 AM UTC-5, Malcolm McLean wrote:
> So you are saying that if we have this code
>
> C
>
> volatile int counter = 0;
> int prevcounter;
>
> if (counter > prevcounter)
> {
>     prevcounter = counter;
>     do_something();
> }
>
> ISR (assembly)
>
> load byte (counter low) into accumulator
> increment accumulator
> write byte (counter low) from accumulator
> if accumulator != 0
>     return from interrupt
> load byte (counter high) into accumulator
> increment accumulator
> write byte (counter high) from accumulator
> return from interrupt
>
> then every so often we'll hit the point where
> the counter is in the state low byte = zero, high byte =
> previous high, and fail catastrophically?

On 8-bit platforms such failures would be commonplace. If one wants a
reliable 16-bit counter on an 8-bit platform, one would need to use some
mechanism to ensure validity, either by disabling interrupts when writing
it, or by doing something like:

int sampled_counter;
do
{
    sampled_counter = counter;   /* re-read until two consecutive samples agree */
} while (sampled_counter != counter);

The question is what should be guaranteed about the code:

unsigned char foo;
volatile unsigned char bar;

MAINLINE:
    foo = 0;
    for (int i = 0; i <= 10; i++)
        foo += i;
    bar = 1;
    while (bar)
        ;
    foo++;

INTERRUPT:
    if (bar)
    {
        SOME_IO_PORT = foo;
        foo++;
        bar = 0;
    }

If a compiler guarantees that even without a volatile qualifier on "foo",
the volatile qualifier on "bar" will force it to store the value "55"
into "foo" before it writes "bar", and leave that value there until "bar"
reads zero, then it will only be necessary to write "foo" twice in the
main line and read it once (write 55, read 56, write 57). If the only
way to guarantee correct behavior would be to add a "volatile" qualifier
to "foo" as well as "bar", that would force the compiler to generate code
for the mainline that writes foo 12 times and reads it 11 times--not
nearly as efficient.

Chris M. Thomasson

unread,
Aug 26, 2016, 3:24:22 PM8/26/16
to
FWIW, clever use of atomic ops via lock/wait-free algorithms can
guarantee all of this works with interrupt handlers _and_ multiple threads.
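
For what it's worth, where C11 atomics are available, a
single-producer/single-consumer version of the earlier buffer handoff can
be sketched without volatile at all (names invented, not tuned):

#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

static unsigned char shared_buf[64];
static atomic_bool buf_full = ATOMIC_VAR_INIT(false);

void producer(const unsigned char *src)
{
    while (atomic_load_explicit(&buf_full, memory_order_acquire))
        ;                                            /* wait until the consumer has taken the last block */
    memcpy(shared_buf, src, sizeof shared_buf);      /* plain copy */
    atomic_store_explicit(&buf_full, true, memory_order_release);   /* publish: the copy cannot sink below this */
}

void consumer(unsigned char *dst)
{
    while (!atomic_load_explicit(&buf_full, memory_order_acquire))
        ;                                            /* wait for a block */
    memcpy(dst, shared_buf, sizeof shared_buf);
    atomic_store_explicit(&buf_full, false, memory_order_release);  /* hand the buffer back */
}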