
Operators for min/max, or min/max-assignment operators?


Rick C. Hodgin

Jun 18, 2018, 2:54:55 PM
Is there a native operator for min and max in C or C++?

// Traditional way
#define min(a, b) ((a <= b) ? a : b)
#define max(a, b) ((a >= b) ? a : b)

int a, b, c, d;

a = 5;
b = 8;

// Assign min and max:
c = min(a, b); d = max(a, b);

Is there an operator for this? Like /\ for max, \/ for min?

// Assign min and max using operators:
c = a \/ b; d = a /\ b;

Or min/max-assignment operators?

// Min of b and a is assigned to b
b \/= a; // Equivalent of b = min(b, a)

// Max of b and a is assigned to b
b /\= a; // Equivalent of b = max(b, a)

--
Rick C. Hodgin

Bart

Jun 18, 2018, 3:03:24 PM
On 18/06/2018 19:54, Rick C. Hodgin wrote:
> Is there a native operator for min and max in C or C++?

Not in C. (In C++ the answer to any such question is apparently always
Yes. If it doesn't have it already, you can implement it.)

>     // Traditional way
>     #define min(a, b) ((a <= b) ? a : b)
>     #define max(a, b) ((a >= b) ? a : b)
>
>     int a, b, c, d;
>
>     a = 5;
>     b = 8;
>
>     // Assign min and max:
>     c = min(a, b);    d = max(a, b);
>
> Is there an operator for this?  Like /\ for max, \/ for min?

Why not just call them max and min? Then everyone will immediately
understand what they do.

Anyway /\ and \/ won't work for obvious reasons.

--
bart

David Brown

Jun 18, 2018, 3:33:52 PM
On 18/06/18 20:54, Rick C. Hodgin wrote:
> Is there a native operator for min and max in C or C++?
>
>     // Traditional way
>     #define min(a, b) ((a <= b) ? a : b)
>     #define max(a, b) ((a >= b) ? a : b)
>

Traditional C would be:

#define MIN(a, b) (((a) <= (b)) ? (a) : (b))
#define MAX(a, b) (((a) >= (b)) ? (a) : (b))


In C++, you'd be better with a template. That would let you avoid the
"min(a++, b++)" problem.

If you are a gcc extension fan, you might also like their suggestion
using "typeof":

<https://gcc.gnu.org/onlinedocs/gcc/Typeof.html>
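
For example, a minimal template version might look something like this
(just a sketch; "max_of" is a made-up name to dodge any min/max macros,
and std::max is the real thing):

    template <typename T>
    const T& max_of(const T& a, const T& b)
    {
        // Each argument is named exactly once, so max_of(a++, b++)
        // behaves like an ordinary function call instead of expanding
        // (and evaluating) the arguments twice.
        return (a >= b) ? a : b;
    }

    // usage:
    //     int a = 5, b = 8;
    //     int d = max_of(a, b);    // 8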

>     int a, b, c, d;
>
>     a = 5;
>     b = 8;
>
>     // Assign min and max:
>     c = min(a, b);    d = max(a, b);
>
> Is there an operator for this?  Like /\ for max, \/ for min?
>
>     // Assign min and max using operators:
>     c = a \/ b;       d = a /\ b;
>
> Or min/max-assignment operators?
>
>     // Min of b and a is assigned to b
>     b \/= a;    // Equivalent of b = min(b, a)
>
>     // Max of b and a is assigned to b
>     b /\= a;    // Equivalent of b = max(b, a)
>

gcc used to have "a <? b" as min(a, b) and "a >? b" as max(a, b), as an
extension. They dropped it a good while ago. I don't know exactly why,
but I expect it was very rarely used.

Daniel

Jun 18, 2018, 3:37:25 PM
On Monday, June 18, 2018 at 2:54:55 PM UTC-4, Rick C. Hodgin wrote:
> Is there a native operator for min and max in C or C++?
>
> // Traditional way
> #define min(a, b) ((a <= b) ? a : b)
> #define max(a, b) ((a >= b) ? a : b)
>
> int a, b, c, d;
>
> a = 5;
> b = 8;
>
> // Assign min and max:
> c = min(a, b); d = max(a, b);
>
> Is there an operator for this? Like /\ for max, \/ for min?
>

No, but there are std::min and std::max, which are typically used as

c = (std::min)(a,b); d = (std::max)(a,b);

to avoid conflicts with min and max #defines.

Daniel

Rick C. Hodgin

Jun 18, 2018, 3:53:08 PM
On 6/18/2018 3:03 PM, Bart wrote:
> On 18/06/2018 19:54, Rick C. Hodgin wrote:
>>      // Assign min and max:
>>      c = min(a, b);    d = max(a, b);
>>
>> Is there an operator for this?  Like /\ for max, \/ for min?
> Anyway /\ and \/ won't work for obvious reasons.


Why wouldn't those character combinations work?

--
Rick C. Hodgin

Öö Tiib

Jun 18, 2018, 4:10:43 PM
It might also be worth noting that several arguments can be supplied
to std::min and std::max since C++14:

auto m = std::max({a, b, c, d, e, f});

For sequences (or whole containers) there are std::min_element and
std::max_element.
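
For instance (a quick sketch):

    #include <algorithm>
    #include <vector>
    #include <iostream>

    int main()
    {
        int a = 1, b = 7, c = 3, d = 5, e = 2, f = 4;
        auto m = std::max({a, b, c, d, e, f});           // initializer-list overload -> 7

        std::vector<int> v{3, 1, 4, 1, 5, 9, 2, 6};
        auto it = std::max_element(v.begin(), v.end());  // iterator to the 9

        std::cout << m << ' ' << *it << '\n';
    }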

If someone really wants to go nuts with custom operators then Swift
is perhaps a good language for them. To me such experiments look too
cryptic, and trying to remember the keyboard shortcut for each dingbat
is also annoying. I know Rick won't use Swift since he hates Apple
for unknown reasons.

Scott Lurndal

Jun 18, 2018, 4:13:00 PM
David Brown <david...@hesbynett.no> writes:
>On 18/06/18 20:54, Rick C. Hodgin wrote:
>> Is there a native operator for min and max in C or C++?
>>
>>     // Traditional way
>>     #define min(a, b) ((a <= b) ? a : b)
>>     #define max(a, b) ((a >= b) ? a : b)
>>
>
>Traditional C would be:
>
>#define MIN(a, b) (((a) <= (b)) ? (a) : (b))
>#define MAX(a, b) (((a) >= (b)) ? (a) : (b))
>
>
>In C++, you'd be better with a template. That would let you avoid the
>"min(a++, b++)" problem.

like std::min and std::max?

Scott Lurndal

Jun 18, 2018, 4:13:45 PM
What is the meaning and purpose of the '\' character in C?

Bart

Jun 18, 2018, 4:19:55 PM
Because \ is usually involved with line continuation. There would be
ambiguities.

--
bart

red floyd

Jun 18, 2018, 6:44:54 PM
Not to mention that the "correct" way to do the "traditional" way is
#define min(a,b) (((a) <= (b)) ? (a) : (b))
#define max(a,b) (((a) >= (b)) ? (a) : (b))

Christian Gollwitzer

Jun 19, 2018, 12:41:58 AM
On 18/06/18 at 22:19, Bart wrote:
I don't see that. Even now, the \ character does line continuation only
if it is immediately followed by a newline character. In case somebody
really wants to break the expression over multiple lines, a space
can be inserted:

a /\<newline> -> line continuation

a /\<space><newline> -> /\ operator followed by newline

At least the pure \ character works flawlessly as an operator in other
languages like Matlab, where it means "matrix left divide". I've
implemented a similar language; here is the PEG grammar:

https://github.com/auriocus/VecTcl/blob/master/generic/vexpr.peg#L66

As you can see in line 66, \ followed by newline is parsed as
"whitespace". Line 58 defines the backslash as a multiplicative operator.

Christian

David Brown

Jun 19, 2018, 1:27:47 AM
It is usually used as an escape character, for writing characters like \n or
\t - I'd say line continuation was a good deal less common. But in
either case you'd risk ambiguities or complications. Using \ as part of
an operator would mean changing details of the C parsing. I'd expect it
to be doable, however.

David Brown

Jun 19, 2018, 1:40:56 AM
Yes, just like that :-)

Juha Nieminen

Jun 19, 2018, 3:15:30 AM
In comp.lang.c++ red floyd <dont....@its.invalid> wrote:
> Not to mention that the "correct" way to do the "traditional" way is
> #define min(a,b) (((a) <= (b)) ? (a) : (b))
> #define max(a,b) (((a) >= (b)) ? (a) : (b))

Which is very bad because a or b will be evaluated twice, which can be
bad for many reasons. For starters, if they are very heavy to evaluate
(eg. they are function calls that perform some heavy calculations),
it will take twice as long as necessary. More damningly, if either a
or b have any side-effects, those side-effects will be applied twice,
which may break things. (Side effects don't always necessarily
affect the variable itself, like in min(a++, b++), but can affect
other things somewhere else.)
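
A contrived sketch of that hazard (the names are made up purely for
illustration):

    #define MIN(a, b) (((a) <= (b)) ? (a) : (b))

    int calls = 0;
    int expensive() { ++calls; return 42; }   // stands in for a heavy computation

    void demo()
    {
        int m = MIN(expensive(), 100);
        // Expands to ((expensive()) <= (100)) ? (expensive()) : (100),
        // so expensive() runs twice: once for the test, once for the result.
        (void)m;
    }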

Louis Krupp

Jun 19, 2018, 4:01:37 AM
In my experience, the backslash has *always* been an escape
character, and I would expect using it for anything else to be a
well-intentioned but unpopular and ultimately disastrous effort.

Off the top of my head -- and based on absolutely no relevant
experience -- I would expect the addition of a new digraph or trigraph
to be almost as unpopular but with a better chance at success.

Louis

David Brown

Jun 19, 2018, 4:39:54 AM
Of course. That is why the traditional way is to call these MIN and
MAX, not min and max, as a warning to users that they are macros and you
should not "call" them with arguments with side-effects.

Chris Vine

Jun 19, 2018, 6:28:27 AM
All macro systems for languages with side effects and eager evaluation
run into the problem of side-effectful arguments, and the normal way
around that when writing a macro for such languages is to assign the
value of each argument to a local variable at the beginning of the macro
definition. The problem with that in the case of C and C++ is that
pre-processor macros are unhygienic and inject local variable names
into the call site[1].

The way around that in C and C++ is to put every macro definition with
local variables within its own statement block. But then you have the
problem that in C++ statement blocks are not expressions (they cannot
return values). So you end up having to pass in an additional argument
as an out parameter.

So you could have something like this as a mostly hygienic version of
MAX:

#define MAX(a, b, res) {auto aa = a; auto bb = b; res = (((aa) >= (bb)) ? (aa) : (bb));}

Yuck. As you have said, for something simple like 'max' and 'min',
template functions are much to be preferred in C++. They also have the
advantage that they can be made variadic.
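
For the variadic point, a rough sketch of what such a template might
look like ("vmax" is just an invented name):

    #include <algorithm>

    template <typename T>
    T vmax(T a) { return a; }

    // Folds std::max over any number of arguments of the same type.
    template <typename T, typename... Ts>
    T vmax(T first, Ts... rest) { return std::max(first, vmax(rest...)); }

    // e.g.  int m = vmax(3, 9, 2, 7);   // 9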

Chris

[1] Less problematically, pre-processor macros also resolve external
identifier names used in the macro (in particular, function and operator
names) from the macro's call site and not from its definition site.
Hygienic macro systems do not do this.

Rick C. Hodgin

Jun 19, 2018, 8:32:07 AM
On 6/19/2018 8:21 AM, Stefan Ram wrote:
> Louis Krupp <lkr...@nospam.pssw.com.invalid> writes:
>> Off the top of my head -- and based on absolutely no relevant
>> experience -- I would expect the addition of a new digraph or trigraph
>> to be almost as unpopular but with a better chance at success.
>
> One could use "⊤" for "max" and "⊥" for "min" akin to
> their usage in lattice theory.
>
> 2 ⊥ 8 == 2
> 2 ⊤ 8 == 8


That's not a standard character. How would you propose it be input
into the text editor by people seeking to use it?

--
Rick C. Hodgin

David Brown

Jun 19, 2018, 8:43:33 AM
The way around this, whenever possible, is to use inline functions
rather than function-like macros. In C++, templates let you generalise
such functions - in C, C11 generics give you a more limited and
cumbersome solution, but enough to write safe min or max macros.

And some compilers (gcc and clang, maybe others) recognised this
limitation long ago and introduced an extension to give you statement
blocks with expression values. Whether or not you want to use such
extensions is another matter.

>
> So you could have something like this as a mostly hygienic version of
> MAX:
>
> #define MAX(a, b, res) {auto aa = a; auto bb = b; res = (((aa) >= (bb)) ? (aa) : (bb));}
>
> Yuck. As you have said, for something simple like 'max' and 'min',
> template functions are much to be preferred in C++. They also have the
> advantage that they can be made variadic.
>

With enough effort, you can get a variadic hygienic polymorphic min and
max macro in C11. But it is not nearly as neat as with C++ templates,
or, even better, C++ concepts:

auto max(auto a, auto b) {
return (a >= b) ? a : b;
}

It's hard to get neater than that!

(With concepts, you could add additional constraints that would improve
compile-time errors.)
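
For instance, with a constraint spelled using the standard concepts
library (a sketch; std::totally_ordered is one plausible choice):

    #include <concepts>

    // Rejecting unordered types at the call site gives a short, targeted
    // error instead of a failure deep inside the function body.
    auto max_of(std::totally_ordered auto a, std::totally_ordered auto b)
    {
        return (a >= b) ? a : b;
    }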

David Brown

Jun 19, 2018, 9:14:25 AM
They are standard unicode characters, but I don't think they are
standard for maximum or minimum. In lattice theory, they are symbols
for "top" and "bottom" - the maximum of all elements, and the minimum of
all elements. You would normally use ∨ or ∧ for maximum and minimum -
or \/ and /\, or v and ^, if you are restricted to ASCII. Another
alternative would be ⊔ and ⊓.

That reminds me - in maths, "a ∨ b" is the /maximum/ and "a ∧ b" is the
minimum, while you wanted to use the opposite symbols. I believe you
are familiar with boolean logic - "a ∨ b" is a way to write "a inclusive
or b", which is the maximum of two boolean values. If you like
axiomatic set theory and the Peano integers, then "maximum" is the same
as set union ∪, and "minimum" is set intersection ∩.

All these symbols have the disadvantage that they are difficult to type
with normal keyboard layouts - you need a character map applet, ugly
unicode-by-number input, or you would have to tweak your keyboard layout
files or compose key files (easy enough on *nix if you know what you are
doing, a good deal more difficult on Windows AFAIK). On my keyboard, I
can easily type ↓ and ↑, which would be an option - but that will not
apply to everyone.

Rick C. Hodgin

Jun 19, 2018, 9:15:10 AM
I don't think so. While it's theoretically possible to break existing
code, it would have to be a very specific form, and it would be easy
to generate a diagnostic when /\ appears as the last thing on a line,
warning that there may be ambiguity.

I look at additions like this to the language as a new thing, such that
it would introduce new processing logic to the compiler, and therefore
comments like "...won't work for obvious reasons" are rendered of none
effect (due to the enhancements to the compiler's parsing engine).

--
Rick C. Hodgin

Rick C. Hodgin

Jun 19, 2018, 9:27:12 AM
On 6/19/2018 9:14 AM, David Brown wrote:
> On 19/06/18 14:31, Rick C. Hodgin wrote:
> That reminds me - in maths, "a ∨ b" is the /maximum/ and "a ∧ b" is the
> minimum, while you wanted to use the opposite symbols. I believe you
> are familiar with boolean logic - "a ∨ b" is a way to write "a inclusive
> or b", which is the maximum of two boolean values. If you like
> axiomatic set theory and the Peano integers, then "maximum" is the same
> as set union ∪, and "minimum" is set intersection ∩.

Never heard of it. I think "a ∨ b" is confusing as a maximum, and
likewise "a ∧ b" is confusing as a minimum. I like the idea of the
arrow pointing up or down for max or min.

> All these symbols have the disadvantage that they are difficult to type
> with normal keyboard layouts - you need a character map applet, ugly
> unicode-by-number input, or you would have to tweak your keyboard layout
> files or compose key files (easy enough on *nix if you know what you are
> doing, a good deal more difficult on Windows AFAIK). On my keyboard, I
> can easily type ↓ and ↑, which would be an option - but that will not
> apply to everyone.

CAlive will use /\ for max, and \/ for min, and /\= for max assignment,
and \/= for min assignment. Other languages are free to use whatever
other syntaxes they choose.

You can always add this to CAlive to make it work otherwise:

#define ∨ /\
#define ∧ \/

Or:

#define ↑ /\
#define ↓ \/

And then you're good to go.

--
Rick C. Hodgin

Ben Bacarisse

Jun 19, 2018, 9:32:27 AM
Wildly off-topic now...

David Brown <david...@hesbynett.no> writes:
<snip>
> That reminds me - in maths, "a ∨ b" is the /maximum/ and "a ∧ b" is the
> minimum, while you wanted to use the opposite symbols. I believe you
> are familiar with boolean logic - "a ∨ b" is a way to write "a inclusive
> or b", which is the maximum of two boolean values. If you like
> axiomatic set theory and the Peano integers, then "maximum" is the same
> as set union ∪, and "minimum" is set intersection ∩.

Peano's axioms don't define numbers as sets. You are thinking of von
Neumann's construction of the naturals as sets. They can be used to
build a model of Peano arithmetic that has the property you state. But
other constructions are possible (such a Zermelo's or Frege's) which
give rise to models that don't have that property.

<snip>
--
Ben.

Chris Vine

Jun 19, 2018, 10:53:01 AM
On Tue, 19 Jun 2018 14:43:22 +0200
David Brown <david...@hesbynett.no> wrote:
[snip]
> With enough effort, you can get a variadic hygienic polymorphic min and
> max macro in C11.

Do you know enough about C11 macros to show how that is done? (I would
be interested for that matter in how you get a non-variadic hygienic
macro in C, if that is by using a different technique from the one I
mentioned. I am still stuck on C89/90 as far as C pre-processors are
concerned.)

David Brown

Jun 19, 2018, 11:01:00 AM
Yes, it was the von Neumann construction I was thinking of. It's quite
a number of years since I studied this stuff!

David Brown

Jun 19, 2018, 11:08:13 AM
On 19/06/18 15:27, Rick C. Hodgin wrote:
> On 6/19/2018 9:14 AM, David Brown wrote:
>> On 19/06/18 14:31, Rick C. Hodgin wrote:
>> That reminds me - in maths, "a ∨ b" is the /maximum/ and "a ∧ b" is the
>> minimum, while you wanted to use the opposite symbols. I believe you
>> are familiar with boolean logic - "a ∨ b" is a way to write "a inclusive
>> or b", which is the maximum of two boolean values. If you like
>> axiomatic set theory and the Peano integers, then "maximum" is the same
>> as set union ∪, and "minimum" is set intersection ∩.
>
> Never heard of it. I think "a ∨ b" is confusing as a maximum, and
> likewise "a ∧ b" is confusing as a minimum. I like the idea of the
> arrow pointing up or down for max or min.
>

Whether or not /you/ find it confusing, this is how it is used in
mathematics. You can decide that your language will never be of
interest to mathematicians or people who know about boolean logic, and
pick symbols that are the direct opposite of the standard. In the same
way, you can decide that your booleans will use "oui" for false and
"non" for true on the basis that you don't know French and don't care
about confusing Frenchmen.

Maximum and minimum are not so common operations that they need an
operator. Few programming languages have them as operators - many have
them as built-in functions named "max" and "min". That would seem to me
to be the best approach, avoiding confusing anyone.

It's your language. Listen to the advice you are given, or make the
decisions on your own - it's up to you. But remember that every step
you take that is against the flow limits the likelihood of anyone other
than you ever using the language.

Rick C. Hodgin

Jun 19, 2018, 11:32:53 AM
On 6/19/2018 11:08 AM, David Brown wrote:
> On 19/06/18 15:27, Rick C. Hodgin wrote:
>> On 6/19/2018 9:14 AM, David Brown wrote:
>>> On 19/06/18 14:31, Rick C. Hodgin wrote:
>>> That reminds me - in maths, "a ∨ b" is the /maximum/ and "a ∧ b" is the
>>> minimum, while you wanted to use the opposite symbols. I believe you
>>> are familiar with boolean logic - "a ∨ b" is a way to write "a inclusive
>>> or b", which is the maximum of two boolean values. If you like
>>> axiomatic set theory and the Peano integers, then "maximum" is the same
>>> as set union ∪, and "minimum" is set intersection ∩.
>>
>> Never heard of it. I think "a ∨ b" is confusing as a maximum, and
>> likewise "a ∧ b" is confusing as a minimum. I like the idea of the
>> arrow pointing up or down for max or min.
>>
>
> Whether or not /you/ find it confusing, this is how it is used in
> mathematics. You can decide that your language will never be of
> interest to mathematicians or people who know about boolean logic, and
> pick symbols that are the direct opposite of the standard.

I think very few people will use CAlive, David. I think those who do
will be of a particular mindset and it will work for them. It's been
demonstrated below how to fix it up for those who have need.

>> CAlive will use /\ for max, and \/ for min, and /\= for max assignment,
>> and \/= for min assignment. Other languages are free to use whatever
>> other syntaxes they choose.
>>
>> You can always add this to CAlive to make it work otherwise:
>>
>> #define ∨ /\
>> #define ∧ \/
>>
>> Or:
>>
>> #define ↑ /\
>> #define ↓ \/
>>
>> And then you're good to go.

I would like to see these operators and operator assignments added to
the C programming language, as well as C++. If they choose to use the
opposite direction then CAlive will support it.

--
Rick C. Hodgin

Rick C. Hodgin

Jun 19, 2018, 11:47:35 AM
On 6/19/2018 11:08 AM, David Brown wrote:
> ...remember that every step
> you take that is against the flow limits the likelihood of anyone other
> than you ever using the language.

These are new features, David. They don't have to be used.

-----
I wouldn't waste my time worrying about anything related to me or CAlive.
The language and my offering of its abilities will be given to people.
It will either be received or not, but my effort is in the giving, not in
its success. I seek to give people the best tool with the most features
available, and I am going the extra mile to make it be compatible with C,
and some of C++. This is my offering to the Lord first, and people second.

If I get it completed before I leave this world, it will be given to people
for free, unencumbered, source code and all, in a variant of the public
domain license (where each person who receives it is asked to use it in a
proper manner, giving their changes and enhancements to people as they also
received it, but I don't force it by man's legal authority, but only leave
it between them and God to voluntarily follow that guidance).

But regardless, each person can choose what to do with these skills the
Lord first gave me that I have, in return, given back to Him first, and
to each of you second.

-----
When it's released, use it, don't use it. It won't bother me. I am
doing the best I can for my Lord. I am not receiving help in the form
of productive work on the project. Nobody's written a line of source
code on it. The entire creation is my authoring, my doing. And I am
proceeding despite receiving only constant criticism on the choices I
make.

At some point you'll have to realize I'm not doing this for you, David,
or anyone else who is critical of my work. I'm doing it for the Lord,
and for those who will receive it.

It is the same thing Jesus offers people with salvation. He will not
save those who reject Him, but for all who come to Him, He gives them
forgiveness and eternal life.

CAlive is my best offering along those lines. I give it freely to
people, and they will choose to receive it or reject it, but my offer-
ing will tie back explicitly to my giving the Lord the best of what He
first gave me.

As I say, I wouldn't waste my time worrying about anything related to me
or CAlive, David. Life's too short. Let it go.

--
Rick C. Hodgin

Keith Thompson

Jun 19, 2018, 12:00:38 PM
David Brown <david...@hesbynett.no> writes:
[...]
> It is usually as an escape character, for writing characters like \n or
> \t - I'd say line continuation was a good deal less common. But in
> either case you'd risk ambiguities or complications. Using \ as part of
> an operator would mean changing details of the C parsing. I'd expect it
> to be doable, however.

It would require changing translation phase 2, which is where line
splicing occurs.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Keith Thompson

Jun 19, 2018, 12:02:04 PM
Which is why it's traditional to write macro names in all-caps (MIN()
and MAX()) so the reader is reminded that they're macros and can have
odd interactions with side effects.

Keith Thompson

Jun 19, 2018, 12:18:24 PM
David Brown <david...@hesbynett.no> writes:
[...]
> That reminds me - in maths, "a ∨ b" is the /maximum/ and "a ∧ b" is the
> minimum, while you wanted to use the opposite symbols. I believe you
> are familiar with boolean logic - "a ∨ b" is a way to write "a inclusive
> or b", which is the maximum of two boolean values. If you like
> axiomatic set theory and the Peano integers, then "maximum" is the same
> as set union ∪, and "minimum" is set intersection ∩.

I would oppose defining either "a ∨ b" or "a \/ b" as max(a, b)
because most readers (myself included) are going to assume the
symbol means "min".

I would oppose defining either "a ∨ b" or "a \/ b" as min(a, b)
because it's apparently the opposite of mathematical usage.

I'm not at all convinced that min and max operators, with whatever
syntax, are worth adding to the language at all. If they were to be
added, agreeing on a syntax would be difficult.

james...@alumni.caltech.edu

Jun 19, 2018, 12:32:27 PM
On Monday, June 18, 2018 at 2:54:55 PM UTC-4, Rick C. Hodgin wrote:
> Is there a native operator for min and max in C or C++?
>
> // Traditional way
> #define min(a, b) ((a <= b) ? a : b)
> #define max(a, b) ((a >= b) ? a : b)
>
> int a, b, c, d;
>
> a = 5;
> b = 8;
>
> // Assign min and max:
> c = min(a, b); d = max(a, b);
>
> Is there an operator for this? Like /\ for max, \/ for min?
>
> // Assign min and max using operators:
> c = a \/ b; d = a /\ b;
>
> Or min/max-assignment operators?
>
> // Min of b and a is assigned to b
> b \/= a; // Equivalent of b = min(b, a)
>
> // Max of b and a is assigned to b
> b /\= a; // Equivalent of b = max(b, a)

In IDL, "a < b" gives the minimum of a and b, and "a > b" gives the
maximum. This can be particularly useful when either operand is an
array. "a lt b" and "a gt b" are how you do the equivalent of C's
"a < b" and "a > b". Most IDL newbies get burned by this at least once
before learning that distinction.

Rick C. Hodgin

Jun 19, 2018, 12:48:17 PM
I've tried to come up with a single-character operator for this. I
like Stefan's idea of the T-like character, and the inverted T-like
character, but they can't easily be typed. I suppose using T and !T
might work.

If anyone has better ideas, I'm all for them. So far, I like the
/\ max and \/ min combos. They can be rendered graphically in the
editor into some other character.

--
Rick C. Hodgin

Mr Flibble

Jun 19, 2018, 12:53:00 PM
You like them? So what? They would never make it into C++ and we don't
care about your kooky god bothering custom language.

--
"Suppose it’s all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I’d say, bone cancer in children? What’s that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It’s not right, it’s utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That’s what I would say."

David Brown

Jun 19, 2018, 2:19:00 PM
On 19/06/18 17:47, Rick C. Hodgin wrote:

> As I say, I wouldn't waste my time worrying about anything related to me
> or CAlive, David.  Life's too short.  Let it go.
>

I am not worrying about you or CAlive. I'm just giving general advice
to people who post in groups I follow. Perhaps my posts will help
people, perhaps they will lead to discussions, perhaps I will learn from
other posts. That's my motivation. Whether you agree with what I write
or not is up to you - I really don't mind one way or the other.

Rick C. Hodgin

Jun 19, 2018, 2:22:33 PM
On 6/19/2018 2:18 PM, David Brown wrote:
> I am not worrying about you or CAlive.

Good to hear.

--
Rick C. Hodgin

David Brown

Jun 19, 2018, 2:26:41 PM
On 19/06/18 18:18, Keith Thompson wrote:
> David Brown <david...@hesbynett.no> writes:
> [...]
>> That reminds me - in maths, "a ∨ b" is the /maximum/ and "a ∧ b" is the
>> minimum, while you wanted to use the opposite symbols. I believe you
>> are familiar with boolean logic - "a ∨ b" is a way to write "a inclusive
>> or b", which is the maximum of two boolean values. If you like
>> axiomatic set theory and the Peano integers, then "maximum" is the same
>> as set union ∪, and "minimum" is set intersection ∩.
>
> I would oppose defining either "a ∨ b" or "a \/ b" as max(a, b)
> because most readers (myself included) are going to assume the
> symbol means "min".
>

Agreed.

> I would oppose defining either "a ∨ b" or "a \/ b" as min(a, b)
> because it's apparently the opposite of mathematical usage.

Agreed.

>
> I'm not at all convinced that min and max operators, with whatever
> syntax, are worth adding to the language at all. If they were to be
> added, agreeing on a syntax would be difficult.
>

Agreed.

The mathematical usage of these symbols can be surprising to people not
familiar with these fields. And even those that know them - for
example, in boolean logic - may not have considered them as maximum and
minimum operators. But I would be opposed to having symbol choices that
are a direct opposite of the mathematical ones.

So /if/ I were adding such operators to a language (and I almost
certainly would not), I'd be inclined to look at something different -
such as the old gcc <? and >? operators. Perhaps APL's maximum and
minimum operators a⌈b and a⌊b would be an option with existing practice,
but which would be equally unknown to almost everyone.

David Brown

Jun 19, 2018, 2:29:28 PM
On 19/06/18 18:01, Keith Thompson wrote:
> Juha Nieminen <nos...@thanks.invalid> writes:
>> In comp.lang.c++ red floyd <dont....@its.invalid> wrote:
>>> Not to mention that the "correct" way to do the "traditional" way is
>>> #define min(a,b) (((a) <= (b)) ? (a) : (b))
>>> #define max(a,b) (((a) >= (b)) ? (a) : (b))
>>
>> Which is very bad because a or b will be evaluated twice, which can be
>> bad for many reasons. For starters, if they are very heavy to evaluate
>> (eg. they are function calls that perform some heavy calculations),
>> it will take twice as long as necessary. More damningly, if either a
>> or b have any side-effects, those side-effects will be applied twice,
>> which may break things. (Side effects don't always necessarily
>> affect the variable itself, like in min(a++, b++), but can affect
>> other things somewhere else.)
>
> Which is why it's traditional to write macro names in all-caps (MIN()
> and MAX()) so the reader is reminded that they're macros and can have
> odd interactions with side effects.
>

Personally, I am not a fan of all-caps macro names in general. But I
think they help in cases like this, where they indicate that you have to
avoid side-effects (or be /really/ sure you know what you are doing!).
If a function-like macro is as safe as a function - perhaps using gcc's
extensions here - then I prefer a small letter name.

Robert Wessel

Jun 19, 2018, 3:00:26 PM
On Tue, 19 Jun 2018 20:26:31 +0200, David Brown
<david...@hesbynett.no> wrote:

>On 19/06/18 18:18, Keith Thompson wrote:
>> David Brown <david...@hesbynett.no> writes:
>> [...]
>>> That reminds me - in maths, "a ∨ b" is the /maximum/ and "a ∧ b" is the
>>> minimum, while you wanted to use the opposite symbols. I believe you
>>> are familiar with boolean logic - "a ∨ b" is a way to write "a inclusive
>>> or b", which is the maximum of two boolean values. If you like
>>> axiomatic set theory and the Peano integers, then "maximum" is the same
>>> as set union ∪, and "minimum" is set intersection ∩.
>>
>> I would oppose defining either "a ∨ b" or "a \/ b" as max(a, b)
>> because most readers (myself included) are going to assume the
>> symbol means "min".
>>
>
>Agreed.
>
>> I would oppose defining either "a ∨ b" or "a \/ b" as min(a, b)
>> because it's apparently the opposite of mathematical usage.
>
>Agreed.
>
>>
>> I'm not at all convinced that min and max operators, with whatever
>> syntax, are worth adding to the language at all. If they were to be
>> added, agreeing on a syntax would be difficult.
>>
>
>Agreed.
>
>The mathematical usage of these symbols can be surprising to people not
>familiar with these fields. And even those that know them - for
>example, in boolean logic - may not have considered them as maximum and
>minimum operators. But I would be opposed to having symbol choices that
>are a direct opposite of the mathematical ones.
>
>So /if/ I were adding such operators to a language (and I almost
>certainly would not), I'd be inclined to look at something different -
>such as the old gcc <? and >? operators. Perhaps APL's maximum and
>minimum operators a⌈b and a⌊b would be an option with existing practice,
>but which would be equally unknown to almost everyone.


Old HP-2000 BASIC just used MIN and MAX as the binary infix operators.
I also remember another language, whose name escapes me right now,
that also had binary infix MIN and MAX operators, but used them in the
reverse sense: (a MAX 10) would be interpreted as "use A, but with a
maximum value of 10", so in effect it was what is more commonly called
MIN. IIRC, it also had more conventional MIN() and MAX() functions.
Presumably I remember that because it bit me on the posterior.

Probably not very applicable to C.

David Brown

Jun 19, 2018, 3:12:23 PM
#define make_max(name, type) \
    static inline type max_ ## name (type a, type b) \
    { return a > b ? a : b; }

make_max(char, char)
make_max(uchar, unsigned char)
make_max(schar, signed char)
make_max(short, short)
make_max(ushort, unsigned short)
make_max(int, int)
make_max(uint, unsigned int)
make_max(long, long)
make_max(ulong, unsigned long)
make_max(llong, long long)
make_max(ullong, unsigned long long)
make_max(float, float)
make_max(double, double)
make_max(ldouble, long double)


#define max(a, b) _Generic((a), \
char : max_char, unsigned char : max_uchar, signed char : max_schar, \
short : max_short, unsigned short : max_ushort, \
int : max_int, unsigned int : max_uint, \
long : max_long, unsigned long : max_ulong, \
long long : max_llong, unsigned long long : max_ullong, \
float : max_float, double : max_double, long double : max_ldouble \
)(a, b)

David Brown

Jun 19, 2018, 3:13:58 PM
You can persuade C++ to use MIN and MAX as binary infix operators. I am
far from convinced it would be a good idea, but it is possible.
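
One way to do it (a sketch with invented helper names) is to hide a tag
object between overloaded < and > so that "a MAX b" parses as
"(a < max_tag{}) > b":

    #include <algorithm>

    struct max_tag {};
    template <typename T> struct max_lhs { T value; };

    template <typename T>
    max_lhs<T> operator< (T lhs, max_tag) { return {lhs}; }

    template <typename T>
    T operator> (max_lhs<T> lhs, T rhs) { return std::max(lhs.value, rhs); }

    #define MAX < max_tag{} >

    // usage:  int m = a MAX b;    // i.e. (a < max_tag{}) > b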

Alf P. Steinbach

Jun 19, 2018, 3:54:53 PM
On 19.06.2018 18:18, Keith Thompson wrote:
> David Brown <david...@hesbynett.no> writes:
> [...]
>> That reminds me - in maths, "a ∨ b" is the /maximum/ and "a ∧ b" is the
>> minimum, while you wanted to use the opposite symbols. I believe you
>> are familiar with boolean logic - "a ∨ b" is a way to write "a inclusive
>> or b", which is the maximum of two boolean values. If you like
>> axiomatic set theory and the Peano integers, then "maximum" is the same
>> as set union ∪, and "minimum" is set intersection ∩.
>
> I would oppose defining either "a ∨ b" or "a \/ b" as max(a, b)
> because most readers (myself included) are going to assume the
> symbol means "min".
>
> I would oppose defining either "a ∨ b" or "a \/ b" as min(a, b)
> because it's apparently the opposite of mathematical usage.

Both are good points.

But let's look at things from the boolean operations point of view.

I'd like "or" (and operator "||" which means the same) to just mean "max".

That way it would work nicely with three-value boolean logic.

Like, false, maybe and true as 0, 1 and 2.

E.g. or( false, maybe) == max( false, maybe ) == maybe.
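
A sketch of that idea (the names here are invented):

    enum class tribool { no = 0, maybe = 1, yes = 2 };

    // "or" as the maximum of the two truth values.
    // (Note that an overloaded || no longer short-circuits.)
    constexpr tribool operator||(tribool a, tribool b)
    {
        return a >= b ? a : b;
    }

    // (tribool::no || tribool::maybe) == tribool::maybe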


> I'm not at all convinced that min and max operators, with whatever
> syntax, are worth adding to the language at all. If they were to be
> added, agreeing on a syntax would be difficult.

Maybe the time has come for more Unicode symbols in programming languages.

Just not all the way to APL...


Cheers!

- Alf

Chris Vine

Jun 19, 2018, 4:28:23 PM
On Tue, 19 Jun 2018 21:12:11 +0200
Thanks, that's interesting. So far as I understand it comprises a
macro which constructs a series of inline functions, which is combined
with another generic macro which calls one of them, depending on type.
Somewhat like a template function in C++ but with more boilerplate.

The macros themselves would still be unhygienic but that wouldn't
matter here.

David Brown

Jun 20, 2018, 2:06:37 AM
By "unhygienic", do you mean they don't act like functions in the way
they handle parameters with side effects, or have multiple statements to
cause trouble in conditionals or loops without braces? No, the "max"
macro is not "unhygienic" - it is perfectly safe. It is fine to worry
about function-like macros that have risks, like the traditional C "max"
(or "MAX") macro. But labelling safe macros as "unhygienic" sounds like
prejudice.

(The "make_max" is a utility macro to reduce typing and the risk of
copy-and-paste errors - it is not a function-like macro.)

Chris Vine

Jun 20, 2018, 7:08:07 AM
On Wed, 20 Jun 2018 08:06:27 +0200
By "unhygienic" I mean that C89/90 macros (and as I understand it
from your example, also C11 macros) are "cut and paste" pre-processor
macros. Such macro systems are by definition unhygienic macro
systems. A macro system is generally called unhygienic when it does
not bind identifiers in the way that functions do: in particular when
it binds identifiers at the call site and not the definition site and
when it injects its own identifier names into the call site; in short,
when it does something at macro expansion time which could cause
identifier names to be accidentally captured or shadowed.

It is wrong to say that the problem arising from invoking:

#define max(a,b) (((a) >= (b)) ? (a) : (b))

with (++a,++b) arguments is due to lack of hygiene in that sense.
Instead, the lack of hygiene in C macros makes it more difficult to
resolve the problem in the traditional way, by initializing local
variables at the beginning of the macro definition to simulate eager
evaluation of the arguments. The vagaries of C syntax with respect to
statement blocks also contributes to that.

However, not all code that unhygienic macro systems emit is unsafe.
Where they are used to construct inline functions ('make_max' in your
example), and to hand off directly to such an inline function ('max' in
your example), that usage is safe. It is safe because inline functions
are hygienic. 'max' is just the textual substitution of a function
call, so unwanted identifier capture is irrelevant. That particular use
of the macro is hygienic.

The only criticism of your macro that one could make is that the
the binding of the '>' operator is taken at the point at which the
'make_max' macro is called and not at the point where that macro is
defined. That is not something that would bother me. (It might bother
some who are especially firm advocates of hygienic macro systems.)

In summary, all your code seems to do is to act like a template
function. That's fine.

Chris Vine

Jun 20, 2018, 7:18:15 AM
On Wed, 20 Jun 2018 12:07:53 +0100
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> wrote:
[snip]
> The only criticism of your macro that one could make is that the
> the binding of the '>' operator is taken at the point at which the
> 'make_max' macro is called and not at the point where that macro is
> defined. That is not something that would bother me. (It might bother
> some who are especially firm advocates of hygienic macro systems.)

And of course in C, as opposed to C++, you cannot (I believe, I don't
know C as well as you) rebind operator > anyway, so it's not a problem.

David Brown

Jun 20, 2018, 8:13:40 AM
All C and C++ macros are based on text substitution - "cut and paste"
macros.

No, that does not make them "by definition unhygienic" - at best, you
are defining "unhygienic" as "what C macros are", which is not at all
helpful.

> A macro system is generally called unhygienic when it does
> not bind identifiers in the way that functions do: in particular when
> it binds identifiers at the call site and not the definition site and
> when it injects its own identifier names into the call site; in short,
> when it does something at macro expansion time which could cause
> identifier names to be accidentally captured or shadowed.
>

To me, "unhygienic" would mean there is a potential problem or hazard
with a particular macro - /not/ that all macros are "unhygienic" because
/some/ macros could have problems.

> It is wrong to say that the problem arising from invoking:
>
> #define max(a,b) (((a) >= (b)) ? (a) : (b))
>
> with (++a,++b) arguments is due to lack of hygiene in that sense.
> Instead, the lack of hygience in C macros makes it more difficult to
> resolve the problem in the traditional way, by initializing local
> variables at the beginning of the macro definition to simulate eager
> evaluation of the arguments. The vagaries of C syntax with respect to
> statement blocks also contributes to that.
>
> However, not all code that unhygienic macro systems emit is unsafe.
> Where they are used to construct inline functions ('make_max' in your
> example), and to hand off directly to such an inline function ('max' in
> your example), that usage is safe. It is safe because inline functions
> are hygienic. 'max' is just the textual substitution of a function
> call, so unwanted identifier capture is irrelevant. That particular use
> of the macro is hygienic.

The macro "max" as I gave it is hygienic - it is safe to use in most
practical ways, similar to a function.

>
> The only criticism of your macro that one could make is that the
> the binding of the '>' operator is taken at the point at which the
> 'make_max' macro is called and not at the point where that macro is
> defined. That is not something that would bother me. (It might bother
> some who are especially firm advocates of hygienic macro systsms.)

We are talking about /C/ here. Operators are "bound" when the
post-processed source is compiled.

I don't know what languages you have in mind with "hygienic macros", but
I think what you are talking about is "functions" or perhaps
"templates", not macros.

In C, there are many sorts of macros for many kinds of usage. For
function-like macros, you can divide them into two groups - "safe" ones
that treat parameters with side-effects in a similar manner to
functions, and "unsafe" ones that can cause trouble. (There is also the
issue of multi-statement macros having trouble with conditionals and
loops without braces - usually that can be avoided by the "do {} while
(0)" idiom.) If you want to call these two types of function-like
macros "hygienic" and "unhygienic", that's okay.

David Brown

Jun 20, 2018, 9:58:04 AM
You can't "rebind" it in C++ either - unless perhaps you mean having a
virtual operator> member for a class.

Chris Vine

Jun 20, 2018, 12:51:15 PM
On Wed, 20 Jun 2018 14:13:29 +0200
David Brown <david...@hesbynett.no> wrote:
[snip]
> All C and C++ macros are based on text substitution - "cut and paste"
> macros.
>
> No, that does not make them "by definition unhygienic" - at best, you
> are defining "unhygienic" as "what C macros are", which is not at all
> helpful.

That's wrong.

Keith Thompson

Jun 20, 2018, 1:48:48 PM
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> writes:
> On Wed, 20 Jun 2018 08:06:27 +0200
> David Brown <david...@hesbynett.no> wrote:
>> On 19/06/18 22:28, Chris Vine wrote:
[...]
>> > The macros themselves would still be unhygienic but that wouldn't
>> > matter here.
>>
>> By "unhygienic", do you mean they don't act like functions in the way
>> they handle parameters with side effects, or have multiple statements to
>> cause trouble in conditionals or loops without braces? No, the "max"
>> macro is not "unhygienic" - it is perfectly safe. It is fine to worry
>> about function-like macros that have risks, like the traditional C "max"
>> (or "MAX") macro. But labelling safe macros as "unhygienic" sounds like
>> prejudice.
>>
>> (The "make_max" is a utility macro to reduce typing and the risk of
>> copy-and-paste errors - it is not a function-like macro.)
>
> By "unhygienic" I mean that C89/90 macros (and as I understand it
> from your example, also C11 macros) are "cut and paste" pre-processor
> macros. Such macros systems are by definition unhygienic macro
> systems. A macro system is generally called unhygienic when it does
> not bind identifiers in the way that functions do: in particular when
> it binds identifiers at the call site and not the definition site and
> when it injects its own identifier names into the call site; in short,
> when it does something at macro expansion time which could cause
> identifier names to be accidentally captured or shadowed.

Apparently the term "hygienic macro" is in some general use. Wikipedia
defines them as "macros whose expansion is guaranteed not to cause the
accidental capture of identifiers". The problem is collisions between
identifiers defined within the macro and identifiers in the scope
surrounding the macro invocation.

https://en.wikipedia.org/wiki/Hygienic_macro
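
A classic sketch of the capture problem (names made up):

    // The macro introduces its own 'tmp', which shadows the caller's 'tmp':
    #define SWAP(a, b) { int tmp = (a); (a) = (b); (b) = tmp; }

    void demo()
    {
        int tmp = 1, x = 2;
        SWAP(tmp, x);   // expands to { int tmp = (tmp); (tmp) = (x); (x) = tmp; }
                        // so the caller's tmp and x are not actually swapped
    }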

Chris Vine

Jun 20, 2018, 2:14:57 PM
Indeed, although I would include "accidental shadowing" as an addition
to "accidental capture".

None of this is novel, or specific to C. Common lisp (defmacro) macros
are unhygienic if used naively, although the injection of local
identifiers into call sites is addressed by using gensym, which is
guaranteed to provide unique (non-clashing) identifier names. The
binding of identifiers at the call site rather than the definition site
is largely circumvented by lisp 2's placing of function names in a
different namespace from other names, and by prohibiting the rebinding
of core function names.

By contrast scheme's syntax-rules/syntax-case macros are wholly hygienic
by default, as are its renaming macros. Reputedly rust's macros are
also hygienic, although I have not written any rust code to verify that.

Juha Nieminen

Jun 21, 2018, 3:26:49 AM
In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
> auto max(auto a, auto b) {
> return (a >= b) ? a : b;
> }

Wouldn't that be passing the variables by value, and returning the result
by value? Copying may be a very heavy operation in some cases. (Also, in
the case of C++, copying might be disabled completely.)

Juha Nieminen

Jun 21, 2018, 3:29:02 AM
In comp.lang.c++ Keith Thompson <ks...@mib.org> wrote:
> Which is why it's traditional to write macro names in all-caps (MIN()
> and MAX()) so the reader is reminded that they're macros and can have
> odd interactions with side effects.

Good thing that the C standard followed that convention with names
like assert and FILE.

David Brown

Jun 21, 2018, 4:06:29 AM
Yes, it is by value - but compilers will omit trivial copying when
possible (such a function is likely to be inline). In C++, copy elision
is allowed to skip copy constructors (but you can't make copies of
uncopyable objects).

A more complete implementation - such as for the standard library -
would have const and constexpr, references, efficiency considerations
(like trying to move rather than copy).


Juha Nieminen

Jun 21, 2018, 8:33:44 AM
In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
> On 21/06/18 09:26, Juha Nieminen wrote:
>> In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
>>> auto max(auto a, auto b) {
>>> return (a >= b) ? a : b;
>>> }
>>
>> Wouldn't that be passing the variables by value, and returning the result
>> by value? Copying may be a very heavy operation in some cases. (Also, in
>> the case of C++, copying might be disabled completely.)
>>
>
> Yes, it is by value - but compilers will omit trivial copying when
> possible (such a function is likely to be inline). In C++, copy elision
> is allowed to skip copy constructors (but you can't make copies of
> uncopyable objects).

But suppose I wanted a reference to one of two large data containers,
depending on which one compares "larger than":

std::vector<T> hugeVector1, hugeVector2;
...
std::vector<T>& bigger = max(hugeVector1, hugeVector2);

I would prefer if max() took references and returned a reference,
so that bigger would be a reference to either the original hugeVector1
or hugeVector2. For two reasons: Modifying 'bigger' would modify the
original (rather than a temporary copy) and, of course, to elide
needless copying (even if we aren't modifying anything).
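
A sketch of a reference-preserving version along those lines ("max_ref"
is an invented name; std::max's const-reference overload already covers
the read-only case):

    // Returns a reference to whichever argument compares greater-or-equal,
    // so the caller can modify the chosen container in place.
    template <typename T>
    T& max_ref(T& a, T& b)
    {
        return (a >= b) ? a : b;
    }

    // std::vector<T>& bigger = max_ref(hugeVector1, hugeVector2);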

Richard Bos

Jun 30, 2018, 6:29:44 AM
Keith Thompson <ks...@mib.org> wrote:

> I'm not at all convinced that min and max operators, with whatever
> syntax, are worth adding to the language at all. If they were to be
> added, agreeing on a syntax would be difficult.

It's a very simple idea, which has been mentioned before, but has
(ttbomk) never appeared as an extension in any compiler _in a
generalisable, Standardisable manner_. Ad-hoc hacks, yes; something we
could have in the Standard, no.
That, to me, strongly suggests that there is no real desire for such a
feature in general. Everybody wants a _specific_ instance from time to
time, but those specific instances are too easy to write with what we
already have for the general feature to be worth including.

Richard

Bart

Jun 30, 2018, 7:17:20 AM
If min/max were operators, so that they could be written as 'a max b',
perhaps as well as 'max(a,b)', then the following becomes possible:

a max= b;

as well as:

A[++i] max= lowerlim;

This I would use (as I do use when it is available elsewhere).

(And actually, the x64 processor has native MIN and MAX instructions for
floating point; presumably they were considered useful enough to
implement in hardware.)


--
bartc

Not C:

real x, y
x max:= y

Compiler output:

movq XMM0, [x]
maxsd XMM0, [y]
movq [x], XMM0

Wouter Verhelst

Jul 1, 2018, 3:44:38 AM
On 30-06-18 13:17, Bart wrote:
> On 30/06/2018 11:29, Richard Bos wrote:
>> Keith Thompson <ks...@mib.org> wrote:
>>
>>> I'm not at all convinced that min and max operators, with whatever
>>> syntax, are worth adding to the language at all.  If they were to be
>>> added, agreeing on a syntax would be difficult.
>>
>> It's a very simple idea, which has been mentioned before, but has
>> (ttbomk) never appeared as an extension in any compiler _in a
>> generalisable, Standardisable manner_. Ad-hoc hacks, yes; something we
>> could have in the Standard, no.
>> That, to me, strongly suggests that there is no real desire for such a
>> feature in general. Everybody wants a _specific_ instance from time to
>> time, but those specific instances are too easy to write with what we
>> already have for the general feature to be worth including.
>
> If min/max were operators, so that they could be written as 'a max b',
> perhaps as well as 'max(a,b)', then the following becomes possible:
>
>    a max= b;

Which can be written as a = max(a, b); just as well

> as well as:
>
>    A[++i] max= lowerlim;

A[++i] = max(A[i], b);

... and then max just needs to either be a safe macro or an inline function.

With gcc, you can make it a safe macro like so:

#define max(a, b) \
    ({ typeof(a) _a = (a); typeof(b) _b = (b); _a > _b ? _a : _b; })

typeof is a gcc extension that yields the type of an expression without
evaluating it (so it has no side effects), except where evaluation is
unavoidable because the type itself depends on a run-time value (as with
variable-length arrays):

#include <stdio.h>
int main(void) {
    int i = 0;
    char c = 2;
    typeof(i++) j = i;
    printf("%d %d\n", i, j); // output: 0 0
    typeof(++i == 1 ? i : c) c2 = c;
    printf("%d %d\n", i, (int)c2); // output: 0 2
}

The ({ ...code... }) syntax is also a gcc extension called a "statement
expression", which evaluates to the value of the final expression in the
inner code block.

https://gcc.gnu.org/onlinedocs/gcc-8.1.0/gcc/Statement-Exprs.html

> This I would use (as I do use when it is available elsewhere).
>
> (And actually, the x64 processor has native MIN and MAX instructions for
> floating point; presumably they were considered useful enough to
> implement in hardware.)

I'm sure a decent optimizing compiler will optimize something like a max
function or safe macro implemented with a ternary operator or a simple
if() to that instruction.

Barry Schwarz

Jul 1, 2018, 4:54:26 AM
On Sun, 1 Jul 2018 09:23:54 +0200, Wouter Verhelst <w...@uter.be> wrote:

>On 30-06-18 13:17, Bart wrote:
>> On 30/06/2018 11:29, Richard Bos wrote:
>>> Keith Thompson <ks...@mib.org> wrote:
>>>
>>>> I'm not at all convinced that min and max operators, with whatever
>>>> syntax, are worth adding to the language at all.  If they were to be
>>>> added, agreeing on a syntax would be difficult.
>>>
>>> It's a very simple idea, which has been mentioned before, but has
>>> (ttbomk) never appeared as an extension in any compiler _in a
>>> generalisable, Standardisable manner_. Ad-hoc hacks, yes; something we
>>> could have in the Standard, no.
>>> That, to me, strongly suggests that there is no real desire for such a
>>> feature in general. Everybody wants a _specific_ instance from time to
>>> time, but those specific instances are too easy to write with what we
>>> already have for the general feature to be worth including.
>>
>> If min/max were operators, so that they could be written as 'a max b',
>> perhaps as well as 'max(a,b)', then the following becomes possible:
>>
>>    a max= b;
>
>Which can be written as a = max(a, b); just as well
>
>> as well as:
>>
>>    A[++i] max= lowerlim;
>
>A[++i] = max(A[i], b);

Unfortunately, this invokes undefined behavior.


--
Remove del for email

Bart

Jul 1, 2018, 5:08:25 AM
On 01/07/2018 08:23, Wouter Verhelst wrote:
> On 30-06-18 13:17, Bart wrote:

>> If min/max were operators, so that they could be written as 'a max b',
>> perhaps as well as 'max(a,b)', then the following becomes possible:
>>
>>    a max= b;
>
> Which can be written as a = max(a, b); just as well

Sure. As can 'a += b'.

But there must be a reason why such 'assignment operators' (not
'augmented assignment' as I thought they were called) were present even
in early C.

>> (And actually, the x64 processor has native MIN and MAX instructions for
>> floating point; presumably they were considered useful enough to
>> implement in hardware.)
>
> I'm sure a decent optimizing compiler will optimize something like a max
> function or safe macro implemented with a trinary operator or a simple
> if() to that instruction.

I don't use an optimising compiler. Evaluating 'max(a,b)' when a,b are
ints, and 'max' is a function (not a macro, not even an inline function)
involves executing some 14 instructions, including call, return and
branches [on x64].

But when 'max' is an intrinsic operator, then it can be trivially done
in 4 instructions when a and b are int (3 is possible), and 2
instructions when they are floats, none of which are branches.

You also get the advantage of overloading via the usual operator
mechanisms, without needing to rely on anything else (safe macros,
overloaded functions etc).


--
bart

Alf P. Steinbach

Jul 1, 2018, 6:15:42 AM
On 01.07.2018 11:08, Bart wrote:
> On 01/07/2018 08:23, Wouter Verhelst wrote:
>> On 30-06-18 13:17, Bart wrote:
>
>>> If min/max were operators, so that they could be written as 'a max b',
>>> perhaps as well as 'max(a,b)', then the following becomes possible:
>>>
>>>     a max= b;
>>
>> Which can be written as a = max(a, b); just as well
>
> Sure. As can 'a += b'.
>
> But there must be a reason why such 'assignment operators' (not
> 'augmented assignment' as I thought they were called) were present even
> in early C.

They corresponded directly, and still correspond, to very common
processor instructions, and at that time it was more the programmer's
job, and not the compiler's, to optimize the resulting machine code.

Typically an integer add instruction, say, adds to a register, like on
the i8086 (the original IBM PC processor, except for data bus width):

add ax, bx ; add the contents of register ax, to bx

So in C and C++:

ax += bx;

Of course, historically that was ten years or so after development of C
started, PC 1981 versus plain C 1971. I think the original C development
was on a PDP-10. I had some limited exposure to the PDP-11, but all I
remember about the assembly language was that it was peppered with @
signs (probably indicating macros), and that the registers were numbered
and memory-mapped. But I'm pretty sure that if the PDP-11 didn't have
add to register, I'd remember that. And so, presumably also the PDP-10.

Disclaimer: maybe C development started a year or two later. And maybe
it was a PDP-9, if such a beast existed. Google could probably cough up
the more exact history, but it doesn't matter here.

[snip]


Cheers!,

- Alf

Alf P. Steinbach

Jul 1, 2018, 6:17:21 AM
On 01.07.2018 12:15, Alf P. Steinbach wrote:
> [snip]
> Typically an integer add instruction, say, adds to a register, like on
> the i8086 (the original IBM PC processor, except for data bus width):
>
>     add ax, bx    ; add the contents of register ax, to bx

"of register bx, to ax"


> So in C and C++:
>
>     ax += bx;
> [snip]


Sorry,

- Alf

Ben Bacarisse

unread,
Jul 1, 2018, 7:20:50 AM7/1/18
to
"Alf P. Steinbach" <alf.p.stein...@gmail.com> writes:

> On 01.07.2018 11:08, Bart wrote:
<snip>
>> But there must be a reason why such 'assignment operators' (not
>> 'augmented assignment' as I thought they were called) were present
>> even in early C.
>
> They corresponded directly, and still correspond, to very common
> processor instructions, and at that time it was more the programmer's
> job, and not the compiler's, to optimize the resulting machine code.

I think that is an unlikely explanation. First, they came into C from
Algol68 via B, and Algol68 never had any intention of providing
operators for the purpose of helping the programmer optimise code!
Secondly, there never was a B compiler in the traditional sense of one
that generated machine instructions.

However, there's a grain of truth here. B was implemented as "threaded
code" where each operation is translated into a jump to the code that
performs the appropriate action. There was no possibility of doing any
optimisation in 8K (on a PDP-7), but a =+ b was bound to be more
efficient than a = a + b simply because it could be translated to a
single jump.

> I think the original C development was on a PDP-10.

Even "New B" development was done on the PDP-11 and anything that can
reasonably be called C certainly was.

> Disclaimer: maybe C development started a year or two later. And maybe
> it was a PDP-9, if such a beast existed.

B was born on the PDP-7 and C on the PDP-11.

> Google could probably cough
> up the more exact history, but it doesn't matter here.

True, but it's nice to keep the history straight.

--
Ben.

Bart

unread,
Jul 1, 2018, 7:20:50 AM7/1/18
to
On 01/07/2018 11:15, Alf P. Steinbach wrote:
> On 01.07.2018 11:08, Bart wrote:
>> On 01/07/2018 08:23, Wouter Verhelst wrote:
>>> On 30-06-18 13:17, Bart wrote:
>>
>>>> If min/max were operators, so that they could be written as 'a max b',
>>>> perhaps as well as 'max(a,b)', then the following becomes possible:
>>>>
>>>>     a max= b;
>>>
>>> Which can be written as a = max(a, b); just as well
>>
>> Sure. As can 'a += b'.
>>
>> But there must be a reason why such 'assignment operators' (not
>> 'augmented assignment' as I thought they were called) were present
>> even in early C.
>
> They corresponded directly, and still correspond, to very common
> processor instructions, and at that time it was more the programmer's
> job, and not the compiler's, to optimize the resulting machine code.


Sometimes they will correspond to hardware instructions, other times
they don't (e.g. floating point on modern x86).

Such assignments were part of Algol68 (created around 1968 I
believe), and there were doubtless earlier precedents.

They are useful to better express intent, and make it easier for a
compiler to generate code (otherwise they have to decide whether a[f(x)].b
= a[f(x)].b + 1 is the same thing as a[f(x)].b += 1; it's a lot easier
if the programmer just writes the latter).

They can also indicate something subtly different from A = A+1, as in my
more complex example, where f(x) with possible side-effects might be
called once or twice.

Also in this example (not from C):

s = s + "A"

This takes a string s, creates a new string with "A" appended, then
assigns back into s, destroying the original. Lots of string processing
going on.

But write it like this:

s += "A"

and now it can be taken to mean in-place append. Given adequate
capacity, which is usually the case, this now just adds one byte onto
the end of s, and updates its length.
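
(A rough C sketch of what "in-place append" means here, using a
hypothetical string type with a separate length and capacity -- not any
particular library's API:)

    #include <stddef.h>

    struct str { char *data; size_t len, cap; };

    /* append one char without reallocating, assuming spare capacity */
    void append_char(struct str *s, char c) {
        if (s->len + 1 < s->cap) {        /* room for c plus the NUL */
            s->data[s->len++] = c;
            s->data[s->len] = '\0';
        }
        /* a real implementation would grow the buffer otherwise */
    }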

> Of course, historically that was ten years or so after development of C
> started, PC 1981 versus plain C 1971. I think the original C development
> was on a PDP-10. I had some limited exposure to the PDP-11, but all I
> remember about the assembly language was that it was peppered with @
> signs (probably indicating macros), and that the registers were numbered
> and memory-mapped. But I'm pretty sure that if the PDP-11 didn't have
> add to register, I'd remember that. And so, presumably also the PDP-10.

I don't think C was developed for the PDP-10, not at first anyway. But as I
said, augmented assignment was not just limited to C; it has other
benefits than just mapping neatly to hardware instructions.


--
bart

Rick C. Hodgin

unread,
Jul 1, 2018, 9:02:15 AM7/1/18
to
On 06/30/2018 07:17 AM, Bart wrote:
> If min/max were operators, so that they could be written as 'a max b',
> perhaps as well as 'max(a,b)', then the following becomes possible:
>
>    a max= b;


You won't find a better syntax than:

a /\= b;

It takes a moment to learn, but once you get the concept it's easy to
use everywhere.

a = (b /\ c) * d;

I've thought about your comment regarding it not working due to the
use of the \ character in certain circumstances. But, this wouldn't
be a \ character in and of itself, but would be the /\ combination in
some cases, and \/ in others. The compiler would have no issues in
parsing that.

--
Rick C. Hodgin

Wouter Verhelst

unread,
Jul 1, 2018, 9:29:27 AM7/1/18
to
On 01-07-18 11:08, Bart wrote:
> I don't use an optimising compiler.

Your loss.

> Evaluating 'max(a,b)' when a,b are
> ints, and 'max' is a function (not a macro, not even an inline function)
> involves executing some 14 instructions, including call, return and
> branches [on x64].

Sure, but on a decent optimizing compiler there won't be a difference --
and no need to complicate the language with extra operators etc.

To me, that is a huge benefit.

Ben Bacarisse

unread,
Jul 1, 2018, 12:15:16 PM7/1/18
to
"Rick C. Hodgin" <rick.c...@gmail.com> writes:

> On 06/30/2018 07:17 AM, Bart wrote:
>> If min/max were operators, so that they could be written as 'a max b',
>> perhaps as well as 'max(a,b)', then the following becomes possible:
>>
>>    a max= b;
>
> You won't find a better syntax than:
>
> a /\= b;

Except that (a) there is an existing implementation that uses <? and >?;
(b) your symbols are the reverse of what some people would expect; and
(c) there is an ambiguity to resolve when /\ occurs at the end of a
line.

There is a strong argument against proliferating incompatible extensions
to C. (b) is, in my opinion, comparatively minor, but (c) must be
resolved in such a way as to respect the meaning of existing code, making
the new operator a little odd (see below).

> It takes a moment to learn, but once you get the concept it's easy to
> use everywhere.
>
> a = (b /\ c) * d;
>
> I've thought about your comment regarding it not working due to the
> use of the \ character in certain circumstances. But, this wouldn't
> be a \ character in and of itself, but would be the /\ combination in
> some cases, and \/ in others. The compiler would have no issues in
> parsing that.

The problem is not the compiler -- you can make a compiler that
implements whatever rules you want about /\. The problem is whether or
not you break existing code because this

a = b /\
2;

already has a meaning (the backslash-newline is spliced away, so it parses
as a = b / 2;).  You can, of course, write the new rules so that
the meaning of such code does not change, but it makes the new operator
a little odd in that you can't break an expression across lines after
it.

--
Ben.

Rick C. Hodgin

unread,
Jul 1, 2018, 12:35:49 PM7/1/18
to
On 07/01/2018 12:14 PM, Ben Bacarisse wrote:
> "Rick C. Hodgin" <rick.c...@gmail.com> writes:
>
>> On 06/30/2018 07:17 AM, Bart wrote:
>>> If min/max were operators, so that they could be written as 'a max b',
>>> perhaps as well as 'max(a,b)', then the following becomes possible:
>>>
>>> a max= b;
>>
>> You won't find a better syntax than:
>>
>> a /\= b;
>
> Except that (a) there is an existing implementation that uses <? and >?;

I considered <? and >? when mentioned before, but they're not for me.
I think /\ and \/ convey the meaning very clearly, and I've been
considering the existing mathematical use of the opposite direction, and
I really think they got it wrong.  Especially when they use a closed
triangle.

My opinion. In CAlive, it will be as I have indicated. Other C-like
languages, including C or C++, are able to implement them however they
choose ... though I do not believe they will ever be implemented in C
or C++, so it's a non-issue.

My original inquiry was to see if there was a min or max operator. I
have never seen one in my 30+ years of programming. I could've used
one in database programming a great many times.  I have always had to use
the min(a,b) syntax. An operator seems a better choice.

> (b) your symbols are the reverse of what some people would expect; and
> (c) there is an ambiguity to resolve when /\ occurs at the end of a
> line.
>
> There is a strong argument against proliferating incompatible extensions
> to C. (b) is, in my opinion, comparatively minor, but (c) must be
> resolved in such a way as to respect the meaning of existing code making
> the new operator a little off (see below).
>
>> It takes a moment to learn, but once you get the concept it's easy to
>> use everywhere.
>>
>> a = (b /\ c) * d;
>>
>> I've thought about your comment regarding it not working due to the
>> use of the \ character in certain circumstances. But, this wouldn't
>> be a \ character in and of itself, but would be the /\ combination in
>> some cases, and \/ in others. The compiler would have no issues in
>> parsing that.
>
> The problem is not the compiler -- you can make a compiler that
> implements whatever rules you want about /\. The problem is whether or
> not you break existing code because this
>
> a = b /\
> 2;
>
> already has a meaning. You can, of course, write the new rules so that
> the meaning of such code does not change, but it makes the new operator
> a little odd in that you can't break an expression across lines after
> it.


Does the \ work everywhere in C? Or just in a #define block? I don't
think I've ever seen it anywhere else.

The rule has been written, Ben B. In CAlive, the native interpretation
of your example will be as "a = b /\ 2;":

--
Rick C. Hodgin

Rick C. Hodgin

unread,
Jul 1, 2018, 12:36:37 PM7/1/18
to
On 07/01/2018 12:30 PM, Rick C. Hodgin wrote:
> The rule has been written, Ben B. In CAlive, the native interpretation
> of your example will be as "a = b /\ 2;":

To be clear about this, in CAlive if you wanted to use your example of
a continuation it would need to be written as:

Rick C. Hodgin

unread,
Jul 1, 2018, 12:44:42 PM7/1/18
to
On 07/01/2018 12:37 PM, Bart wrote:
> On 01/07/2018 17:30, Rick C. Hodgin wrote:
>> The rule has been written, Ben B.  In CAlive, the native interpretation
>> of your example will be as "a = b /\ 2;":
>
> Is /\ min or max? Because, you know, one end is small so it could mean
> min, but then the other end is fat so it could be max.

You've mentioned that before.

> Or are you supposed to use device such as that /\ looks a bit like 'A',
> which is the second letter of MAX? Assuming it is actually max.

The arrow points toward the value being sought. The up arrow /\ is for
max (upper value), and the down arrow \/ is for min (lower value).

People wouldn't use the new operator without learning what it is. It
won't be confusing to people who take advantage of the new feature. If
someone came across the syntax in code and didn't know what it meant, a
nearby comment may explain it, or a simple test.ca program compiled in
CAlive would let them see the result.

--
Rick C. Hodgin

Bart

unread,
Jul 1, 2018, 12:47:22 PM7/1/18
to
Don't forget the cost also of having 'max', if implemented as a macro
expanded to that safe macro at each instance of max, of expanding that
macro each time max occurs, reparsing the tokens again, building that
bit of AST, doing the same optimisations and the same <max> pattern
recognition and the conversion into intrinsic versions or into optimised
code, over and over again.

This is the extra cost of having these things defined on top of the
language instead of being built-in. In the case of C++, this seems to
apply to just about everything in it.

Yes it is nice to have a good optimiser, but there's plenty more it can
spend its time on. Implementing 'max' etc efficiently can be done once,
instead of in N different places in a program, every time it is
compiled, and for every other programmer and application that uses it.

--
bart

Bart

unread,
Jul 1, 2018, 1:42:08 PM7/1/18
to
On 01/07/2018 17:44, Rick C. Hodgin wrote:

>> Or are you supposed to use device such as that /\ looks a bit like 'A',
>> which is the second letter of MAX? Assuming it is actually max.
>
> The arrow points toward the value being sought. The up arrow /\ is for
> max (upper value), and the down arrow \/ is for min (lower value).

So is the max value at the top or the bottom?

If the max value at the top, and /\ means MAX, then that's the opposite
of how two values, or a set of values if sorted, are usually displayd on
a page.

I'm sorry but it's completely unintuitive. I've probably said that
before too.

--
bart

Rick C. Hodgin

unread,
Jul 1, 2018, 1:45:38 PM7/1/18
to
On 07/01/2018 01:41 PM, Bart wrote:
> On 01/07/2018 17:44, Rick C. Hodgin wrote:
>
>>> Or are you supposed to use device such as that /\ looks a bit like 'A',
>>> which is the second letter of MAX? Assuming it is actually max.
>>
>> The arrow points toward the value being sought.  The up arrow /\ is for
>> max (upper value), and the down arrow \/ is for min (lower value).
>
> So is the max value at the top or the bottom?
>
> If the max value at the top, and /\ means MAX, then that's the opposite
> of how two values, or a set of values if sorted, are usually displayd on
> a page.

Read carefully this time:

The arrow points toward the value being sought. The up arrow /\ is for
max (upper value), and the down arrow \/ is for min (lower value).

If that's not clear ... well, I'm sorry.

> I'm sorry but it's completely unintuitive. I've probably said that
> before too.

You did.  And I disagree.  I've shown the idea to developers in real life
and they've been impressed. To them it was completely intuitive.

--
Rick C. Hodgin

David Brown

unread,
Jul 2, 2018, 1:40:20 PM7/2/18
to
On 01/07/18 18:47, Bart wrote:
> On 01/07/2018 14:27, Wouter Verhelst wrote:
>> On 01-07-18 11:08, Bart wrote:
>>> I don't use an optimising compiler.
>>
>> Your loss.
>>
>>> Evaluating 'max(a,b)' when a,b are
>>> ints, and 'max' is a function (not a macro, not even an inline function)
>>> involves executing some 14 instructions, including call, return and
>>> branches [on x64].
>>
>> Sure, but on a decent optimizing compiler there won't be a difference --
>> and no need to complicate the language with extra operators etc.
>>
>> To me, that is a huge benefit.
>
> Don't forget the cost also of having 'max', if implemented as a macro
> expanded to that safe macro at each instance of max, of expanding that
> macro each time max occurs, reparsing the tokens again, building that
> bit of AST, doing the same optimisations and the same <max> pattern
> recognition and the conversion into intrinsic versions or into optimised
> code, over and over again.

This extra cost - for a macro like "max" - is negligible. You can argue
that the compilation time is the sum of lots of such negligible little
parts, but you cannot reasonably claim that having an operator for max
rather than a macro will make any measurable difference to compile times.

And for optimising compilers, there is even less difference - because
these do not turn operators into "optimised code". They turn them into
AST's and other internal structures, combined with surrounding code, and
then optimise those. Pattern matching for turning arithmetic operations
into code sequences comes later, and does not necessarily match up with
the arithmetic operations that you had in the source code. Something
like a "max" operation is quite likely to be split into parts and then
separated by other code to improve processor scheduling.

>
> This is the extra cost of having these things defined on top of the
> language instead of being built-in. In the case of C++, this seems to
> apply to just about everything in it.
>
> Yes it is nice to have a good optimiser, but there's plenty more it can
> spend its time on. Implementing 'max' etc efficiently can be done once,
> instead of in N different places in a program, every time it is
> compiled, and for every other programmer and application that uses it.
>

You have that ass-backwards. Once you have a good optimiser, you no
longer have to concern yourself about trying to implement something like
"max" efficiently - the compiler will handle it smoothly already.


Tim Rentsch

unread,
Jul 2, 2018, 1:40:35 PM7/2/18
to
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> writes:

> On Wed, 20 Jun 2018 08:06:27 +0200
> David Brown <david...@hesbynett.no> wrote:
>
>> On 19/06/18 22:28, Chris Vine wrote:
>>
>>> On Tue, 19 Jun 2018 21:12:11 +0200
>>> David Brown <david...@hesbynett.no> wrote:
>>>
>>>> On 19/06/18 16:52, Chris Vine wrote:
>>>>
>>>>> On Tue, 19 Jun 2018 14:43:22 +0200
>>>>> David Brown <david...@hesbynett.no> wrote:
>>>>> [snip]
>>>>>
>>>>>> With enough effort, you can get a variadic hygienic polymorphic min and
>>>>>> max macro in C11.
>>>>>
>>>>> Do you know enough about C11 macros to show how that is done? (I would
>>>>> be interested for that matter in how you get a non-variadic hygienic
>>>>> macro in C, if that is by using a different technique from the one I
>>>>> mentioned. I am still stuck on C89/90 as far as C pre-processors are
>>>>> concerned.)
>>>>
>>>> #define make_max(name, type) \
>>>> static inline type max_ ## name (type a, type b) { return a > b ? a
>>>> : b; }
>>>>
>>>> make_max(char, char)
>>>> make_max(uchar, unsigned char)
>>>> make_max(schar, signed char)
>>>> make_max(short, short)
>>>> make_max(ushort, unsigned short)
>>>> make_max(int, int)
>>>> make_max(uint, unsigned int)
>>>> make_max(long, long)
>>>> make_max(ulong, unsigned long)
>>>> make_max(llong, long long)
>>>> make_max(ullong, unsigned long long)
>>>> make_max(float, float)
>>>> make_max(double, double)
>>>> make_max(ldouble, long double)
>>>>
>>>>
>>>> #define max(a, b) _Generic((a), \
>>>> char : max_char, unsigned char : max_uchar, signed char : max_schar, \
>>>> short : max_short, unsigned short : max_ushort, \
>>>> int : max_int, unsigned int : max_int, \
>>>> long : max_long, unsigned long : max_ulong, \
>>>> long long : max_llong, unsigned long long : max_ullong, \
>>>> float : max_float, double : max_double, long double : max_ldouble \
>>>> )(a, b)

[...]

> The only criticism of your macro that one could make is that the
> the binding of the '>' operator is taken at the point at which the
> 'make_max' macro is called and not at the point where that macro is
> defined. [...]

I see at least three other criticisms:

(1) Doesn't cover all standard types;

(2) A bug in one of the types it does cover;

(3) Falls down badly in cases where the two arguments have
different types.

Ben Bacarisse

unread,
Jul 2, 2018, 1:40:41 PM7/2/18
to
You appear to be multi-posting in comp.lang.c and comp.lang.c++.
Multi-posting (as opposed to cross-posting) is generally regarded as bad
netiquette.

--
Ben.

Wouter Verhelst

unread,
Jul 2, 2018, 1:40:46 PM7/2/18
to
On 01-07-18 18:47, Bart wrote:
> On 01/07/2018 14:27, Wouter Verhelst wrote:
>> On 01-07-18 11:08, Bart wrote:
>>> I don't use an optimising compiler.
>>
>> Your loss.
>>
>>> Evaluating 'max(a,b)' when a,b are
>>> ints, and 'max' is a function (not a macro, not even an inline function)
>>> involves executing some 14 instructions, including call, return and
>>> branches [on x64].
>>
>> Sure, but on a decent optimizing compiler there won't be a difference --
>> and no need to complicate the language with extra operators etc.
>>
>> To me, that is a huge benefit.
>
> Don't forget the cost also of having 'max', if implemented as a macro
> expanded to that safe macro at each instance of max, of expanding that
> macro each time max occurs, reparsing the tokens again, building that
> bit of AST, doing the same optimisations and the same <max> pattern
> recognition and the conversion into intrinsic versions or into optimised
> code, over and over again.

Yeah, that's true, but who cares? If the compiler is slow during compile
time so that it can produce a fast program, I couldn't care less.

The resulting program will have the optimized max opcode at every place
where it is needed, and that's what matters -- not how the compiler gets
there, IMO.

> This is the extra cost of having these things defined on top of the
> language instead of being built-in. In the case of C++, this seems to
> apply to just about everything in it.
>
> Yes it is nice to have a good optimiser, but there's plenty more it can
> spend its time on. Implementing 'max' etc efficiently can be done once,
> instead of in N different places in a program, every time it is
> compiled, and for every other programmer and application that uses it.

As far as I'm concerned, the compiler can spend an hour optimizing a
simple "hello world" program if that means that program's runtime is cut
in half. Yes, I'm exaggerating, but you get the point.

Given your past statements, I'm sure you disagree with that. That's fine :-)

Tim Rentsch

unread,
Jul 2, 2018, 1:41:24 PM7/2/18
to
Chris Vine <chris@cvine--nospam--.freeserve.co.uk> writes:

> On Tue, 19 Jun 2018 14:43:22 +0200
> David Brown <david...@hesbynett.no> wrote:
> [snip]
>
>> With enough effort, you can get a variadic hygienic polymorphic min and
>> max macro in C11.
>
> Do you know enough about C11 macros to show how that is done? (I would
> be interested for that matter in how you get a non-variadic hygienic
> macro in C, if that is by using a different technique from the one I
> mentioned. I am still stuck on C89/90 as far as C pre-processors are
> concerned.)

There are two parts to this question - the variadic part, and
the polymorphic part.

The variadic part is done using preprocessor capabilities added
in C99, and which are also present in C++ (since C++11). A macro
definition of the form

#define X( ... ) [etc]

or

#define XX( parameter-list , ... ) [etc]

may use __VA_ARGS__ to refer to all the many arguments covered by
the ellipsis in the macro definition.

The polymorphic part is done using the type-selecting capability
provided by _Generic, which was added in C11. A use of _Generic
has a control expression, whose type is used to select from a
set of type-labeled expressions (like a 'switch()' on types),
including an optional 'default:' case in the event that no other
type-case matches.
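
(A minimal, hypothetical illustration of the two mechanisms on their own,
separate from the full variadic/polymorphic max described below; the LOG
and TYPE_NAME names exist only for this example:)

    #include <stdio.h>

    /* variadic part: __VA_ARGS__ forwards the trailing arguments */
    #define LOG(fmt, ...)  printf(fmt, __VA_ARGS__)

    /* polymorphic part: _Generic selects on the type of its operand */
    #define TYPE_NAME(x) _Generic((x), \
            int: "int", double: "double", default: "other")

    int main(void) {
        LOG("%s %s\n", TYPE_NAME(1), TYPE_NAME(1.0));  /* prints: int double */
        return 0;
    }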

Writing a variadic, polymorphic max function (or min function) is a
non-trivial exercise, along both axes. It isn't too hard to write
something that mostly works in the common cases. It's much harder
to write something that works correctly in some less common but
still important corner cases. I put something together which did a
fair job of covering a lot of the bases (and also would work in C++,
using some '#if __cplusplus' selections), and it ended up being a
fair chunk of code, even after making use of a pre-written macro
for dealing with variadicness. A full solution, covering a full
range of possibilities that the Standard(s) allow, is a significant
exercise.

Keith Thompson

unread,
Jul 2, 2018, 1:41:25 PM7/2/18
to
"Alf P. Steinbach" <alf.p.stein...@gmail.com> writes:
> On 01.07.2018 11:08, Bart wrote:
[...]
>> But there must be a reason why such 'assignment operators' (not
>> 'augmented assignment' as I thought they were called) were present even
>> in early C.
>
> They corresponded directly, and still correspond, to very common
> processor instructions, and at that time it was more the programmer's
> job, and not the compiler's, to optimize the resulting machine code.
>
[...]
>
> Of course, historically that was ten years or so after development of C
> started, PC 1981 versus plain C 1971. I think the original C development
> was on a PDP-10.

Off-by-three error. It was a PDP-7.

> I had some limited exposure to the PDP-11, but all I
> remember about the assembly language was that it was peppered with @
> signs (probably indicating macros),

No, @ (or parentheses) indicate "deferred" addressing modes.
For example R3 refers to the R3 register, (R3) or @R3 refers to
the memory location whose address is stored in R3.

> and that the registers were numbered
> and memory-mapped.

Numbered, yes (R0..R7, where R7 is the PC (Program Counter)), but not
memory-mapped.

> But I'm pretty sure that if the PDP-11 didn't have
> add to register, I'd remember that. And so, presumably also the PDP-10.

If I recall correctly,

ADD #42, R0

would add 42 to the contents of R0 and store the sum in R0.

I'm less familiar with the PDP-7, but I think it also had 2-operand
instructions, where one of the operands was the target.

[...]

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Rick C. Hodgin

unread,
Jul 2, 2018, 2:08:26 PM7/2/18
to
Who are you replying to?

--
Rick C. Hodgin

Rick C. Hodgin

unread,
Jul 2, 2018, 2:43:33 PM7/2/18
to
On Monday, July 2, 2018 at 1:40:41 PM UTC-4, Ben Bacarisse wrote:
I just saw in a tree view you were replying to me.

Yes, when you posted this message:

https://groups.google.com/d/msg/comp.lang.c++/Mpf93i0SEzA/ScCYIP24CQAJ
https://groups.google.com/forum/#!original/comp.lang.c++/Mpf93i0SEzA/ScCYIP24CQAJ

You had it set to "follow-up" only comp.lang.c even though you had
posted the message to both comp.lang.c and comp.lang.c++.

I missed that mistake, so I posted my reply to comp.lang.c++ so it
would be in both groups.

--
Rick C. Hodgin

Bart

unread,
Jul 2, 2018, 2:51:10 PM7/2/18
to
On 02/07/2018 08:02, Wouter Verhelst wrote:
> On 01-07-18 18:47, Bart wrote:
>> On 01/07/2018 14:27, Wouter Verhelst wrote:
>>> On 01-07-18 11:08, Bart wrote:
>>>> I don't use an optimising compiler.
>>>
>>> Your loss.
>>>
>>>> Evaluating 'max(a,b)' when a,b are
>>>> ints, and 'max' is a function (not a macro, not even an inline function)
>>>> involves executing some 14 instructions, including call, return and
>>>> branches [on x64].
>>>
>>> Sure, but on a decent optimizing compiler there won't be a difference --
>>> and no need to complicate the language with extra operators etc.
>>>
>>> To me, that is a huge benefit.
>>
>> Don't forget the cost also of having 'max', if implemented as a macro
>> expanded to that safe macro at each instance of max, of expanding that
>> macro each time max occurs, reparsing the tokens again, building that
>> bit of AST, doing the same optimisations and the same <max> pattern
>> recognition and the conversion into intrinsic versions or into optimised
>> code, over and over again.
>
> Yeah, that's true, but who cares? If the compiler is slow during compile
> time so that it can produce a fast program, I couldn't care less.

Suppose it's unnecessarily slow?

> As far as I'm concerned, the compiler can spend an hour optimizing a
> simple "hello world" program if that means that program's runtime is cut
> in half. Yes, I'm exaggerating, but you get the point.

Do you really need the maximum possible speed for the 99.9% of builds
where you are only testing or debugging? That would be rather a lot of
wasted hours.

But some of these overheads can impinge on build speeds even with
optimisation turned off.

--
bart

David Brown

unread,
Jul 2, 2018, 3:33:04 PM7/2/18
to
I'm missing bool, which could easily be added. "max" doesn't make sense
for the complex types. Are things like size_t and wchar_t always
typedef'ed (or #define'd) to standard types, or could they be different
types outside the standard integer types? I'm not sure how they would
fit with a _Generic like this - but size_t at least should be there.

Any others that I am missing?

> (2) A bug in one of the types it does cover;

Whoops, so there is. I can't claim it was well-tested code - it was
showing an idea, rather than being recommended real code.

>
> (3) Falls down badly in cases where the two arguments have
> different types.
>

Yes, that's a challenge. It will be fine if "a" is at least as big a
type as "b", but not vice versa. The only way I can think of to solve
that problem would be nested _Generic's - perhaps with a utility
_Generic macro for giving a common type for arithmetic operations on two
different types. However it is done, it would be ugly.

Bart

unread,
Jul 2, 2018, 3:47:31 PM7/2/18
to
In every program, in each place it's used, on each compile. That's the
point, and the disadvantage of having large chunks of a language
implemented in itself in a way that requires that prologue code to be
constantly recompiled.

David Brown

unread,
Jul 2, 2018, 4:06:25 PM7/2/18
to
Computers are /good/ at doing repetitive, menial tasks. They are /good/
at re-compiling a "max" macro in a few microseconds. Yes, over time
this re-compilation waste will build up - perhaps to the tune of several
whole seconds of wasted time every single year!

Is that really so bad that it is worth your time implementing a "max"
operator that will only ever exist on one compiler that no one else
uses? Including test suites, documentation, etc.?

There are enough challenges in writing a good compiler, and enough
places where careful thought and design can make a real difference. And
there are enough other things in life to spend your time on. So if you
want to have fun writing a compiler, I'd recommend concentrating on the
important bits (unless you think implementing a max operator is fun - in
which case, go for it! But call it fun, don't pretend it is relevant to
performance). And if you are not having fun writing a compiler, get a
decent optimising compiler, learn how to use it, and stop worrying about
lost microseconds.

Bart

unread,
Jul 2, 2018, 4:44:14 PM7/2/18
to
On 02/07/2018 21:06, David Brown wrote:
> On 02/07/18 21:47, Bart wrote:
>> On 02/07/2018 07:43, David Brown wrote:
>>> On 01/07/18 18:47, Bart wrote:
>>
>>> You have that ass-backwards.  Once you have a good optimiser, you no
>>> longer have to concern yourself about trying to implement something like
>>> "max" efficiently - the compiler will handle it smoothly already.
>>
>> In every program, in each place it's used, on each compile. That's the
>> point, and the disadvantage of having large chunks of a language
>> implemented in itself in a way that requires that prologue code to be
>> constantly recompiled.
>
> Computers are /good/ at doing repetitive, menial tasks.  They are /good/
> at re-compiling a "max" macro in a few microseconds.  Yes, over time
> this re-compilation waste will build up - perhaps to the tune of several
> whole seconds of wasted time every single year!

So, when people say that C++ is slow to compile (compared to equivalent
code in other languages), then what is the reason?

--
bart

Richard

unread,
Jul 2, 2018, 5:41:24 PM7/2/18
to
[Please do not mail me a copy of your followup]

Bart <b...@freeuk.com> spake the secret code
<o8w_C.1146749$Vm6.2...@fx45.am4> thusly:

>So, when people say that C++ is slow to compile (compared to equivalent
>code in other languages), then what is the reason?

The megabytes of transitively included header files that have to be
re-parsed, semantically analyzed and digested every time you compile a
translation unit.

In languages like C# and Java, the necessary information is stashed by
the compiler in a binary form and referenced from "import statements"
(Java) or "using statements" (C#).

C++ Modules are an ongoing effort to bring such a binary specification
of interface declarations to C++ with an eye towards the same speed
improvements.
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

Ben Bacarisse

unread,
Jul 2, 2018, 8:22:06 PM7/2/18
to
I'd expect max to work for pointers too. For any two pointers into an
array of type T, p1 > p2 ? p1 : p2 is well-defined. Equally for
pointers into struct and union objects.

<snip>
>> (3) Falls down badly in cases where the two arguments have
>> different types.
>>
>
> Yes, that's a challenge. It will be fine if "a" is at least as big a
> type as "b", but not vice versa. The only way I can think of to solve
> that problem would be nested _Generic's - perhaps with a utility
> _Generic macro for giving a common type for arithmetic operations on
> two different types. However it is done, it would be ugly.

You can cover a lot of cases by using an expression that converts to a
common type.  For example, rather than (a) as the control clause in the
_Generic expression, you can use 1?(a):(b).
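
(For instance -- a hypothetical variant of the earlier macro, reusing the
max_int/max_uint/... helpers generated by make_max above; the controlling
expression 1 ? (a) : (b) is never evaluated, but its type is the common
type after the usual arithmetic conversions:)

    #define max2(a, b) _Generic(1 ? (a) : (b), \
            int: max_int, unsigned int: max_uint, \
            long: max_long, unsigned long: max_ulong, \
            double: max_double, \
            /* ...and so on for the remaining types... */ \
            long double: max_ldouble \
        )((a), (b))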

--
Ben.

David Brown

unread,
Jul 3, 2018, 1:34:23 AM7/3/18
to
It is partly the complexity of the language (it is a lot harder to parse
than C), but mainly the large header files that need to be re-processed
for every compilation. The problem is not in the couple of lines of
"max" macro definition that gets re-processed - it is in the million
lines of standard library headers, boost, and other template libraries
that get re-processed.

Thus improving this system - by separately built "modules" - is an
important area for the future of C++.

Making "max" into an operator, on the other hand, is utterly irrelevant
from an efficiency viewpoint (either compiler efficiency, or generated
code efficiency).

It's okay to want a "max" operator because you think code would be
neater if it were an operator. General opinion seems to be that it is
not worth the effort, but certainly some people would like it. Some
people would like ways to define their own operators, either as
punctuation combinations (like /\) or as identifiers (like "max"). In
C++, you'd be looking at user definable functions here, rather than
built-in operators.

Arguing that making "max" an operator would improve compile times or the
quality of generated code, however, is not going to impress anyone.

David Brown

unread,
Jul 3, 2018, 1:43:07 AM7/3/18
to
Yes, I thought about that. But I really can't see any type-safe way to
do this with C _Generics. (C++ templates, and gcc extensions allow it.)
Although pointer max is well defined (for appropriate pointers), I
can't see it being a big use-case.

>
> <snip>
>>> (3) Falls down badly in cases where the two arguments have
>>> different types.
>>>
>>
>> Yes, that's a challenge. It will be fine if "a" is at least as big a
>> type as "b", but not vice versa. The only way I can think of to solve
>> that problem would be nested _Generic's - perhaps with a utility
>> _Generic macro for giving a common type for arithmetic operations on
>> two different types. However it is done, it would be ugly.
>
> You can cover a lot of cases by using an expression that converts to a
> common type. For example, rather that (a) as the control clause in the
> _Generic expression, you can use 1?(a):(b).
>

Good idea - I hadn't thought of that. Yes, that should work. Using
"(a) + (b)" would be alternative.


Wouter Verhelst

unread,
Jul 3, 2018, 4:17:13 AM7/3/18
to
On 02-07-18 20:51, Bart wrote:
> On 02/07/2018 08:02, Wouter Verhelst wrote:
>> On 01-07-18 18:47, Bart wrote:
>>> On 01/07/2018 14:27, Wouter Verhelst wrote:
>>>> To me, that is a huge benefit.
>>>
>>> Don't forget the cost also of having 'max', if implemented as a macro
>>> expanded to that safe macro at each instance of max, of expanding that
>>> macro each time max occurs, reparsing the tokens again, building that
>>> bit of AST, doing the same optimisations and the same <max> pattern
>>> recognition and the conversion into intrinsic versions or into optimised
>>> code, over and over again.
>>
>> Yeah, that's true, but who cares? If the compiler is slow during compile
>> time so that it can produce a fast program, I couldn't care less.
>
> Suppose it's unnecessarily slow?

What does it matter? Not like those extra few cycles will kill me.

>> As far as I'm concerned, the compiler can spend an hour optimizing a
>> simple "hello world" program if that means that program's runtime is cut
>> in half. Yes, I'm exaggerating, but you get the point.
>
> Do you really need the maximum possible speed for the 99.9% of builds
> where you are only testing or debugging? That would be rather a lot of
> wasted hours.
>
> But some of these overheads can impinge on build speeds even with
> optimisation turned off.

For debugging, you obviously compile with -O0, so that single-stepping
through your program doesn't make your debugger go all over the place.

Typically, you also use a build system that supports incremental builds
-- like "make" has done since the dawn of time, but you can use other
options too -- and then it doesn't matter anymore how much overhead a
compiler needs, because whether compiling the single file I changed
takes a second or just a tenth of one, I probably haven't finished
scanning the compiler output for issues yet by the time compilation has
finished -- if I've even finished my "move finger up from the <enter>
key" action.

a...@littlepinkcloud.invalid

unread,
Jul 3, 2018, 5:15:32 AM7/3/18
to
In comp.lang.c Keith Thompson <ks...@mib.org> wrote:
> "Alf P. Steinbach" <alf.p.stein...@gmail.com> writes:
>> I had some limited exposure to the PDP-11, but all I
>> remember about the assembly language was that it was peppered with @
>> signs (probably indicating macros),
>
> No, @ (or parentheses) indicate "deferred" addressing modes.
> For example R3 refers to the R3 register, (R3) or @R3 refers to
> the memory location whose address is stored in R3.
>
>> and that the registers were numbered
>> and memory-mapped.
>
> Numbered, yes (R0..R7, where R7 is the PC (Program Counter)), but not
> memory-mapped.

IIRC on earlier PDP-11s they were mapped at 17777700-17777717. You
needed this in order to be able to initialize registers from the front
panel. I think they gave up doing this in some later VLSI versions.

Andrew.


bitsavers.trailing-edge.com/pdf/dec/pdp11/handbooks/PDP11_Handbook1979.pdf

David Brown

unread,
Jul 3, 2018, 5:24:03 AM7/3/18
to
I usually compile with -O1 (with gcc) for debugging - it doesn't do
nearly as much code re-arrangement as -O2 or -Os, but it also avoids the
huge wastage of using the stack for all data and thus gives assembly
code that is a lot easier to read. Of course, this will vary according
to your needs, your target processors, your compiler, and the kind of
debugging you do (I like to see the assembly, and sometimes single-step
it). You also need at least -O1 to get good static warnings.

Sure, many people prefer -O0 for debugging - but it is not "obviously"
the case :-)

>
> Typically, you also use a build system that supports incremental builds
> -- like "make" has done since the dawn of time, but you can use other
> options too -- and then it doesn't matter anymore how much overhead a
> compiler needs, because whether compiling the single file I changed
> takes a second or just a tenth of one, I probably haven't finished
> scanning the compiler output for issues yet by the time compilation has
> finished -- if I've even finished my "move finger up from the <enter>
> key" action.
>

For some projects, I've had "automatically build on save" enabled in my
IDE - rebuilds are so quick that I don't even need to consider them as a
separate action. For bigger projects, I find that it is often linking
that takes the noticeable time, rather than compiling (unless I change a
commonly used header, forcing many files to be re-compiled).


Ian Collins

unread,
Jul 3, 2018, 6:21:10 AM7/3/18
to
On 03/07/18 21:23, David Brown wrote:
>
> For some projects, I've had "automatically build on save" enabled in my
> IDE - rebuilds are so quick that I don't even need to consider them as a
> separate action. For bigger projects, I find that it is often linking
> that takes the noticeable time, rather than compiling (unless I change a
> commonly used header, forcing many files to be re-compiled).

If you are using a platform where it is supported, the gold linker takes
most of the pain out of that step for large projects.

--
Ian.

Bart

unread,
Jul 3, 2018, 7:00:45 AM7/3/18
to
On 03/07/2018 06:34, David Brown wrote:
> On 02/07/18 22:44, Bart wrote:

>> So, when people say that C++ is slow to compile (compared to equivalent
>> code in other languages), then what is the reason?
>>
>
> It is partly the complexity of the language (it is a lot harder to parse
> than C), but mainly the large header files that need to be re-processed
> for every compilation. The problem is not in the couple of lines of
> "max" macro definition that gets re-processed - it is in the million
> lines of standard library headers, boost, and other template libraries
> that get re-processed.
>
> Thus improving this system - by separately built "modules" - is an
> important area for the future of C++.
>
> Making "max" into an operator, on the other hand, is utterly irrelevant
> from an efficiency viewpoint (either compiler efficiency, or generated
> code efficiency).

To use min and max the C++ way requires that these are defined as part
of those libraries and thus their implementation code needs to be
reprocessed for each compile whether it is used or not. And if it is
used, it requires that expansion (instantiation or whatever you call it).

It is this principle, applying to just about everything in the language,
not just max, that might be the reason for those huge headers.

(FWIW I tried compiling 100,000 lines of 'a=MAX(a,b);', where MAX is the
safe macro that someone posted, as C code with gcc -O0, and it crashed.
But by extrapolation it would have taken some 18 seconds. -O3 didn't crash
and was much faster, but still much slower than compiling a=a+b;

Elsewhere, I can compile 100K lines of 'a max:=b' in 0.25 seconds.)

> It's okay to want a "max" operator because you think code would be
> neater if it were an operator.

That too. Slicker compilation is a bonus.

I can see also the attraction of implementing such things outside the
core language, so that they can be applied to new user types more
easily, but that seems to have a cost. However that doesn't stop binary
+ and - being built-in (I assume, in the case of C++) but still being
applied to user types.

> Arguing that making "max" an operator would improve compile times

Not just that by itself, no. Not unless the program consists of nothing
but max's like my example above. But a program /could/ consist largely
of things that do need to be expanded through things in those large
headers, things that /might/ be built-in in another language.

--
bart

Bart

unread,
Jul 3, 2018, 8:04:44 AM7/3/18
to
On 03/07/2018 09:14, Wouter Verhelst wrote:
> On 02-07-18 20:51, Bart wrote:
>> On 02/07/2018 08:02, Wouter Verhelst wrote:
>>> On 01-07-18 18:47, Bart wrote:
>>>> On 01/07/2018 14:27, Wouter Verhelst wrote:
>>>>> To me, that is a huge benefit.
>>>>
>>>> Don't forget the cost also of having 'max', if implemented as a macro
>>>> expanded to that safe macro at each instance of max, of expanding that
>>>> macro each time max occurs, reparsing the tokens again, building that
>>>> bit of AST, doing the same optimisations and the same <max> pattern
>>>> recognition and the conversion into intrinsic versions or into optimised
>>>> code, over and over again.
>>>
>>> Yeah, that's true, but who cares? If the compiler is slow during compile
>>> time so that it can produce a fast program, I couldn't care less.
>>
>> Suppose it's unnecessarily slow?
>
> What does it matter? Not like those extra few cycles will kill me.


but you can use other
> options too -- and then it doesn't matter anymore how much overhead a
> compiler needs, because whether compiling the single file I changed
> takes a second or just a tenth of one,

What is your tolerance threshold? Mine is about half a second beyond
which any delay is annoying and a distraction if I'm in the middle of
development.

My current projects each build from scratch in about 0.2 seconds or
under when generating native code.

All my five main language projects together (three compilers, an
interpreter and assembler/linker all written in static code, about
100Kloc total), can be built from source to new production exes in about
0.7 seconds. One of those five projects is a C compiler.

None of the programs use optimised code; it's all quite poor.
Yet the programs are still fast. None of them are written in C.

I reckon I could double that speed because there are some
inefficiencies, but for the time being it's adequate, since I can build
a project from scratch faster than I can press the Enter key.

(An older compiler in dynamic code could be built from source to
byte-code in some 30msec. It was arranged so that every time I ran it,
the whole thing was recompiled, but you couldn't tell. Now /that/ was
scary.)

> What does it matter? Not like those extra few cycles will kill me.

Why am I saying all this? Because extra cycles do matter to some of us
who like small tools that work more or less instantly. Compilation
considered as a task that converts a few hundred KB of input to a few
hundred KB of output, shouldn't really take that long.

--
bart

David Brown

unread,
Jul 3, 2018, 9:37:14 AM7/3/18
to
On 03/07/18 13:00, Bart wrote:
> On 03/07/2018 06:34, David Brown wrote:
>> On 02/07/18 22:44, Bart wrote:
>
>>> So, when people say that C++ is slow to compile (compared to equivalent
>>> code in other languages), then what is the reason?
>>>
>>
>> It is partly the complexity of the language (it is a lot harder to parse
>> than C), but mainly the large header files that need to be re-processed
>> for every compilation. The problem is not in the couple of lines of
>> "max" macro definition that gets re-processed - it is in the million
>> lines of standard library headers, boost, and other template libraries
>> that get re-processed.
>>
>> Thus improving this system - by separately built "modules" - is an
>> important area for the future of C++.
>>
>> Making "max" into an operator, on the other hand, is utterly irrelevant
>> from an efficiency viewpoint (either compiler efficiency, or generated
>> code efficiency).
>
> To use min and max the C++ way requires that these are defined as part
> of those libraries and thus their implementation code needs to be
> reprocessed for each compile whether it is used or not. And if it it is
> used, it requires that expansion (instantiation or whatever you call it).

Yes and no. For the std::min and std::max functions, you need to
#include <algorithm>. If you don't want anything from that header, you
don't include it and there is no cost. And if you do include that
header, then the cost for min and max is negligible - they are very
simple templates.

A reasonably complex C++ file often includes a good number of standard
library files. It is not uncommon for the compiler to be chewing
through several hundred thousand lines before getting to the user's
code. This /is/ an issue with C++ compilation - it is a known issue,
and work is being done to improve the matter with C++ modules.
(Pre-compiled headers can help, but have a lot of limitations.)

The three or four lines for "max" and "min" templates in this lot do
not matter. It is /irrelevant/ for compilation time. It is
/irrelevant/ for generating efficient code. It is a /good/ thing from
the point of view of language maintenance and development - the more of
the language that is in the library rather than the core language, the
easier it is to add new features, improve existing features, deprecate
bad features, and generally develop the language.

>
> It is this principle, applying to just about everything in the language
> not just max, then might be the reason for those huge headers.
>

There are many thousands of types, functions and templates in the C++
standard library. Do you think these should all be built-in parts of
the language?

> (FWIW I tried compiling 100,000 lines of 'a=MAX(a,b);', where MAX is the
> safe macro that someone posted, as C code with gcc -O0, and it crashed.
> But by extrapolation would have taken some 18 seconds. -O3 didn't crash
> and was much faster, but still much slower than compiling a=a+b;
>
> Elsewhere, I can compile 100K lines of 'a max:=b' in 0.25 seconds.)
>

If you find a use for a file consisting of 100,000 lines of "a = MAX(a,
b)", then please let us know.

>> It's okay to want a "max" operator because you think code would be
>> neater if it were an operator.
>
> That too. Slicker compilation is a bonus.

No, the slicker compilation is irrelevant.

>
> I can see also the attraction of implementing such things outside the
> core language, so that they can be applied to new user types more
> easily, but that seems to have a cost.

The attraction is real - the cost is not.

> However that doesn't stop binary
> + and - being built-in (I assume, in the case of C++) but still being
> applied to user types.

You have to have /something/ built in to the language. You need /some/
functions, operators, types as your fundamentals on which to build. You
pick these based on having the most necessary, common and convenient
features in the language - with other parts in the library. Thus
although multiplication could, in theory, be defined in terms of looped
addition and a smart optimiser, it makes sense to include such a
common operation in the core language.  But "max" is rarely used, easily
defined in a library, and easily optimised by the compiler - there are
no advantages in putting it in the core.

If and when some of the more advanced proposals for C++ make it to the
language - reflection, modules, metaclasses - then a number of parts of
what is currently in the core language, could be done in libraries.
"class", "struct", "union", "enum" could all be part of a library rather
than being embedded in the language, leading to greater flexibility
(plus the downside - risk of greater complication and confusion). With
modules, this would not result in greater compile time.

>
>> Arguing that making "max" an operator would improve compile times
>
> Not just that by itself, no. Not unless the program consists of nothing
> but max's like my example above. But a program /could/ consist largely
> of things that do need to be expanded through things in those large
> headers, things that /might/ be built-in in another language.
>

I am sure there are things that /could/ be made more efficient by having
them in the core, rather than in the library - maybe even things that
would have a noticeable effect on efficiency (either of compilation, or
of the generated code). Compiler implementations certainly do this sort
of thing, with intrinsic functions or builtin functions. "max" is not a
serious candidate for such a thing.


Rick C. Hodgin

unread,
Jul 3, 2018, 10:18:49 AM7/3/18
to
On 7/3/2018 8:04 AM, Bart wrote:
> All my five main language projects together (three compilers, an interpreter
> and assembler/linker all written in static code, about 100Kloc total), can be
> built from source to new production exes in about 0.7 seconds. One of those
> five projects is a C compiler.

Impressive. Using Visual Studio and a C++ compiler to compile my
mostly C-like code, even my modest programs take a couple seconds.
My Visual FreePro, Jr. app is about 60K lines of code and it takes
about 5-7 seconds to compile in debug mode, and about 10-12 seconds
in release mode (optimized code).

> None of the programs involve use optimised code; it's all quite poor. Yet the
> programs are still fast. None of them are written in C.

One of my tests for developing CAlive required creating a little parser
that converted C code to x86 assembly. Compared to Microsoft's Debug
mode code generation, my parser used about 50% more temporary variables,
most of which ultimately went unused, but I had to have a simple model
for the test parser, and it also used a fixed assembly model (no
optimization). It generated about 2x the number of asm instructions as
the Debug mode code by MS's Visual C/C++ compiler. But it too was still
very fast, and the entire parser was around 2,500 lines and could work
with any valid C expression, except I didn't have struct, pointer, or
address-of support.

My goals have never been for optimization as a primary focus. Today,
computers are fast enough that even the poor code can run fast enough
for nearly all tasks. My goals are in making developers more productive
and getting more code generated and debugged in less time.

With CAlive, I'm looking to get everything syntactically correct in my
first release (and a subsequent maintenance release to fix all the bugs
I missed in my testing), and then to turn my attention to optimization
after everything's working. I've asked Supercat to help me, but have
not hear a definitive yes or no yet. I figure that will be in the 2021-
2022 timeframe, with CAlive being released in the late 2019 timeframe.

> I reckon I could double that speed because there are some inefficiencies, but
> for the time being it's adequate, since I can build a project from scratch
> faster than I can press the Enter key.

My little 2,500 line parser issues were due to hard assumptions I had
to make in the generated code. I had assumed all variables are 32-bit
signed integers, for example, so I had no real types. And I generated
my asm code like this:

; a = b + c;
mov eax,b
mov ebx,c
add eax,ebx
mov t0,eax ; a = t0

mov eax,t0
mov a,eax
mov t1,eax ; t1

So there are obvious inefficiencies there that a simple optimizer would
remove:

; a = b + c
mov eax,b
add eax,c
mov a,eax

It's doable, it would just require a more complex code generation
algorithm. The one I had was literal printf() statements. :-)

> (An older compiler in dynamic code could be built from source to byte-code in
> some 30msec. It was arranged so that every time I ran it it, the whole thing
> is recompiled, but you couldn't tell. Now /that/ was scary.)
>
> > What does it matter? Not like those extra few cycles will kill me.
>
> Why am I saying all this? Because extra cycles do matter to some of us who
> like small tools that work more or less instantly. Compilation considered as
> a task that converts a few hundred KB of input to a few hundred KB of output,
> shouldn't really take that long.

I don't worry too much about how long optimization code generation
takes. If it takes five minutes, but generates faster code, to me
that's not an issue because you can do all of your development in
debug mode and it should be fast enough. And where it's not, take
the handful of algorithms slowing you down and compile them separately
in optimized mode and link them in that way to the rest of your debug
code.

But, I do think the debugging phase should be nearly instantaneous,
and a primary focus of CAlive is to have a continuous compiler. I
want the code being typed to be compiled continuously, updating the
running ABI in memory continuously, so that you are working on a live
system, using the LiveCode ABI that CAlive generates for. And remember
that CAlive introduces the concept of an inquiry, which is a state
where the code branches to a suspend unit or to the debugger when it
encounters code that is in error (source code that produced errors
during compilation).  It won't crash.  It will suspend so the developer
can bring up the code, fix it, and keep going.

-----
Things are moving forward positively toward CAlive. The 2019-2020
release date has a glimmer of hope. My target is to release it on
Christmas 2019 (my gift to Jesus Christ, and to all mankind).

We'll see though. Lots left to do, and I could get hit by a bus
or have a heart attack or stroke before then. You never know what
tomorrow holds ... which is why you must get right with Jesus today.
It's important.

--
Rick C. Hodgin

Scott Lurndal

unread,
Jul 3, 2018, 10:59:37 AM7/3/18
to
Who says that?

Keith Thompson

unread,
Jul 3, 2018, 12:02:31 PM7/3/18
to
a...@littlepinkcloud.invalid writes:
> In comp.lang.c Keith Thompson <ks...@mib.org> wrote:
[...]
>> Numbered, yes (R0..R7, where R7 is the PC (Program Counter)), but not
>> memory-mapped.
>
> IIRC on earlier PDP-11s they were mapped at 17777700-17777717. You
> needed this on order to be able to initialize registers from the front
> panel. I think they gave up doing this in some later VLSI versions.

I should have mentioned that there are/were different versions of the
PDP-11. The version I worked with did not have memory-mapped
registers as far as I know. Others could have.