I am not sure if that would work, as the # symbol is a preprocessor marker.
If this is possible, then what would they do? Please give examples.
Nothing stops the compilers from generating inline code for functions.
They already do that for lots of functions:
https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html
But as the compilers already have built-in support for the rotate
instructions, all you have to do is standardize the name, not invent new
operators.
--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposals+unsubscribe@isocpp.org.
To post to this group, send email to std-pr...@isocpp.org.
To view this discussion on the web visit https://groups.google.com/a/isocpp.org/d/msgid/std-proposals/CADvuK0LGrYEXfXLgOSKMvjJfGgJEA%2BLJDpk87ba4b-w6ZwkLWw%40mail.gmail.com.
On Wednesday, 22 November 2017 01:57:54 PST gb2...@gmail.com wrote:
> I have not seen <#, <#=, ># and >#= be used thus far so I want to suggest
> these as the operators in the next set of standards.
No need. Compilers are smart enough to detect a rotation. So a macro like:
#define ROTL(x, b) (uint64_t)(((x) << (b)) | ((x) >> (64 - (b))))
uint8_t x = 1, b = 2;
uint8_t val1 = (((x) << (b)) | ((x) >> ((sizeof(x)*CHAR_BIT) - (b))));
uint8_t val2 = x ># (b * 2);
uint8_t val3 = (uint8_t)rotl( x, b * 3 );
Now tell me: if you were to simply glance at this code after the operator
became commonplace, and you had no prior context, which value would you
understand quickest?
On Saturday, 25 November 2017 04:08:07 PST Andrey Semashev wrote:
> > The high-level language doesn't need that because the compilers are smart
> > enough to notice the pattern of two-shifts-and-or as a rotation and write
> > assembly accordingly.
>
> Well, the compilers were not always that smart, and if we had the
> standard rotation functions or operators from the start, people wouldn't
> have to write assembler code and compilers wouldn't need to be taught to
> recognize certain code patterns. Reliability of this recognition has
> always been and will be a question of QoI, i.e. something that cannot be
> relied on.
True, chicken-and-the-egg: if the operators had been available, perhaps they
would have been used more often.
But my assertion remains: if they had really been needed, they'd have been
standardised and added to some language by now.
For now, they only exist as
intrinsics in <x86intrin.h>.
> I'm not arguing for operators or functions approach here. I'm saying
> that dedicated rotation operations are long overdue in C and C++.
I dispute that. They can only be overdue if you needed them and didn't have
them. Since you can do rotation right now, you have them.
On Saturday, 25 November 2017 13:17:18 PST Nicol Bolas wrote:
> > But my assertion remains: if they had really been needed, they'd have been
> > standardised and added to some language by now.
>
> One could make the same case about `optional`, for example. Or `variant`.
> Or all of `filesystem`. Or a C++ file IO API that isn't stupid. Or any
> number of other things that the language/library clearly needs but we don't
> have.
Sorry, that's not the same. I was setting the bar for adding an operator.
If somebody wants rotary shifters as operators, the usual shift operators
can be overloaded. One just needs a new type for that, e.g.:
x << std::rotary(y)
Or, perhaps,
std::rotary(x) << y
That is probably in response to the general thread, not specifically to
your message.
Cheers,
V.
On Sunday, 26 November 2017 14:01:35 UTC, Viacheslav Usov wrote:
> If somebody wants rotary shifters as operators, the usual shift operators
> can be overloaded. One just needs a new type for that, e.g.:
> x << std::rotary(y)
> Or, perhaps,
> std::rotary(x) << y
> That is probably in response to the general thread, not specifically to
> your message.
Is this not natural in a C++ proposals forum?
Besides, that is not true. std::rotary() (with std:: replaced
appropriately) can equally well work in C, where it would require,
depending on the form chosen, at least one new built-in type and new
semantics for the shift operators for the new type(s).
You have trouble finding it because they don't discuss over the Internet.
You should prepare a paper and submit it for the next mailing. You may need to
attend one of their meetings to present and defend your paper.
See http://www.open-std.org/jtc1/sc22/wg14/ for more information.
Never trust anything that says "C/C++". It is usually a lie.
I said nothing about "overwriting/overloading of operators". I said "new semantics for the shift operators for the new type(s)".
Getting maximum speed and minimum memory use at the same time is an
exceptional occurrence, never the norm, no matter what language is used.
If we treated a computer language like a human language, we'd never write a
good compiler and much less cross-platform code, because everyone would write
code differently, in different dialects.
Sorry, but you may be wrong there. The fact that we add an operator changes
the language grammar, which may have unintended consequences. We've already
discussed the fact that there are no good symbol characters available: # is
used by the preprocessor, $ is used in certain implementations as an
identifier character, and characters outside the basic character set are
used in certain language extensions.
Even if you find one, there's a chance that the sequence of symbols will
already have been used in someone's code, and the new operator would
silently change its meaning.
It's doable to add new operators, just see the spaceship operator being added.
But there needs to be a really good reason to do so, including an explanation
of why a function isn't sufficient to solve the problem.
Arbitrarily declaring that people who don't agree with you are a priori wrong, out-dated, or whatever is not a viable approach for convincing anyone of the validity of your position.
On Mon, Nov 27, 2017 at 10:12 AM, Nicol Bolas <jmck...@gmail.com> wrote:
> Arbitrarily declaring that people who don't agree with you are a priori
> wrong, out-dated, or whatever is not a viable approach for convincing
> anyone of the validity of your position.
Then again, there's that Max Planck paraphrased quote, "Science advances one funeral at a time."
I think that there is a reasonable possibility that the C++ Standard committee is a victim of epistemic
closure - the same people meeting time after time, agreeing with each other on certain priorities that
perhaps the C++ community at large would not agree with.
Meanwhile, even simple fixes to obvious
errors remain unaddressed draft after draft (such as the number of bits required for an enumeration
type in [dcl.enum]), while more ways are found to make programs have undefined behavior so that
optimizationists can point to the beautiful code their compilers generate by ignoring what the
programmers actually wanted their programs to do.
Yeah, the C++ community at large is totally unhappy with prioritizing Concepts,
Modules, Reflection, operator<=> and comparison operator generation, structured
binding, and so forth. The committee needs to spend more time dealing with making
C++ more like C and turning the "object model" into just some memory where stuff
maybe kinda exists until it doesn't.
It's funny; I can't seem to recall a single instance of C++ adding new UB for this purpose.
Oh sure, there have been lots of clarifications of rules that were unclear. But there has
been no change to the object model since C++98; only having more detail in explaining
how it works. All of the problems you cite about C++'s object model either
are in C++98 or are due to defect resolutions to wording in C++98. The only
difference is that the model has been better specified, so you finally
noticed what is and is not UB.
I wasn't aware that Linus Torvalds was a member of the *C++* community. I
thought he had decided that C++ was crap long ago.
And even if he was, this is hardly evidence for the C++ community at large
being upset with the standards committee over this.
Also, you haven't explained what this has to do with *prioritization* from
the committee. I rather suspect that, given the choice between "modules"
and "less UB", most C++ programmers will pick "modules".
Lastly, please explain how those compiler changes from the GCC bug report
are the result of "clarifications" of C++98? Show me the wording in C++98
that made calling member functions with a NULL pointer well-defined
behavior. Because if you can't, then these were not due to "clarifications"
of C++98; they've *always been there*.
> When a compiler writer decides that (a + 1 < a) is unilaterally false,
> even though a programmer has written a test for that, that compiler
> writer has made a grave error, not in interpreting the language but in
> the service provided to users of the compiler.
>
> > It's funny; I can't seem to recall a single instance of C++ adding new
> > UB for this purpose. Oh sure, there have been lots of *clarifications*
> > of rules that were unclear. But there has been no change to the object
> > model since C++98; only having more detail in explaining how it works.
> > All of the problems you cite about C++'s object model either are in
> > C++98 or are due to defect resolutions to wording in C++98. The only
> > difference is that the model has been better specified, so you finally
> > noticed what is and is not UB.
>
> It's not that I notice. It's that compiler vendors are using these
> "clarifications" to break programs - they deliberately miscompile what
> the programmer has written in ways that are inconsistent with the way
> they compiled programs before, thereby silently breaking them and
> leaving users scrambling to find out why mysterious failures are
> happening. Whether or not these possibilities have always been present,
> it's the clarifications that make them manifest.
Are you so sure about that? If so, prove it.
Give an example of code that used to "work" that doesn't now, based *solely*
on wording from C++11 and later. Where a "clarification" is the difference
maker, rather than simply when compiler writers decided to start making
changes.
FYI: your example `if(a + 1 < a)` is not such a case. C89 permitted
compilers to assume that was false for signed integers.
Rotations are rare - they only turn up in a few niche types of code, such as
cryptography algorithms. And usually such code is so complicated and
requires such time and thought to understand, that any time spent
understanding a rotation operator or function is negligible.
There is an additional reason for making signed overflow undefined
behaviour - there is no single sensible behaviour that could be picked.
The mistake some people make is to think that a C (or C++) "int" is just
a signed type matching the cpu's registers.
Secondly, compilers do /not/ cause programs to have undefined behaviour.
All that has changed is that the undefined behaviour coincidentally matched
the programmer's expectations in some cases, and not in other cases.
> Others believe that, since the language says you cannot do something,
> compilers are free to assume you did not do it.
Since that is the belief shared by the people who defined the language and
the people who write the compilers, it is clearly the important one!
On Tue, Nov 28, 2017 at 7:26 AM, David Brown <da...@westcontrol.com> wrote:
> There is an additional reason for making signed overflow undefined
> behaviour - there is no single sensible behaviour that could be picked.

There is no need to pick a single sensible behavior.

> The mistake some people make is to think that a C (or C++) "int" is just
> a signed type matching the cpu's registers.

The mistake some people make is to think that it's not.

> Secondly, compilers do /not/ cause programs to have undefined behaviour,

Yes, they do. C leaves certain operations undefined because it could be
problematic to implement them in a single way on hardware that doesn't
support it. It's the compilers that choose not to provide a definition
suitable to their implementation. On nearly every common processor, there
is no reason that signed wraparound should not be the result of signed
integer arithmetic.

> All that has changed is that the undefined behaviour coincidentally
> matched the programmer's expectations in some cases, and not in other
> cases.
The truly pernicious aspect of this is that the compilers have increasingly
and silently stopped matching expectations on pre-existing programs. They
are doing that for optimizationism - making previously defined (by the compiler)
operations deliberately undefined so that they can show off clever tricks.
> > Others believe that, since the language says you cannot do something,
> > compilers are free to assume you did not do it.
>
> Since that is the belief shared by the people who defined the language
> and the people who write the compilers, it is clearly the important one!
As I said, epistemic closure. There are very important users who are furious
over this sort of thing, but they are not listened to:
<https://lwn.net/Articles/511259/>
<http://yarchive.net/comp/linux/timer_wrapping_c.html>
OK, so we've left the realm of "the standard is broken" and moved into the realm of "the compiler is broken because it doesn't do the thing I expect it to do."
Let us now go back to where this all started:

> I think that there is a reasonable possibility that the C++ Standard
> committee is a victim of epistemic closure - the same people meeting time
> after time, agreeing with each other on certain priorities that perhaps
> the C++ community at large would not agree with.

Since your problem is, by your own admission, not with the C++ standards
committee, would you like to retract this statement?
OK, you're effectively saying that the optimizations they're making are not
actually "optimizing" anything. That they're not making real code faster.
That they're just "showing off clever tricks" that are presumably useless.

What evidence do you have for this? Prove that such optimizations don't
improve the performance of real code.

Because if these "clever tricks" actually improve the performance of real
code, then they are optimizations, not mere "clever tricks". And since the
point of C and C++ is to be fast, making compiled C and C++ fast is not
just the right of compiler writers, it's their job.
C and C++ compilers do not belong to Linus Torvalds. Or to any one person.
The # character will interfere regardless of position, because inside a
macro replacement list # is the stringizing operator and must be followed
by a macro parameter. So
#define ROL(x, y) x <# y
will expand to
x < "y"
https://gcc.gnu.org/onlinedocs/gcc-3.4.2/gcc/Min-and-Max.html
Deprecated, though.
> I read further down about how pow() should have its own operator, the
> immediate thought I had there was maybe ** & **= would work, old compilers
> which aren't supposed to support the new standard would just fail to
> compile it anyway. As for the rotl/rotr thing I only don't like them
> because of A: how one needs to cast to be sure they get the part of the
> integer they want when they use it, nor can they be sure the bits
> rotate in the same way they would rotate on smaller/larger integer (e.g.
> 0x1 becoming 0x80000000 instead of 0x80) & B: One cannot apply the rotation
> directly to the integer they are working with (like how "a += 1" is faster
> than "a = a + 1")
Faster to type, but not to execute.
But I did not understand your reason for not liking. What cast is necessary?
Can you give an example where using rotl without a cast would result in a
surprise or incorrect result?
> What should, according to you, the following produce:
>
> 1U << functionThatReturns33();
>
> Please answer bearing in mind that the SHL operation in assembly is only
> specified to work with values less than 32. Any higher value may shift
> everything, nothing, or shift by the modulo 32.
Please answer in two cases:
1) regular context, the compiler could not inline functionThatReturns33()
2) constant expression context, the compiler can inline and did constant
propagation
Suppose there are. What now, should it be undefined?
In other words, the same assembly code can have different behaviour depending
on the processor used at runtime. How can the compiler implement in a
constexpr context the behaviour that isn't defined?
Which runtime? Remember, the same assembly can produce different results
depending on the processor.
Please give me an answer: if I compile for 16-bit real-mode x86, what should
1U << 33
be?
This is a *relevant* case, since early non-UEFI boot code still runs 16-bit
real-mode, with Intel CPUs released in 2017 (the Quark line of MCUs).
IA-32 Architecture Compatibility
The 8086 does not mask the shift count. However, all other IA-32 processors
(starting with the Intel 286 processor) do mask the shift count to 5 bits,
resulting in a maximum count of 31. This masking is done in all operating
modes (including the virtual-8086 mode) to reduce the maximum execution
time of the instructions.
This means that the behavior of the code is undefined because its result is different
on every target platform, including different numeric results and traps.
This is not possible, because it means that the program could now not only
run differently, but also be *compiled* differently depending on the target
architecture.
And what if the CPU issues a trap in this situation? Do you expect the compiler to crash or something?
There is no UB allowed in constant expressions, precisely for this reason.
So if you want some particular behavior, you have to define it, and it should
be a fixed single definition, not just "do whatever the CPU does". This, in turn,
means blessing one implementation and penalizing all others to the point that
a particular shift instruction is no longer possible to use.