Thanks,
Nate
> on Sat Apr 21 2012, Nathan Ridge <zeratul976-AT-hotmail.com> wrote:
>
>> Slightly off-topic: are there any plans to add Chaos to Boost?
>
> +1
As some sort of technological preview? It is not portable enough to be
used within Boost, though perhaps a slew of regression failures will cause
that to change (unlikely). <rant>MS, in particular, is too busy taking it
upon themselves to "ensure C++ is relevant" to prioritize implementing the
language.</rant> Also, one of the principles of Chaos is to have
absolutely zero compiler workarounds, and that would have to stay if it
was ever part of Boost.
Regards,
Paul Mensonides
>________________________________
> From: Paul Mensonides <pmen...@comcast.net>
>To: bo...@lists.boost.org
>Sent: Saturday, April 21, 2012 10:17 PM
>Subject: Re: [boost] [preprocessor] Sequences vs. All Other Data Structures
>
>On Sat, 21 Apr 2012 17:42:25 -0400, Dave Abrahams wrote:
>
>> on Sat Apr 21 2012, Nathan Ridge <zeratul976-AT-hotmail.com> wrote:
>>
>>> Slightly off-topic: are there any plans to add Chaos to Boost?
>>
>> +1
>
>As some sort of technological preview? It is not portable enough to be
>used within Boost, though perhaps a slew of regression failures will cause
>that to change (unlikely). <rant>MS, in particular, is too busy taking it
>upon themselves to "ensure C++ is relevant" to prioritize implementing the
>language.</rant> Also, one of the principles of Chaos is to have
>absolutely zero compiler workarounds, and that would have to stay if it
>was ever part of Boost.
There are workarounds for most of the library. How well do active
arguments work with MSVC? Are there possible workarounds?
I know I have used the PP_DEFER macros in MSVC before, and they worked
quite well. I used them to work around the fact that MSVC implements
the "and, not, or, etc." keywords as macros, which can make it difficult
to concatenate them or to detect whether the keyword was there or not.
So the macros by themselves would expand to their appropriate operator,
but if they were expanded inside the PP_CONTEXT() macro, they would
expand to something else. Like this:
#define not PP_CONTEXT_DEFINE(PP_MSVC_NOT, !)

not             // expands to !
PP_CONTEXT(not) // expands to PP_MSVC_NOT
So I used the PP_DEFER macro to delay the macro expansion for one
scan. It then detects whether PP_CONTEXT is being expanded recursively
and, if so, chooses the alternate macro. A PP_EXPAND is needed as well.
It can work through several scans, too, by having a nested PP_DEFER for
each scan (and a corresponding PP_EXPAND).
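For reference, here is a minimal sketch of that deferral idiom, using
the PP_EMPTY/PP_DEFER/PP_EXPAND definitions from my code further down
in this thread (the ID macro is just for illustration):

#define PP_EMPTY(...)
#define PP_EXPAND(...) __VA_ARGS__
#define PP_DEFER(m) m PP_EMPTY()

#define ID() 42

PP_DEFER(ID)()            // the first scan leaves: ID () (not yet expanded)
PP_EXPAND(PP_DEFER(ID)()) // the extra scan then yields: 42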
I actually got the idea for this from the CHAOS_PP_RAIL and
CHAOS_PP_WALL macros, which are used for a different purpose.
But anyway, I wonder if it's even possible (with workarounds) to
implement recursion from Chaos in MSVC?
> But anyway, I wonder if it's even possible (with workarounds) to
> implement recursion from Chaos in MSVC?
The point is that I don't care. By definition, Chaos = NO WORKAROUNDS.
Regards,
Paul Mensonides
>> >> Slightly off-topic: are there any plans to add Chaos to Boost?
>> >
>> > +1
>>
>> As some sort of technological preview? It is not portable enough to be
>> used within Boost, though perhaps a slew of regression failures will
>> cause that to change (unlikely). <rant>MS, in particular, is too busy
>> taking it upon themselves to "ensure C++ is relevant" to prioritize
>> implementing the language.</rant> Also, one of the principles of Chaos
>> is to have absolutely zero compiler workarounds, and that would have to
>> stay if it was ever part of Boost.
>
> I don't see why one compiler's lack of standards-conformance should
> prevent a useful library from becoming part of Boost.
Because it is not just MSVC that's the problem, and I know what the
tendency will be. This will work with compiler XYZ with *just* this
little workaround.... Any workaround whatsoever in Chaos is absolutely
unacceptable to me. It ruins the very point of the library.
Regards,
Paul Mensonides
>> But anyway, I wonder if it's even possible (with workarounds) to
>> implement recursion from Chaos in MSVC?
>
> The point is that I don't care. By definition, Chaos = NO WORKAROUNDS.
I understand that Chaos won't implement workarounds. I was thinking more
of forking Chaos and trying to implement it in MSVC. Would that be a
possibility? And with your knowledge of Chaos's implementation and MSVC's
limitations, do you think it would even be remotely feasible?
Thanks,
Paul Fultz II
The library could be proposed to Boost with the explicit understanding
that it is intended to work only with fully standards-conforming
preprocessors. In the long run its presence in Boost might even contribute
to putting pressure on vendors of non-conformant preprocessors to get
their act together.
Regards,
Nate
>> > I don't see why one compiler's lack of standards-conformance should
>> > prevent a useful library from becoming part of Boost.
>>
>> Because it is not just MSVC that's the problem, and I know what the
>> tendency will be. This will work with compiler XYZ with *just* this
>> little workaround.... Any workaround whatsoever in Chaos is absolutely
>> unacceptable to me. It ruins the very point of the library.
>
> The library could be proposed to Boost with the explicit understanding
> that it is intended to work only with fully standards-conforming
> preprocessors. In the long run its presence in Boost might even
> contribute to putting pressure on vendors of non-conformant
> preprocessors to get their act together.
I doubt the latter would happen. Essentially, because Chaos cannot be
used when targeting VC++, Boost cannot itself use it. Without Boost
itself using it, the target audience is too small. Now, if Boost said
"screw VC++" and used Chaos anyway, that *might* do it. However, it
would break Boost on more compilers than just VC++, and then we'd more
likely just get a Python 2.x vs. 3.x situation, except with Boost
(apologies, Dave, if I'm mischaracterizing the current Python state).
Now, IFF there is a place in Boost for a library like Chaos, which
currently contains no workarounds and henceforth must not contain
workarounds, then I'd be willing. However, there is no current analog of
that in Boost, nor any Boost policy addressing that type of requirement,
nor any means of enforcement (even a would-be automated enforcement such
as grepping for _MSC_VER (etc.) wouldn't work, because even slightly
rearranging code in a less than (theoretically) ideal way is still a
workaround).
Regards,
Paul Mensonides
If someone could prepare *minimized* test cases demonstrating the VC bugs that are holding you back here, I could file compiler bugs.
STL
One of the main issues is that Microsoft follows the C89 standard when
it comes to the preprocessor. That standard was superseded in both C and
C++ a long time ago. So what does one say to a company that follows a
very old standard and defends the inadequacy of its preprocessor by
saying that, according to that standard, it is processing preprocessor
symbols correctly? It is like a company saying that it implements a
computer language of 23 years ago and that, despite the fact that the
language has changed drastically since that time, it is proud not to
keep up with those changes.
A second issue is that quite often simple test cases are unavailable,
since the sort of failures exhibited by Microsoft's preprocessor occur
when one is attempting to use the facilities of the Boost PP library,
and that is often at the cutting edge of what one can do with the
preprocessor--aside from Chaos, which assumes a 100% compliant
preprocessor.
I will try to provide some simple test cases sometime in the coming week
or weekend, although perhaps Paul can jump in here and do so more easily.
> On 4/22/2012 8:43 PM, Stephan T. Lavavej wrote:
>> If someone could prepare *minimized* test cases demonstrating the VC
>> bugs that are holding you back here, I could file compiler bugs.
>
> One of the main issues is that Microsoft follows the C89 standard when
> it comes to the preprocessor. That standard was superseded in both C
> and C++ a long time ago. So what does one say to a company that follows
> a very old standard and defends the inadequacy of its preprocessor by
> saying that, according to that standard, it is processing preprocessor
> symbols correctly? It is like a company saying that it implements a
> computer language of 23 years ago and that, despite the fact that the
> language has changed drastically since that time, it is proud not to
> keep up with those changes.
It isn't that. VC++ doesn't even implement the preprocessor correctly for
C89. Essentially, the macro expansion algorithm appears to be
fundamentally broken. One small example:
#define A() 123
#define B() ()
A B() // should expand to A() (and does)
#define C() A B()
C() // should *still* expand to A() (but instead expands to 123)
The entire methodology is borked here. Leaving blue-paint aside (and
argument processing aside), macro expansion should work like a stream
editor.
+ + C ( ) + +
^
+ + C ( ) + +
  ^
+ + C ( ) + +
    ^
+ + A B ( ) + +
    ^
THIS is "rescanning and further replacement." Rescanning and further
replacement is *not* a recursive process that occurs "in the replacement
list."
+ + A B ( ) + +
      ^
+ + A ( ) + +
      ^ // note: not at 'A', scanning doesn't go backward
+ + A ( ) + +
        ^
+ + A ( ) + +
          ^
+ + A ( ) + +
            ^
+ + A ( ) + +
             ^
In the above, everything to the left of the caret is output and everything
to the right is input.
For the top-level source, that output goes to the underlying language
parser (or to the preprocess-only output). The only time it isn't is
when an argument to a macro--one which is *actually used* at least once
in the replacement list without being an operand of # or ##--is
processed. In that case, a recursive scan of the sequence of tokens
making up the argument is necessary, and the output of that scan is
redirected to a buffer (depending on exact implementation methodology)
for future substitution into a copy of the replacement list.
Things get more complicated when dealing with blue paint and disabling
contexts. A disabling context is a range associated with a macro name
(that exists only during a scan for macro expansion--i.e. if that scan is
redirected to a buffer for substitution, the result in the buffer does not
contain annotations representing the disabling context). If an identifier
token which matches the macro name is found in the range during the scan,
it is "painted blue." A painted token cannot be used as the name of a
macro in a macro invocation. However, while the context that causes blue
paint is transient, blue paint on a token is not. It remains even when
the scan output is redirected for substitution. Note that substitution
ultimately means that the sequence of tokens in the argument is going to
get scanned again. When that happens, the contexts from the first scan
are gone, but any blue paint on particular tokens is not.
As a concrete example,
#define A(x) #x B(x) C(x) D(x)
#define B(x) x
#define C(x)
#define D(x) #x A(x) B(x)
A(B(1))
Where M' = painted M token, { ... } is the hideset H (which may be flags
in the symbol table), +M is a virtual token which means H := H | M, and -M
is a virtual token which means H := H \ M...
[begin]
A ( B ( 1 ) ) EOF
^ {}
[begin]
B ( 1 ) EOF
^ {}
[begin]
1 EOF
^ {}
1 EOF
  ^ {}
[end]
+B 1 -B EOF
^ {}
1 -B EOF
^ { B }
1 -B EOF
  ^ { B }
1 EOF
  ^ {}
[end]
+A "B(1)" B ( 1 ) C ( 1 ) D ( 1 ) -A EOF
^ {}
"B(1)" B ( 1 ) C ( 1 ) D ( 1 ) -A EOF
^ { A }
"B(1)" B ( 1 ) C ( 1 ) D ( 1 ) -A EOF
       ^ { A }
[begin]
1 EOF
^ { A }
1 EOF
  ^ { A }
[end]
"B(1)" +B 1 -B C ( 1 ) D ( 1 ) -A EOF
       ^ { A }
"B(1)" 1 -B C ( 1 ) D ( 1 ) -A EOF
       ^ { A, B }
"B(1)" 1 -B C ( 1 ) D ( 1 ) -A EOF
         ^ { A, B }
"B(1)" 1 C ( 1 ) D ( 1 ) -A EOF
         ^ { A }
"B(1)" 1 +C -C D ( 1 ) -A EOF
         ^ { A }
"B(1)" 1 -C D ( 1 ) -A EOF
         ^ { A, C }
"B(1)" 1 D ( 1 ) -A EOF
         ^ { A }
[begin]
1 EOF
^ { A }
1 EOF
  ^ { A }
[end]
"B(1)" 1 +D "1" A ( 1 ) B ( 1 ) -D -A EOF
         ^ { A }
"B(1)" 1 "1" A ( 1 ) B ( 1 ) -D -A EOF
         ^ { A, D }
"B(1)" 1 "1" A ( 1 ) B ( 1 ) -D -A EOF
             ^ { A, D }
"B(1)" 1 "1" A' ( 1 ) B ( 1 ) -D -A EOF
             ^ { A, D }
"B(1)" 1 "1" A' ( 1 ) B ( 1 ) -D -A EOF
                ^ { A, D }
"B(1)" 1 "1" A' ( 1 ) B ( 1 ) -D -A EOF
                  ^ { A, D }
"B(1)" 1 "1" A' ( 1 ) B ( 1 ) -D -A EOF
                    ^ { A, D }
[begin]
1 EOF
^ { A, D }
1 EOF
  ^ { A, D }
[end]
"B(1)" 1 "1" A' ( 1 ) +B 1 -B -D -A EOF
                      ^ { A, D }
"B(1)" 1 "1" A' ( 1 ) 1 -B -D -A EOF
                      ^ { A, B, D }
"B(1)" 1 "1" A' ( 1 ) 1 -D -A EOF
                        ^ { A, D }
"B(1)" 1 "1" A' ( 1 ) 1 -A EOF
                        ^ { A }
"B(1)" 1 "1" A' ( 1 ) 1 EOF
                        ^ {}
[end]
VC++ gets the result correct here (though it doesn't get it by the correct
method).
The algorithm is bizarre in relation to typical code interpreters, but it
is actually shockingly simple. The only times that it gets tricky are
when a -M occurs within the sequence of tokens that makes up an
invocation, which I glossed over above (because such cases don't occur
there). E.g. scenarios like:
MACRO -M ( ARG )
or
MACRO ( ARG, -M ARG )
In both of these cases, the -M must get executed prior to MACRO(...) being
replaced by MACRO's replacement list (and resuming scanning).
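To illustrate the first pattern, here is a minimal sketch (the M and N
definitions are mine, purely for illustration):

#define M() N
#define N(x) x

M()(123)
// scanning M() produces: +M N -M ( 123 )
// N's invocation spans the -M, so the -M must be executed when the left
// parenthesis is found; the correct result is 123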
The first of these is not very tricky--and both boost-pp and chaos-pp
require this. The algorithm merely has to execute the virtual token upon
finding the left parenthesis. The second is more tricky (and I don't
think even chaos-pp requires this one to work correctly) because the
algorithm has to *correctly* deal with possibly four different scenarios:
1. The argument is not used (any -M must still be executed therein).
2. The argument is used as an operand of # (any -M must still be executed
therein).
3. The argument is used as an operand of ## (any -M must still be executed
and tokens must be painted as required).
4. The argument is used as a non-operand of # or ## (recursive scan)
Worse, 2, 3, and 4 can all be used at the same time:
#define A(x) #x x ## x x
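To make that concrete, with the definition above:

A(foo) // "foo" foofoo foo -- the #'d raw argument, the ##'d paste, and
       // one rescanned use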
However, in 2 and 3, you cannot encounter a +M, so you can just push a +M
into a "reset" stack for every -M you find and then execute the reset
stack prior to processing an argument (provided the recursive scan case is
done last).
VC++'s macro expansion algorithm appears to be a combination of
optimizations and hacks that yields correct results for simple cases but
often yields incorrect results when the input requires the actual rules
to be followed. Obvious examples are the way __VA_ARGS__ is "optimized"
into a single argument and the extra scan applied in the following,
which causes it to yield 123 instead of A()--presumably a hack of some
kind.
#define A() 123
#define B() ()
#define C() A B()
C()
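For reference, the __VA_ARGS__ issue shows up in the classic
argument-counting idiom (a minimal sketch; COUNT is an illustrative
name, not a boost-pp macro):

#define COUNT(...) COUNT_I(__VA_ARGS__, 3, 2, 1)
#define COUNT_I(a, b, c, n, ...) n

COUNT(x, y) // should expand to 2; VC++ passes __VA_ARGS__ along as if
            // it were a single argument, so it yields 1 instead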
However, these are only the tip of the iceberg. I have a suspicion that
the algorithm used is actually totally unlike what it is supposed to be.
I.e. I'd guess it's a re-write-the-algorithm rather than a fix-a-few-bugs
scenario.
> A second issue is that quite often simple test cases are unavailable,
> since the sort of failures exhibited by Microsoft's preprocessor occur
> when one is attempting to use the facilities of the Boost PP library,
> and that is often at the cutting edge of what one can do with the
> preprocessor--aside from Chaos, which assumes a 100% compliant
> preprocessor.
Edward is correct here. The problems often occur in a combinatorial
fashion. This is almost certainly due to the actual algorithm operating
in a way very foreign to how it should, so when simple examples are
used, the output appears correct--except that you cannot see the
algorithm state (i.e. hidesets, blue paint, etc.), which is often *not*
correct. So, when things are used in combination, what appears to be
correct in isolation is actually carrying forward algorithm state that
affects further processing incorrectly. These types of scenarios are
inordinately difficult to find workarounds for, and any workarounds are
inherently unstable (because you don't know exactly what they are
"fixing"). All it really takes is some different input or a different
combination and things might break. Chaos has over 500 primary
interface macros and over two thousand primary and secondary interface
macros (i.e. SEQ_FOR_EACH is primary, SEQ_FOR_EACH_S is secondary
(derivative))--not to mention the number of implementation macros (which
is probably in the 10000 range, though I haven't counted). Full circle:
even if it were just a matter of testing permutations and combinations
of primary interfaces used only once, e·Γ(500 + 1, 1)--the number of
ordered arrangements of subsets of 500 items, i.e. the sum over k of
500!/(500 - k)!--is still about 3.3 x 10^1134 possible combinations.
Aka, not tenable.
The way boost-pp deals with this is essentially by having a completely
separate implementation for MSVC which tries to do everything it can to
force things to happen when they should, or before it is too late--which
frequently doesn't work.
<rant>
Since I've been doing this pp-stuff--which is well over a decade now--
I've seen numerous compiler and tool vendors fix or significantly
improve their preprocessors--several of which have informally consulted
with me. However, I have seen absolutely zero effort from MS. The stock
response to a bug report is "closed as won't fix" for anything related
to the preprocessor. I've heard--sometimes on this list--numerous people
who represent MS say that "they'll see what they can do," but nothing
ever happens. Fix your preprocessor, MS (which will probably gain you
nothing financially), and stop implementing extensions to the language
in lieu of implementing the language itself, and I'll actually maybe
believe that you care about C++, standards, and ethics.
</rant>
Regards,
Paul Mensonides
> Paul Mensonides wrote
>>
>> Hello all.
>>
>> In the past, I've said several times that sequences are superior to
>> every other data structure when dealing with a collection of elements
>> (tuples are better in some very specific contexts).
>>
>>
> Something that I found missing from Boost.PP docs is computational
> complexity (in terms of number of required macro expansions?) of the
> different algorithms for the different data structures. I take it that
> sequences have the best performance (i.e., smallest pp time because
> smallest number of macro expansions -- is that true?) but is that true
> for _all_ sequence algorithms when compared with other data structures?
> or are some tuple/array algorithms faster than the pp-seq equivalents?
Assuming amortized O(1) name lookup, computational complexity with macro
expansion is really about the number of tokens (and whitespace
separations) scanned (a particular sequence of tokens is sometimes
scanned more than once) and less about the number of macro replacements
per se. In terms of raw performance, nesting of macro arguments (i.e.
M(M(M(x)))) also plays a significant role: each argument (provided it is
used in the replacement list without being an operand of # or ##) causes
a recursive invocation of the scanner (along with whatever memory and
cycle requirements that implies), and those recursions are not tail
recursive--the output of those scans must be cached for later
substitution. Generally speaking,
#define A(x) B(x)
#define B(x) C(x)
#define C(x) x
A(123)
is more efficient than something like
C(C(C(123)))
because the former doesn't build up as much cache (in the typical case)
and has fewer caches alive at any one point. For an extremely small case
like the above, however, the latter might actually be faster because the
former builds up more hideset bookkeeping (virtual tokens or whatever
other mechanism is used).
Boost.Preprocessor doesn't provide that information because it is
extremely difficult to derive and because it is wildly different for
different compilers.
Chaos doesn't attempt to do it either. Chaos does usually indicate
exactly how many recursion steps are required for given input, but
that's a measure relative to a different constraint (not performance,
per se, but a type of resource). (The Chaos docs actually abuse big-O
because I couldn't decide whether I wanted exact forms or big-O, so the
docs currently have (e.g.) O(2n) instead of just O(n). I actually need
to change those formulas to be auto-generated from LaTeX or to use
MathML.)
> For example, I would have found such an analysis of Boost.PP
> computational complexity useful when programming complex pp macros
> (i.e., Boost.Contract). I've ended up using pp-lists everywhere and that
> was probably a very bad choice (and in fact Boost.Contract compilation
> times are huge :( ). I've used pp-list just because they handle nil "for
> free" but I could have used a pp-sequence (nil)(elem1)(elem2)... -- with
> a bit more implementation work to deal with the leading (nil) -- had I
> saw evidence that the pp-sequence implementation would have been truly
> faster.
Lists in Boost.Preprocessor are the only data structure type that can
have a nil state. However, they are worse in nearly every (practical)
way that comes to mind. As you mention, in client code one can simulate
nil for the other data types in a variety of ways. Perhaps the most
efficient would be something along the lines of:
// bseq: (0)(~) or (1)(seq)

// GET bseq
#define GET(b) GET_ ## b
#define GET_0(x)
#define GET_1(seq) seq

// IF_CONS bseq (t, f...)
#define IF_CONS(b) IF_CONS_A_ ## b
#define IF_CONS_A_0(x) IF_CONS_B_0
#define IF_CONS_A_1(x) IF_CONS_B_1
#define IF_CONS_B_0(t, ...) __VA_ARGS__
#define IF_CONS_B_1(t, ...) t

#define EAT(...)

#define A(bseq) IF_CONS bseq (B, EAT) (GET bseq)
#define B(seq) seq !

? A( (0)(~) )         // expands to nothing
? A( (1)((a)(b)(c)) ) // expands to (a)(b)(c) !
> One of those reasons is that, if a library is built for it, non-unary
> (or even variadic) elements are natural because the structure of the
> sequence itself encapsulates each element.
>
> (int)(std::pair<int, int>)(double)
>
>
> This would be very convenient. I programmed a case-by-case version of
> these in Boost.Contract but not a general variadic extension of
> pp-sequence.
A lot of stuff would have to change for boost-pp to deal with these.
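To sketch what "built for it" would mean (illustrative macro names of
mine, not boost-pp or Chaos interfaces, assuming a conforming C99/C++11
preprocessor): because every element is parenthesized, even
comma-containing elements destructure naturally with variadic macros:

#define SEQ_HEAD(seq) SEQ_HEAD_I seq )
#define SEQ_HEAD_I(...) __VA_ARGS__ SEQ_EAT (
#define SEQ_REST(seq) SEQ_REST_I seq
#define SEQ_REST_I(...)
#define SEQ_EAT(...)

SEQ_HEAD( (int)(std::pair<int, int>)(double) ) // int
SEQ_REST( (int)(std::pair<int, int>)(double) ) // (std::pair<int, int>)(double)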
>> Another reason is that, if a library is built for it and if
>> placemarkers ala C99/C++11 are available, there is a natural
>> representation of the nil sequence:
>>
>> /**/
>>
>>
> This would also be very convenient. But also automatically handling
> `(nil)` for nil sequences would be very convenient (and still faster
> than pp-lists?).
Yes, but it is probably better to predicate it as above or similarly.
(Reminds me a bit of the (bare-bones) lambda calculus.)
> As for adding Chaos to Boost.PP, I asked "why not" before and the issues
> with MVSC pp came about. Is this an opportunity to fix MSVC's pp? ;)
> (Some bugs have already been reported but unfortunately resolved as
> "won't fix".)
That opportunity has been around for years...
Regards,
Paul Mensonides
>> I understand that Chaos won't implement workarounds. I was thinking
>> more of forking Chaos and trying to implement it in MSVC. Would that
>> be a possibility? And with your knowledge of Chaos's implementation
>> and MSVC's limitations, do you think it would even be remotely
>> feasible?
>
> I don't think it's feasible. You *might* be able to emulate the
> base interface over a completely separate (and much larger)
> implementation. However, you won't be able to emulate the extensibility
> model. For example, one of the defining things about Chaos is its ability
> to generalize recursion (not referring to the sequence tricks that I
> started this thread with).
Well, actually this code here will work in MSVC:
#include <boost/preprocessor.hpp>

#define PP_EMPTY(...)
#define PP_EXPAND(...) __VA_ARGS__
#define PP_WHEN(x) BOOST_PP_IF(x, PP_EXPAND, PP_EMPTY)
#define PP_DEFER(m) m PP_EMPTY()
#define PP_OBSTRUCT(m) m PP_EMPTY PP_EMPTY()()

#define EVAL(...) \
    A(A(A(__VA_ARGS__))) \
    /**/
#define A(...) B(B(B(__VA_ARGS__)))
#define B(...) C(C(C(__VA_ARGS__)))
#define C(...) D(D(D(__VA_ARGS__)))
#define D(...) E(E(E(__VA_ARGS__)))
#define E(...) F(F(F(__VA_ARGS__)))
#define F(...) __VA_ARGS__
#define REPEAT(count, macro, ...) \
    PP_WHEN(count)( \
        REPEAT_INDIRECT PP_OBSTRUCT()()( \
            BOOST_PP_DEC(count), macro, __VA_ARGS__ \
        ) \
        macro PP_OBSTRUCT()( \
            BOOST_PP_DEC(count), __VA_ARGS__ \
        ) \
    ) \
    /**/
#define REPEAT_INDIRECT() REPEAT

#define PARAM(n, ...) BOOST_PP_COMMA_IF(n) __VA_ARGS__ ## n

EVAL( REPEAT(100, PARAM, class T) ) // class T0, class T1, ... class T99
#define FIXED(n, ...) BOOST_PP_COMMA_IF(n) __VA_ARGS__

#define TTP(n, ...) \
    BOOST_PP_COMMA_IF(n) \
    template<REPEAT(BOOST_PP_INC(n), FIXED, class)> class __VA_ARGS__ ## n \
    /**/

EVAL( REPEAT(3, TTP, T) )
// template<class> class T0,
// template<class, class> class T1,
// template<class, class, class> class T2
Of course, I had to add an extra level of scans in the EVAL macro;
otherwise it would stop at 62 repetitions. This will now handle up to
182 repetitions. Of course, a more efficient approach is taken in Chaos,
but perhaps the recursion backend could be implemented in MSVC. It would
be slower, which is OK since I don't develop on MSVC; I just need it to
build on MSVC. I will look into this further.
Thanks,
Paul Fultz II
I actually looked at doing something like this with Chaos, but there is no
way to stop the exponential number of scans.
That's a severe problem, but even if it weren't, the model is different,
as the above is essentially just an uncontrolled continuation machine.
Call-and-return semantics are destroyed, as there is no real notion of
yielding a result *now* unless you make A reentrant--but then you end up
with a *way* higher number of scans, either because you have (say) 16
separate sets like the above or because you have the above go at least
32 "deep." Consider, for example, when two things need to occur during
the exact same scan because the next scan is going to invoke something
that involves both results (anything with half-open parentheses, for
sure, but even stuff like conditionals). This, in turn, leads to
continuations rather than call-and-return semantics, but then one should
just make a continuation machine ala the one in Chaos or Vesa's original
one in Chaos' sister project Order. In the right context, continuations
are not necessarily a bad thing, but they are definitely a different
extensibility model.
Regards,
Paul Mensonides
>> Of course, a more efficient approach is taken in Chaos, but perhaps
>> the recursion backend could be implemented in MSVC. It would be
>> slower, which is OK since I don't develop on MSVC; I just need it to
>> build on MSVC. I will look into this further.
Both Boost.Preprocessor and Chaos are open source, distributed under the
highly nonrestrictive Boost license, so you are free to fork them as you
please. You can do that now, regardless of whether Chaos is part of
Boost or not.
However, the root problem here is VC++ itself. VC++ needs to stop
creating the problem rather than have every client of VC++ working
around the problem. In the time it would take to implement Chaos-level
functionality (or at least approximate it) on VC++, I could literally
write a preprocessor. Politics aside, it is far more work to do the
workarounds in the clients--in this case, even with just one client
(VC -> PP-L)--than it would be to fix VC++... even if that fix involves
rewriting the macro expansion algorithm entirely. If VC++ won't do it,
one is better off changing the root by using a better toolchain or by
integrating a better preprocessor into the toolchain (such as Wave).
Only a couple of days into discussing adding a no-workaround library
like Chaos to Boost, and we're already branching to apply workarounds...
And these are my main issues with moving Chaos to Boost: no policy to
govern (i.e. disallow) the application of workarounds, and no means of
stopping the fragmentation implied by branching the code. The current
situation is actually better than that. You have a portable subset of
functionality (Boost.Preprocessor) and a cutting-edge, but non-portable,
superset of functionality (chaos-pp). If (unaltered-toolchain)
portability is required, then use the portable subset. If not (i.e. you
are willing and able to change toolchains or to integrate another tool
into the toolchain), then use the superior superset.
Despite my ranting, VC++ as a whole is worlds away from where it was
circa VC6. However, this appears (to me, at least) to be driven entirely
by the bottom line--which only prioritizes based on corporate image and
user-base size (which is insignificantly small for pp metaprogramming as
long as Boost continues to provide workarounds for everything). The
bottom line is important, but it is not the only thing that is
important. I'd rather have a complete and correct language/library
implementation than "auto-vectorization" or AMP (neither of which I'm
saying is either good or bad, though I'm very much *not* a fan of
language extensions and language subsetting). The priorities should be:
1A. Correctly interpret correct input or fail. (REQ)
1B. Generate correct output. (REQ)
2. Correctly interpret all correct input. (REQ)
3A. Improve performance of output. (QOI)
4A. Improve diagnostic quality. (QOI)
4B. Improve performance of toolchain. (QOI)
5. Create various tools such as IDEs and libraries. (AUX)
The problem with VC++ is that it doesn't appear to prioritize (1A)
enough, and (2) appears way down the priority list, if it appears at
all. (5) should not really involve the toolchain as far as (1) and (2)
are concerned. Instead, what appears to be the case is that (5) is
hammered on (and usually in a marketing way) in lieu of (2). Tied into
that is the long release cycle that incorporates everything under the
sun (.NET, VS, C#--and, no, I don't care about C++/CLI or whatever it is
currently called). Herb has indicated that this may not continue as it
has, but I'll believe that when I see it.
Currently, there are other toolchains available on Windows that are more
successful at (1A) and (2), though possibly not always at (1B) (I
haven't used VC++ significantly for some time, so I am not aware of the
number of output bugs). Regardless of degree of success, there are other
toolchains available on Windows whose *order* of priorities more closely
reflects the above. For me to even consider providing VC++ workarounds
in any code that I write, not only does VC++ have to achieve a
reasonable level of success at (1) and (2), I also have to believe that
MS's order of priorities going forward is sound--which I don't. MS has
dug itself into that hole, and to me the presence of stuff like
AMP--which may well be a good research direction--speaks volumes about
the prioritization within MS.
Now, it may be that my perception is wrong or that things have changed,
but my perception and principles are ultimately what I use to make
decisions. If more software development had operated according to
similar principles, the initial cost of software would have been higher,
but the cost of software (relative, of course, to what the software
does) would be less in the long run (which, by now, would be now). VC's
preprocessor would already have been fixed, autotools would not exist,
etc.
Related to the above, if backward compatibility must be broken, break it
as soon as possible.
I have filed this as DevDiv#407151 "VC's preprocessor considered harmful (to Boost.Preprocessor)" in our internal database, including a copy of your entire mail.
If it were within my power to fix this for you, I would - but I am not a compiler dev.
Stephan T. Lavavej
Visual C++ Libraries Developer
If you look at the Microsoft bug-reporting page for preprocessor errors,
you will see other problems with the preprocessor. A number of these are
basically the same problem with variadic macros, but others are just
basic erroneous processing of preprocessor input, and Microsoft's
response often is that it is the way the VC++ preprocessor works and
will not be changed even if it is erroneous.
Here are some links:
https://connect.microsoft.com/VisualStudio/feedback/details/318940/macro-expansion-bug#details
I know this is overkill to list these here, but it really is not done as
an attempt to embarrass you or Microsoft. Rather, it is proof of what
Paul has said: that Microsoft has taken the view that it is not
important to fix the preprocessor. Even if Microsoft were concerned
about breaking backward compatibility, i.e. breaking code which uses the
incorrect implementation, they could create a correct preprocessor and
provide compiler switches to enable either the backward-compatibility
mode or the correct preprocessor mode.
> Is there anything Boost can do to support Paul (and Steven) by sending a collective message to
> Microsoft, lest they dismiss this as one man's rant?
I've already gotten Herb's attention about it. Where it goes after
that, I can't say.
--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com