[boost] [preprocessor] Sequences vs. All Other Data Structures


Paul Mensonides

Apr 21, 2012, 6:49:51 AM
to bo...@lists.boost.org
Hello all.

In the past, I've said several times that sequences are superior to
every other data structure when dealing with a collection of elements
(tuples are better in some very specific contexts).

One of those reasons is that, if a library is built for it, non-unary (or
even variadic) elements are natural because the structure of the sequence
itself encapsulates each element.

(int)(std::pair<int, int>)(double)

Another reason is that, if a library is built for it and if placemarkers
ala C99/C++11 are available, there is a natural representation of the nil
sequence:

/**/

...i.e. nothing. This leads to very natural appends, prepends, and joins,
since these operations are simply variations on juxtaposition.

s (e)
(e) s
s1 s2

I'll use these in the examples that follow.

Another reason, which I'll demonstrate in this post, is that the very
structure of a sequence usually contains the potential "processing power"
to process it. By "processing power," I'm referring to algorithmic
steps (i.e. headroom)--which can be a finite resource in preprocessor
metaprogramming. The structure of the sequence itself allows what I've
called "sequential iteration."

#define A(...) + B
#define B(...) - A

A(1)(2)(3) // + - + B

(The reason that this works is because B is not invoked "within" A and
vice versa.)

So, to convert a sequence to a comma-separated list, you can just do
something like

#define CSV(seq) CSV_A seq
#define CSV_A(...) __VA_ARGS__ CSV_B
#define CSV_B(...) , __VA_ARGS__ CSV_C
#define CSV_C(...) , __VA_ARGS__ CSV_B

CSV( (1)(2)(3) ) // 1, 2, 3 CSV_B

Unfortunately, we still have to get rid of the spurious CSV_A, CSV_B, or
CSV_C (depending on the length of the sequence) that comes out the end.
That's easily done with concatenation.

#define CSV(seq) \
CHAOS_PP_REVERSE_CAT(0, CSV_A seq) \
/**/
#define CSV_A(...) __VA_ARGS__ CSV_B
#define CSV_B(...) , __VA_ARGS__ CSV_C
#define CSV_C(...) , __VA_ARGS__ CSV_B
#define CSV_A0
#define CSV_B0
#define CSV_C0

CSV( (1)(2)(3) ) // 1, 2, 3

The REVERSE_CAT is used here so that the commas don't have to be encoded.
Essentially, it's defined as REVERSE_CAT(a, ...) => __VA_ARGS__ ## a
except that it allows its arguments to expand. A regular CAT could be
used, but you'd have to delay the existence of the commas. Now *if* the
number of scans applied to a macro argument is guaranteed by definition of
CAT (as it is with CHAOS_PP_CAT), one can do something like this

#define CC CHAOS_PP_DEFER(CHAOS_PP_COMMA)()

#define CSV(seq) \
CHAOS_PP_CAT(CSV_A seq, 0) \
/**/
#define CSV_A(...) __VA_ARGS__ CSV_B
#define CSV_B(...) CC __VA_ARGS__ CSV_C
#define CSV_C(...) CC __VA_ARGS__ CSV_B
#define CSV_A0
#define CSV_B0
#define CSV_C0

CSV( (1)(2)(3) ) // 1, 2, 3

The DEFER macro used here delays the expansion of the COMMA() invocation
through exactly one scan, which is all that CAT applies to its argument.
The CC is used here only as an abbreviation so that the example doesn't
get too cluttered.

This latter version is significantly more verbose, but this type of thing
is sometimes unavoidable--especially when dealing with unmatched
parentheses rather than commas, which is exactly what we'll run into next.

Let's say we wanted to calculate the length of the sequence.

#define LP CHAOS_PP_DEFER(CHAOS_PP_LPAREN)()
#define RP CHAOS_PP_DEFER(CHAOS_PP_RPAREN)()

#define SIZE(seq) \
CHAOS_PP_CAT(SIZE_A seq, 0) 0 CHAOS_PP_CAT(SIZE_C seq, 0) \
/**/
#define SIZE_A(...) CHAOS_PP_INC LP SIZE_B
#define SIZE_B(...) CHAOS_PP_INC LP SIZE_A
#define SIZE_A0
#define SIZE_B0
#define SIZE_C(...) RP SIZE_D
#define SIZE_D(...) RP SIZE_C
#define SIZE_C0
#define SIZE_D0

SIZE( (1)(2)(3) ) // CHAOS_PP_INC(CHAOS_PP_INC(CHAOS_PP_INC(0)))

(As with CC above, the LP and RP macros are just sugar to avoid clutter in
the example.)

Essentially, we've iterated the sequence twice. The first time, we
generated CHAOS_PP_INC( for each element. The second time, we generated )
for each element.

However, this doesn't "finish the job" because an expression like

MACRO CHAOS_PP_DEFER(CHAOS_PP_LPAREN)() ... )

causes the invocation of MACRO to be deferred through two scans, not one.
This is actually a good thing in this scenario, because we don't really
want CHAOS_PP_INC to be invoked "inside" CAT. To cause the evaluation, we
just add another scan:

#define LP CHAOS_PP_DEFER(CHAOS_PP_LPAREN)()
#define RP CHAOS_PP_DEFER(CHAOS_PP_RPAREN)()

#define SIZE(seq) \
SIZE_I(CHAOS_PP_CAT(SIZE_A seq, 0) 0 CHAOS_PP_CAT(SIZE_C seq, 0)) \
/**/
#define SIZE_A(...) CHAOS_PP_INC LP SIZE_B
#define SIZE_B(...) CHAOS_PP_INC LP SIZE_A
#define SIZE_A0
#define SIZE_B0
#define SIZE_C(...) RP SIZE_D
#define SIZE_D(...) RP SIZE_C
#define SIZE_C0
#define SIZE_D0
#define SIZE_I(x) x

SIZE( (1)(2)(3) ) // 3

Like BOOST_PP_INC, CHAOS_PP_INC saturates at a finite value, and thus the
above implementation can only "correctly" measure the length of
sequences of limited length. One could use the unlimited, but more
expensive, CHAOS_PP_ARBITRARY_INC instead if necessary.

Generating closing parentheses is common, so we'll generalize it:

#define LP CHAOS_PP_DEFER(CHAOS_PP_LPAREN)()
#define RP CHAOS_PP_DEFER(CHAOS_PP_RPAREN)()

#define CLOSE(seq) CHAOS_PP_CAT(CLOSE_A seq, 0)
#define CLOSE_A(...) RP CLOSE_B
#define CLOSE_B(...) RP CLOSE_A
#define CLOSE_A0
#define CLOSE_B0

The above then becomes

#define SIZE(seq) \
SIZE_I(CHAOS_PP_CAT(SIZE_A seq, 0) 0 CLOSE(seq)) \
/**/
#define SIZE_A(...) CHAOS_PP_INC LP SIZE_B
#define SIZE_B(...) CHAOS_PP_INC LP SIZE_A
#define SIZE_A0
#define SIZE_B0
#define SIZE_I(x) x

Now, let's do something more serious. I recently ran into a scenario
where I needed to generate something for each permutation of each element
of the powerset of a sequence.

For the powerset:

#define POWERSET(seq) \
POWERSET_1(CHAOS_PP_CAT(POWERSET_T seq, 0) CLOSE(seq)) \
/**/
#define POWERSET_1(x) x
#define POWERSET_2(e, ps) \
CHAOS_PP_IIF(CHAOS_PP_SEQ_IS_CONS(ps))( \
ps CHAOS_PP_SPLIT( \
1, \
POWERSET_5(CHAOS_PP_CAT(POWERSET_P ps, 0) e, CLOSE(ps)) \
), \
(e)() \
) \
/**/
#define POWERSET_3(...) POWERSET_4(__VA_ARGS__)
#define POWERSET_4(seq, e, ps) e, (e seq) ps
#define POWERSET_5(...) __VA_ARGS__
#define POWERSET_T(...) POWERSET_2 LP(__VA_ARGS__) CC POWERSET_F
#define POWERSET_F(...) POWERSET_2 LP(__VA_ARGS__) CC POWERSET_T
#define POWERSET_T0
#define POWERSET_F0
#define POWERSET_P(seq) POWERSET_3 LP seq CC POWERSET_Q
#define POWERSET_Q(seq) POWERSET_3 LP seq CC POWERSET_P
#define POWERSET_P0
#define POWERSET_Q0

POWERSET( (a)(b)(c) )
// ( (c) )
// ( )
// ( (b)(c) )
// ( (b) )
// ( (a)(c) )
// ( (a) )
// ( (a)(b)(c) )
// ( (a)(b) )

It is important to note that this implementation is not limited by any
library-defined resource. It is limited only by the amount of available
compiler resources. With unlimited CPU and memory resources, this
algorithm can compute the powerset of a sequence of *any* length.

For generating permutations, I'll use a helper algorithm which I've called
"FOCUS". The job of this algorithm is to produce a sequence which
isolates each element in the input sequence. For example,

FOCUS((a)(b)(c)(d), ~)
// ( (a)(b)(c), (d), , ~)
// ( (a)(b), (c), (d), ~)
// ( (a), (b), (c)(d), ~)
// ( , (a), (b)(c)(d), ~)

This is kind of a weird form. Essentially, for each element x in the
input sequence, the algorithm produces an element in the resulting
sequence of the form:

( seq-of-elements-before-x, (x), seq-of-elements-after-x, ~)

#define FOCUS(seq, ...) \
CHAOS_PP_INLINE_WHEN(CHAOS_PP_SEQ_IS_CONS(seq))( \
FOCUS_1( \
(CHAOS_PP_SEQ_HEAD(seq)), \
CHAOS_PP_SEQ_TAIL(seq), \
__VA_ARGS__
) \
) \
/**/
#define FOCUS_1(h, t, ...) \
FOCUS_4(CHAOS_PP_CAT(FOCUS_T t, 0)(, h, t, __VA_ARGS__) CLOSE(t)) \
/**/
#define FOCUS_2(seq) FOCUS_3 seq
#define FOCUS_3(l, m, r, ...) \
(l m, (CHAOS_PP_SEQ_HEAD(r)), CHAOS_PP_SEQ_TAIL(r), __VA_ARGS__) \
(l, m, r, __VA_ARGS__) \
/**/
#define FOCUS_4(x) x
#define FOCUS_T(...) FOCUS_2 LP FOCUS_F
#define FOCUS_F(...) FOCUS_2 LP FOCUS_T
#define FOCUS_T0
#define FOCUS_F0

As with POWERSET, this implementation is unlimited WRT library resources.

Finally, the permutations themselves:

#define PERMUTE(seq) \
PERMUTE_3(CHAOS_PP_CAT(PERMUTE_P seq, 0) PERMUTE_1(seq,) CLOSE(seq)) \
/**/
#define PERMUTE_1(seq, pf) \
CHAOS_PP_IIF( \
CHAOS_PP_BITAND(CHAOS_PP_SEQ_IS_CONS(seq)) \
(CHAOS_PP_SEQ_IS_CONS(CHAOS_PP_SEQ_TAIL(seq))))( \
PERMUTE_2, (pf seq) CHAOS_PP_EAT \
)(FOCUS(seq, pf)) \
/**/
#define PERMUTE_1_INDIRECT() PERMUTE_1
#define PERMUTE_2(fs) CHAOS_PP_CAT(PERMUTE_T fs, 0)
#define PERMUTE_3(x) x
#define PERMUTE_4(x) x
#define PERMUTE_T(l, m, r, pf) \
CHAOS_PP_OBSTRUCT(PERMUTE_1_INDIRECT)()(l r, pf m) PERMUTE_F \
/**/
#define PERMUTE_F(l, m, r, pf) \
CHAOS_PP_OBSTRUCT(PERMUTE_1_INDIRECT)()(l r, pf m) PERMUTE_T \
/**/
#define PERMUTE_T0
#define PERMUTE_F0
#define PERMUTE_P(...) PERMUTE_4 LP PERMUTE_Q
#define PERMUTE_Q(...) PERMUTE_4 LP PERMUTE_P
#define PERMUTE_P0
#define PERMUTE_Q0

PERMUTE((a)(b)(c))
// ( (c)(b)(a) )
// ( (c)(a)(b) )
// ( (b)(c)(a) )
// ( (b)(a)(c) )
// ( (a)(c)(b) )
// ( (a)(b)(c) )

Putting these together:

#define A(s, seq) \
CHAOS_PP_EXPR_S(s)(CHAOS_PP_SEQ_FOR_EACH_S( \
s, B, PERMUTE(seq) \
)) \
/**/
#define B(s, seq) CHAOS_PP_SEQ_CONCAT(seq)

CHAOS_PP_EXPR(CHAOS_PP_SEQ_FOR_EACH(
A, POWERSET( (a)(b)(c) )
))

// c cb bc b ca ac a cba cab bca bac acb abc ba ab

This generates

\sum_{k=0}^n\frac{n!}{\left({n-k}\right)!} = e\int_1^\infty t^ne^{-t}\,dt
= e\Gamma\left({n + 1, 1}\right)

results (apologies for the LaTeX). For n = 3, as above, that's 16 results
(where the 16th is empty from the nil sequence). For n = 6, that's 1957
results. For n = 7, it's 13700 results--which was enough to ICE g++
running on my VM.

The moral of the story is that sequences are powerful because in many
cases, an input sequence itself often contains enough "potential energy"
to process it. It isn't very difficult to produce n^2, 2^n, or even n^n
computational steps.

I only really needed the case for n = 3, and I ultimately didn't use the
above because the cost of a runtime solution was acceptable and not
really worth adding the above machinery (and I didn't want to take the
time to add it to Chaos), but I thought I'd share it anyway.

Regards,
Paul Mensonides

implementation...
g++ -E -P -std=c++11 (or -std=c++0x) -I $CHAOS_ROOT <filename>
---

#include <chaos/preprocessor/cat.h>
#include <chaos/preprocessor/control/iif.h>
#include <chaos/preprocessor/control/inline_when.h>
#include <chaos/preprocessor/facilities/split.h>
#include <chaos/preprocessor/logical/bitand.h>
#include <chaos/preprocessor/punctuation/comma.h>
#include <chaos/preprocessor/punctuation/paren.h>
#include <chaos/preprocessor/recursion/basic.h>
#include <chaos/preprocessor/recursion/expr.h>
#include <chaos/preprocessor/seq/concat.h>
#include <chaos/preprocessor/seq/core.h>
#include <chaos/preprocessor/seq/for_each.h>
#include <chaos/preprocessor/tuple/eat.h>

#ifndef SEQ
#define SEQ (a)(b)(c)
#endif

#define LP CHAOS_PP_DEFER(CHAOS_PP_LPAREN)()
#define RP CHAOS_PP_DEFER(CHAOS_PP_RPAREN)()
#define CC CHAOS_PP_DEFER(CHAOS_PP_COMMA)()

#define CLOSE(seq) CHAOS_PP_CAT(CLOSE_T seq, 0)
#define CLOSE_T(...) RP CLOSE_F
#define CLOSE_F(...) RP CLOSE_T
#define CLOSE_T0
#define CLOSE_F0

#define POWERSET(seq) \
POWERSET_1(CHAOS_PP_CAT(POWERSET_T seq, 0) CLOSE(seq)) \
/**/
#define POWERSET_1(x) x
#define POWERSET_2(e, ps) \
CHAOS_PP_IIF(CHAOS_PP_SEQ_IS_CONS(ps))( \
ps CHAOS_PP_SPLIT( \
1, \
POWERSET_5(CHAOS_PP_CAT(POWERSET_P ps, 0) e, CLOSE(ps)) \
), \
(e)() \
) \
/**/
#define POWERSET_3(...) POWERSET_4(__VA_ARGS__)
#define POWERSET_4(seq, e, ps) e, (e seq) ps
#define POWERSET_5(...) __VA_ARGS__
#define POWERSET_T(...) POWERSET_2 LP(__VA_ARGS__) CC POWERSET_F
#define POWERSET_F(...) POWERSET_2 LP(__VA_ARGS__) CC POWERSET_T
#define POWERSET_T0
#define POWERSET_F0
#define POWERSET_P(seq) POWERSET_3 LP seq CC POWERSET_Q
#define POWERSET_Q(seq) POWERSET_3 LP seq CC POWERSET_P
#define POWERSET_P0
#define POWERSET_Q0

// --

#define FOCUS(seq, ...) \
CHAOS_PP_INLINE_WHEN(CHAOS_PP_SEQ_IS_CONS(seq))( \
FOCUS_1( \
(CHAOS_PP_SEQ_HEAD(seq)), CHAOS_PP_SEQ_TAIL(seq), \
__VA_ARGS__ \
) \
) \
/**/
#define FOCUS_1(h, t, ...) \
FOCUS_4(CHAOS_PP_CAT(FOCUS_T t, 0)(, h, t, __VA_ARGS__) CLOSE(t)) \
/**/
#define FOCUS_2(seq) FOCUS_3 seq
#define FOCUS_3(l, m, r, ...) \
(l m, (CHAOS_PP_SEQ_HEAD(r)), CHAOS_PP_SEQ_TAIL(r), __VA_ARGS__) \
(l, m, r, __VA_ARGS__) \
/**/
#define FOCUS_4(x) x
#define FOCUS_T(...) FOCUS_2 LP FOCUS_F
#define FOCUS_F(...) FOCUS_2 LP FOCUS_T
#define FOCUS_T0
#define FOCUS_F0

// --

#define PERMUTE(seq) \
PERMUTE_3(CHAOS_PP_CAT(PERMUTE_P seq, 0) PERMUTE_1(seq,) CLOSE(seq)) \
/**/
#define PERMUTE_1(seq, pf) \
CHAOS_PP_IIF( \
CHAOS_PP_BITAND(CHAOS_PP_SEQ_IS_CONS(seq)) \
(CHAOS_PP_SEQ_IS_CONS(CHAOS_PP_SEQ_TAIL(seq))))( \
PERMUTE_2, (pf seq) CHAOS_PP_EAT \
)(FOCUS(seq, pf)) \
/**/
#define PERMUTE_1_INDIRECT() PERMUTE_1
#define PERMUTE_2(fs) CHAOS_PP_CAT(PERMUTE_T fs, 0)
#define PERMUTE_3(x) x
#define PERMUTE_4(x) x
#define PERMUTE_T(l, m, r, pf) \
CHAOS_PP_OBSTRUCT(PERMUTE_1_INDIRECT)()(l r, pf m) PERMUTE_F \
/**/
#define PERMUTE_F(l, m, r, pf) \
CHAOS_PP_OBSTRUCT(PERMUTE_1_INDIRECT)()(l r, pf m) PERMUTE_T \
/**/
#define PERMUTE_T0
#define PERMUTE_F0
#define PERMUTE_P(...) PERMUTE_4 LP PERMUTE_Q
#define PERMUTE_Q(...) PERMUTE_4 LP PERMUTE_P
#define PERMUTE_P0
#define PERMUTE_Q0

// --

#define A(s, seq) \
CHAOS_PP_EXPR_S(s)(CHAOS_PP_SEQ_FOR_EACH_S( \
s, B, PERMUTE(seq) \
)) \
/**/
#define B(s, seq) CHAOS_PP_SEQ_CONCAT(seq)

CHAOS_PP_EXPR(CHAOS_PP_SEQ_FOR_EACH(
A, POWERSET(SEQ)
))



_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost

Nathan Ridge

Apr 21, 2012, 4:49:11 PM
to Boost Developers Mailing List

Slightly off-topic: are there any plans to add Chaos to Boost?

Thanks,
Nate

Dave Abrahams

Apr 21, 2012, 5:42:25 PM
to bo...@lists.boost.org

on Sat Apr 21 2012, Nathan Ridge <zeratul976-AT-hotmail.com> wrote:

> Slightly off-topic: are there any plans to add Chaos to Boost?

+1

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Paul Mensonides

Apr 21, 2012, 10:17:42 PM
to bo...@lists.boost.org
On Sat, 21 Apr 2012 17:42:25 -0400, Dave Abrahams wrote:

> on Sat Apr 21 2012, Nathan Ridge <zeratul976-AT-hotmail.com> wrote:
>
>> Slightly off-topic: are there any plans to add Chaos to Boost?
>
> +1

As some sort of technological preview? It is not portable enough to be
used within Boost, though perhaps a slew of regression failures will cause
that to change (unlikely). <rant>MS, in particular, is too busy taking it
upon themselves to "ensure C++ is relevant" to prioritize implementing the
language.</rant> Also, one of the principles of Chaos is to have
absolutely zero compiler workarounds, and that would have to stay if it
was ever part of Boost.

Regards,
Paul Mensonides

Nathan Ridge

Apr 21, 2012, 11:10:29 PM
to Boost Developers Mailing List

> >> Slightly off-topic: are there any plans to add Chaos to Boost?
> >
> > +1
>
> As some sort of technological preview? It is not portable enough to be
> used within Boost, though perhaps a slew of regression failures will cause
> that to change (unlikely). <rant>MS, in particular, is too busy taking it
> upon themselves to "ensure C++ is relevant" to prioritize implementing the
> language.</rant> Also, one of the principles of Chaos is to have
> absolutely zero compiler workarounds, and that would have to stay if it
> was ever part of Boost.

I don't see why one compiler's lack of standards-conformance should prevent
a useful library from becoming part of Boost.

Regards,
Nate

paul Fultz

Apr 22, 2012, 12:34:34 AM
to bo...@lists.boost.org


>________________________________
> From: Paul Mensonides <pmen...@comcast.net>
>To: bo...@lists.boost.org
>Sent: Saturday, April 21, 2012 10:17 PM
>Subject: Re: [boost] [preprocessor] Sequences vs. All Other Data Structures


>
>On Sat, 21 Apr 2012 17:42:25 -0400, Dave Abrahams wrote:
>
>> on Sat Apr 21 2012, Nathan Ridge <zeratul976-AT-hotmail.com> wrote:
>>
>>> Slightly off-topic: are there any plans to add Chaos to Boost?
>>
>> +1
>
>As some sort of technological preview?  It is not portable enough to be
>used within Boost, though perhaps a slew of regression failures will cause
>that to change (unlikely).  <rant>MS, in particular, is too busy taking it
>upon themselves to "ensure C++ is relevant" to prioritize implementing the
>language.</rant>  Also, one of the principles of Chaos is to have
>absolutely zero compiler workarounds, and that would have to stay if it
>was ever part of Boost.

There are workarounds for most of the library. How well do active
arguments work with msvc? Are there possible workarounds?

I know I used the PP_DEFER macros before in msvc, which worked quite
well. I used them to actually work around the fact that msvc implements
the "and, not, or, etc." keywords as macros, which can make it difficult
to cat them, or detect whether the keyword was there or not. So the
macros by themselves would expand to their appropriate operator, but if
they were expanded inside of the PP_CONTEXT() macro, then they would
expand to something else. Like this:

#define not PP_CONTEXT_DEFINE(PP_MSVC_NOT, !)

not // expands to !
PP_CONTEXT(not) // expands to PP_MSVC_NOT

So I used the PP_DEFER macro to delay the macro expansion for one scan.
Then it detects if PP_CONTEXT is being expanded recursively, and chooses
the alternate macro. There is also a PP_EXPAND that is needed, as well.
It can work through several scans, too, by having a nested PP_DEFER for
each scan (and a corresponding PP_EXPAND). I actually got the idea for
this from the CHAOS_PP_RAIL and CHAOS_PP_WALL macros, which are used for
a different purpose.

But anyways, I wonder if it's even possible (with workarounds) to
implement recursion from chaos in msvc?

Paul Mensonides

Apr 22, 2012, 2:25:51 AM
to bo...@lists.boost.org
On Sat, 21 Apr 2012 21:34:34 -0700, paul Fultz wrote:

> But anyways, I wonder if its even possible(with workarounds) to
>
> implement recursion from chaos in msvc?

The point is that I don't care. By definition, Chaos = NO WORKAROUNDS.

Regards,
Paul Mensonides

Paul Mensonides

Apr 22, 2012, 2:31:18 AM
to bo...@lists.boost.org
On Sun, 22 Apr 2012 03:10:29 +0000, Nathan Ridge wrote:

>> >> Slightly off-topic: are there any plans to add Chaos to Boost?
>> >
>> > +1
>>
>> As some sort of technological preview? It is not portable enough to be
>> used within Boost, though perhaps a slew of regression failures will
>> cause that to change (unlikely). <rant>MS, in particular, is too busy
>> taking it upon themselves to "ensure C++ is relevant" to prioritize
>> implementing the language.</rant> Also, one of the principles of Chaos
>> is to have absolutely zero compiler workarounds, and that would have to
>> stay if it was ever part of Boost.
>
> I don't see why one compiler's lack of standards-conformance should
> prevent a useful library from becoming part of Boost.

Because it is not just MSVC that's the problem, and I know what the
tendency will be. This will work with compiler XYZ with *just* this
little workaround.... Any workaround whatsoever in Chaos is absolutely
unacceptable to me. It ruins the very point of the library.

Regards,
Paul Mensonides

paul Fultz

Apr 22, 2012, 3:09:52 AM
to bo...@lists.boost.org

>> But anyways, I wonder if its even possible(with workarounds) to
>>
>> implement recursion from chaos in msvc?
>
> The point is that I don't care.  By definition, Chaos = NO WORKAROUNDS.

I understand that Chaos won't implement workarounds. I was thinking more
of forking chaos, and trying to implement it in msvc. Would that be a possibility?
And with your knowledge of Chaos's implementation and msvc limitations,
do you think it would even be remotely feasible?

Thanks,
Paul Fultz II

Nathan Ridge

Apr 22, 2012, 3:51:11 AM
to Boost Developers Mailing List

> >> >> Slightly off-topic: are there any plans to add Chaos to Boost?
> >> >
> >> > +1
> >>
> >> As some sort of technological preview? It is not portable enough to be
> >> used within Boost, though perhaps a slew of regression failures will
> >> cause that to change (unlikely). <rant>MS, in particular, is too busy
> >> taking it upon themselves to "ensure C++ is relevant" to prioritize
> >> implementing the language.</rant> Also, one of the principles of Chaos
> >> is to have absolutely zero compiler workarounds, and that would have to
> >> stay if it was ever part of Boost.
> >
> > I don't see why one compiler's lack of standards-conformance should
> > prevent a useful library from becoming part of Boost.
>
> Because it is not just MSVC that's the problem, and I know what the
> tendency will be. This will work with compiler XYZ with *just* this
> little workaround.... Any workaround whatsoever in Chaos is absolutely
> unacceptable to me. It ruins the very point of the library.

The library could be proposed to Boost with the explicit understanding
that it is intended to work only with fully standards-conforming
preprocessors. In the long run its presence in Boost might even contribute
to putting pressure on vendors of non-conformant preprocessors to get
their act together.

Regards,
Nate

Paul Mensonides

Apr 22, 2012, 5:33:19 AM
to bo...@lists.boost.org
On Sun, 22 Apr 2012 00:09:52 -0700, paul Fultz wrote:

> I understand that Chaos won't implement workarounds. I was thinking more
> of forking chaos, and trying to implement it in msvc. Would that be a
> possibility? And with your knowledge of Chaos's implementation and msvc
> limitations, do you even think that it would even be remotely feasible?

I don't think it's feasible. You *might* be able to emulate the
base interface over a completely separate (and much larger)
implementation. However, you won't be able to emulate the extensibility
model. For example, one of the defining things about Chaos is its ability
to generalize recursion (not referring to the sequence tricks that I
started this thread with). That generalization of recursion reaches into
client code. This model is quite different from (e.g.) the Boost pp-lib
where you have

end-user -> boost-pp

This end-user might be a library, of course, and there may be a few
exceptions, but not really. The pp-lib itself is incapable of reusing
itself properly--how many times have there been complaints that algorithm
XYZ is not reentrant? Essentially, the boost-pp model is just not
extensible.

In Chaos, however, you have a model that is more like regular software:

end-user -> { chaos-pp 3rd-party-library }
3rd-party-library -> { chaos-pp 3rd-party-library }

The closest reference to the core language that either boost-pp or chaos-
pp makes is probably ENUM_PARAMS. However, there is a huge "unexplored"
area between the relatively low-level preprocessor libraries like boost-pp
and chaos-pp and the higher level semantics of the core language with
scenarios that are more specific than the low-level libraries but still
library-izable (!!).

For example, any powerset contains the nil set. However, one doesn't
always want it in the result. So, I might do:

#define A(s, seq) \
CHAOS_PP_INLINE_WHEN(CHAOS_PP_SEQ_IS_CONS(seq))( \
/* DO WHATEVER */ \
) \
/**/

CHAOS_PP_EXPR(CHAOS_PP_SEQ_FOR_EACH(
A, POWERSET(SEQ)
))

However, even though /* DO WHATEVER */ may not be generalizable, the
predication is. So, I might instead do something like:

#define FILTER(s, e, p, ...) \
FILTER_I( \
CHAOS_PP_OBSTRUCT(), CHAOS_PP_NEXT(s), e, p, \
CHAOS_PP_NON_OPTIONAL(__VA_ARGS__), \
CHAOS_PP_PACK_OPTIONAL(__VA_ARGS__) \
) \
/**/
#define FILTER_I(_, s, e, p, m, pack) \
CHAOS_PP_INLINE_WHEN _( \
CHAOS_PP_CALL(p)()(s, p, e CHAOS_PP_EXPOSE(pack)) \
)( \
CHAOS_PP_CALL(m)()(s, m, e CHAOS_PP_EXPOSE(pack)) \
) \
/**/

#define A(s, seq) /* DO WHATEVER */

CHAOS_PP_EXPR(CHAOS_PP_SEQ_FOR_EACH(
FILTER, POWERSET(SEQ), CHAOS_PP_SEQ_IS_CONS_(CHAOS_PP_ARG(1)), A
))

This isn't perfect--it doesn't deal with non-unary sequences, for
example. However, that could be accomplished with similar sorts of
chaining.

Now, in the above, SEQ_FOR_EACH is reentrant, the predicate called by
FILTER can use FILTER, and thus FILTER is reentrant. The macro A can be
made reentrant (if necessary). And all these reentrancies are bouncing
around between potentially multiple libraries and user code (let's say
CHAOS_PP_* is provided by Chaos (C), FILTER is provided by library A,
POWERSET is provided by library B (which somehow uses library A), and the
end-user (E) does the above).

This works because of the extensibility of the architecture. Emulating
the interface is one thing--which still might not be possible with MSVC.
Emulating the extensibility with MSVC, in particular, is either
impossible or difficult beyond belief.

Regards,
Paul Mensonides

Paul Mensonides

unread,
Apr 22, 2012, 5:45:37 AM4/22/12
to bo...@lists.boost.org
On Sun, 22 Apr 2012 07:51:11 +0000, Nathan Ridge wrote:

>> > I don't see why one compiler's lack of standards-conformance should
>> > prevent a useful library from becoming part of Boost.
>>
>> Because it is not just MSVC that's the problem, and I know what the
>> tendency will be. This will work with compiler XYZ with *just* this
>> little workaround.... Any workaround whatsoever in Chaos is absolutely
>> unacceptable to me. It ruins the very point of the library.
>
> The library could be proposed to Boost with the explicit understanding
> that it is intended to work only with fully standards-conforming
> preprocessors. In the long run its presence in Boost might even
> contribute to putting pressure on vendors of non-conformant
> preprocessors to get their act together.

I doubt the latter would happen. Essentially, because Chaos cannot be
used when targeting VC++, Boost cannot itself use it. Without Boost
itself using it, the target audience is too small. Now, if Boost said
"screw VC++" and used Chaos anyway, that *might* do it. However, it would
break Boost on more compilers than just VC++, and then we'd more likely
just get a Python 2.x vs. 3.x situation (apologies, Dave, if I'm
mis-characterizing the current Python state), except with Boost.

Now, IFF there is a place in Boost for a library like Chaos which
currently contains no workarounds and henceforth must not contain
workarounds, then I'd be willing. However, there is no current analog of
that in Boost, nor any Boost policy addressing that type of requirement,
nor any means of enforcement (even a would-be automated enforcement,
such as grepping for _MSC_VER, etc., wouldn't work, because even slightly
rearranging code in a less than (theoretically) ideal way is still a
workaround).

Regards,
Paul Mensonides

paul Fultz

Apr 22, 2012, 1:42:55 PM
to bo...@lists.boost.org


> I don't think it's feasible.  You *might* be able to emulate the
> base interface over a completely separate (and much larger)
> implementation.  However, you won't be able to emulate the extensibility
> model.  For example, one of the defining things about Chaos is its ability
> to generalize recursion (not referring to the sequence tricks that I
> started this thread with). 
One of the key things I would like to implement in msvc is the recursion
backend from Chaos, so there wouldn't be the need to implement a
plethora of macros to make one macro re-entrant. I suppose, though, that
doesn't look feasible in msvc.

> The closest reference to the core language that either boost-pp or chaos-
> pp makes is probably ENUM_PARAMS.  However, there is a huge
> "unexplored"
> area between the relatively low-level preprocessor libraries like boost-pp
> and chaos-pp and the higher level semantics of the core language with
> scenarios that are more specific than the low-level libraries but still
> library-izable (!!).

I think one of the other "unexplored" areas of the preprocessor is the
development of DSLs. It would be nice if there were a high-level library
solution for this.

Thanks,
Paul Fultz II

Edward Diener

Apr 22, 2012, 3:05:53 PM
to bo...@lists.boost.org
On 4/22/2012 5:45 AM, Paul Mensonides wrote:
> On Sun, 22 Apr 2012 07:51:11 +0000, Nathan Ridge wrote:
>
>>>> I don't see why one compiler's lack of standards-conformance should
>>>> prevent a useful library from becoming part of Boost.
>>>
>>> Because it is not just MSVC that's the problem, and I know what the
>>> tendency will be. This will work with compiler XYZ with *just* this
>>> little workaround.... Any workaround whatsoever in Chaos is absolutely
>>> unacceptable to me. It ruins the very point of the library.
>>
>> The library could be proposed to Boost with the explicit understanding
>> that it is intended to work only with fully standards-conforming
>> preprocessors. In the long run its presence in Boost might even
>> contribute to putting pressure on vendors of non-conformant
>> preprocessors to get their act together.
>
> I doubt the latter that would happen. Essentially, because Chaos cannot
> be used when targeting VC++, Boost cannot itself use it.

This is not completely true. Even though it would provide more work for
a library implementor, a library could choose to use Boost Chaos for
compilers that support it and choose to use Boost PP for compilers which
do not ( including VC ).

In that case if Boost Chaos were better for preprocessor metaprogramming
than Boost PP, as evidenced by Boost implementors using Boost Chaos
instead of Boost PP, it would provide impetus for compiler vendors to
make their preprocessor 100% C++11 compliant (and no, I do not expect
VC++ to ever make an effort to be compliant but that is neither here nor
there).

> Without Boost
> itself using it, the target audience is too small. Now, if Boost said
> "screw VC++" and used Chaos anyway, that *might* do it. However, it would
> break Boost on more compilers than just VC++, and then we'd more likely
> just get a Python 2.x vs 3.x instead (apologies, Dave, if I'm mis-
> characterizing the current Python state) except with Boost.
>
> Now, IFF there is a place in Boost for a library like Chaos which
> currently contains no workarounds and henceforth must not contain
> workarounds, then I'd be willing. However, there is no current analog of
> that in Boost, nor any Boost policy addressing that type of requirement,
> nor any means of enforcement (even a would-be automated enforcement such
> as grepping for _MSC_VER (etc.) would work because even slightly
> rearranging code in a less than (theoretically) ideal way is still a
> workaround).

Just my opinion but...

I think there should be a place in Boost for implementations which are
supported by a subset of compilers. I also see nothing wrong with a
library which only supports compilers that implement some area of the
standard, and provides zero workarounds for compilers which do not
implement that area of the standard 100% correctly.

Sebastian Redl

Apr 22, 2012, 3:21:35 PM
to bo...@lists.boost.org

On 22.04.2012, at 21:05, Edward Diener wrote:

> On 4/22/2012 5:45 AM, Paul Mensonides wrote:
>> On Sun, 22 Apr 2012 07:51:11 +0000, Nathan Ridge wrote:
>>
>>>>> I don't see why one compiler's lack of standards-conformance should
>>>>> prevent a useful library from becoming part of Boost.
>>>>
>>>> Because it is not just MSVC that's the problem, and I know what the
>>>> tendency will be. This will work with compiler XYZ with *just* this
>>>> little workaround.... Any workaround whatsoever in Chaos is absolutely
>>>> unacceptable to me. It ruins the very point of the library.
>>>
>>> The library could be proposed to Boost with the explicit understanding
>>> that it is intended to work only with fully standards-conforming
>>> preprocessors. In the long run its presence in Boost might even
>>> contribute to putting pressure on vendors of non-conformant
>>> preprocessors to get their act together.
>>
>> I doubt the latter would happen. Essentially, because Chaos cannot
>> be used when targeting VC++, Boost cannot itself use it.
>
> This is not completely true. Even though it would provide more work for a library implementor, a library could choose to use Boost Chaos for compilers that support it and choose to use Boost PP for compilers which do not ( including VC ).

What advantage would that give the library implementor over just using Boost PP?

Sebastian

Edward Diener

Apr 22, 2012, 6:09:38 PM4/22/12
to bo...@lists.boost.org
The same advantage C++ programmers get when they use standard library
containers instead of C arrays. They would be using better technology
and promoting that technology. I do know that they would still have to
use Boost PP for compilers that do not completely implement the
preprocessor correctly for C++11, but there is a gratification of using
better technology when one can.

Being held back from using better technology because of compiler
deficiencies, even in a multi-compiler environment, is not my idea of
pleasurable programming.

Dave Abrahams

Apr 22, 2012, 6:47:49 PM4/22/12
to bo...@lists.boost.org

on Sat Apr 21 2012, Nathan Ridge <zeratul976-AT-hotmail.com> wrote:

>> >> Slightly off-topic: are there any plans to add Chaos to Boost?
>> >
>> > +1
>>
>> As some sort of technological preview? It is not portable enough to be
>> used within Boost, though perhaps a slew of regression failures will cause
>> that to change (unlikely). <rant>MS, in particular, is too busy taking it
>> upon themselves to "ensure C++ is relevant" to prioritize implementing the
>> language.</rant> Also, one of the principles of Chaos is to have
>> absolutely zero compiler workarounds, and that would have to stay if it
>> was ever part of Boost.
>
> I don't see why one compiler's lack of standards-conformance should prevent
> a useful library from becoming part of Boost.

In principle, there is no reason. I would much rather have Chaos
available in Boost even with zero workarounds.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Dave Abrahams

Apr 22, 2012, 6:52:10 PM4/22/12
to bo...@lists.boost.org

on Sun Apr 22 2012, Paul Mensonides <pmenso57-AT-comcast.net> wrote:

> On Sun, 22 Apr 2012 07:51:11 +0000, Nathan Ridge wrote:
>
>>> > I don't see why one compiler's lack of standards-conformance should
>>> > prevent a useful library from becoming part of Boost.
>>>
>>> Because it is not just MSVC that's the problem, and I know what the
>>> tendency will be. This will work with compiler XYZ with *just* this
>>> little workaround.... Any workaround whatsoever in Chaos is absolutely
>>> unacceptable to me. It ruins the very point of the library.
>>
>> The library could be proposed to Boost with the explicit understanding
>> that it is intended to work only with fully standards-conforming
>> preprocessors. In the long run its presence in Boost might even
>> contribute to putting pressure on vendors of non-conformant
>> preprocessors to get their act together.
>
> I doubt the latter would happen. Essentially, because Chaos cannot
> be used when targeting VC++, Boost cannot itself use it. Without Boost
> itself using it, the target audience is too small. Now, if Boost said
> "screw VC++" and used Chaos anyway, that *might* do it. However, it would
> break Boost on more compilers than just VC++, and then we'd more likely
> just get a Python 2.x vs 3.x split (apologies, Dave, if I'm
> mis-characterizing the current Python state) except with Boost.
>
> Now, IFF there is a place in Boost for a library like Chaos which
> currently contains no workarounds and henceforth must not contain
> workarounds, then I'd be willing.

As far as I'm concerned, there are two places:

1. As a zero-workaround sub-library of Boost.PP
2. After review and acceptance as a standalone library.

> However, there is no current analog of that in Boost, nor any Boost
> policy addressing that type of requirement, nor any means of
> enforcement (even a would-be automated enforcement such as grepping
> for _MSC_VER (etc.) wouldn't work because even slightly rearranging code
> in a less than (theoretically) ideal way is still a workaround).


It does leave me wondering what you will do when you discover an
ambiguity in the standard that two implementations interpret
differently, but that's a bridge you can cross when you come to it.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Stephan T. Lavavej

Apr 22, 2012, 8:43:02 PM4/22/12
to bo...@lists.boost.org
[Edward Diener]

> In that case if Boost Chaos were better for preprocessor metaprogramming
> than Boost PP, as evidenced by Boost implementors using Boost Chaos
> instead of Boost PP, it would provide impetus for compiler vendors to
> make their preprocessor 100% C++11 compliant (and no, I do not expect
> VC++ to ever make an effort to be compliant but that is neither here nor
> there).

If someone could prepare *minimized* test cases demonstrating the VC bugs that are holding you back here, I could file compiler bugs.

STL

Edward Diener

Apr 22, 2012, 9:51:06 PM4/22/12
to bo...@lists.boost.org
On 4/22/2012 8:43 PM, Stephan T. Lavavej wrote:
> [Edward Diener]
>> In that case if Boost Chaos were better for preprocessor metaprogramming
>> than Boost PP, as evidenced by Boost implementors using Boost Chaos
>> instead of Boost PP, it would provide impetus for compiler vendors to
>> make their preprocessor 100% C++11 compliant (and no, I do not expect
>> VC++ to ever make an effort to be compliant but that is neither here nor
>> there).
>
> If someone could prepare *minimized* test cases demonstrating the VC bugs that are holding you back here, I could file compiler bugs.

One of the main issues is that Microsoft follows the C89 standard when
it comes to the preprocessor. That has been superseded in both C and C++
a long time ago. So what does one say to a company that follows a very
old standard and defends the inadequacy of its preprocessor by saying
that according to that standard they are processing preprocessor symbols
correctly? It is like a company saying that they implement a computer
language as it stood 23 years ago and that, despite the fact that the
language has changed drastically since then, they are proud not to keep up
with those changes.

A second issue is that quite often simple test cases are unavailable
since the sort of failures exhibited by Microsoft's preprocessor occur
when one is attempting to use the facilities of the Boost PP library and
that is often at the cutting edge of what one can do with the
preprocessor, aside from Chaos which assumes a 100% compliant preprocessor.

I will try to provide some simple test cases sometime in the coming week
or weekend, although perhaps Paul can jump in here and do so more easily.

Brian Wood

Apr 22, 2012, 11:05:26 PM4/22/12
to bo...@lists.boost.org
Paul Mensonides:

> As some sort of technological preview? It is not portable enough to be
> used within Boost, though perhaps a slew of regression failures will cause
> that to change (unlikely). <rant>MS, in particular, is too busy taking it
> upon themselves to "ensure C++ is relevant" to prioritize implementing the
> language.</rant> Also, one of the principles of Chaos is to have
> absolutely zero compiler workarounds, and that would have to stay if it
> was ever part of Boost.


Given your desire to avoid compiler workarounds, an on line code
generator may be helpful. One of the happy results of going on line is
kissing compiler workarounds good-bye.

I've worked on the distribution of code generated by an on line code
generator -- http://webEbenezer.net/build_integration.html . You're
welcome to use that as a starting point if you like.

Shalom,
Brian Wood
Ebenezer Enterprises
http://webEbenezer.net

Paul Mensonides

Apr 23, 2012, 1:04:32 AM4/23/12
to bo...@lists.boost.org
On Sun, 22 Apr 2012 21:51:06 -0400, Edward Diener wrote:

> On 4/22/2012 8:43 PM, Stephan T. Lavavej wrote:

>> If someone could prepare *minimized* test cases demonstrating the VC
>> bugs that are holding you back here, I could file compiler bugs.
>
> One of the main issues is that Microsoft follows the C89 standard when
> it comes to the preprocessor. That has been superseded in both C and C++
> a long time ago. So what does one say to a company that follows a very
> old standard and defends the inadequacy of its preprocessor by saying
> that according to that standard they are processing preprocessor symbols
> correctly ? It is like a company saying that they implement a computer
> language of 23 years ago and despite the fact that the language has
> changed drastically since that time, they are proud to not keep up with
> those changes.

It isn't that. VC++ doesn't even implement the preprocessor correctly for
C89. Essentially, the macro expansion algorithm appears to be
fundamentally broken. One small example:

#define A() 123

#define B() ()

A B() // should expand to A() (and does)

#define C() A B()

C() // should *still* expand to A() (but instead expands to 123)

The entire methodology is borked here. Leaving blue-paint aside (and
argument processing aside), macro expansion should work like a stream
editor.

+ + C ( ) + +
^

+ + C ( ) + +
^

+ + C ( ) + +
^

+ + A B ( ) + +
^

THIS is "rescanning and further replacement." Rescanning and further
replacement is *not* a recursive process that occurs "in the replacement
list."

+ + A B ( ) + +
^

+ + A ( ) + +
^ // note: not at 'A', scanning doesn't go backward

+ + A ( ) + +
^

+ + A ( ) + +
^

+ + A ( ) + +
^

+ + A ( ) + +
^

In the above, everything to the left of the caret is output and everything
to the right is input.

For the top level source, that output is the underlying language parser
(or preprocess-only output). The only time it isn't is when an argument
to a macro--which is *actually used* at least once in the replacement list
without being an operand of # or ##--is processed. In that case, a
recursive scan of the sequence of tokens making up the argument is
necessary. In that case, the output of the scan is redirected to a buffer
(depending on exact implementation methodology) for future substitution
into a copy of the replacement list.

Things get more complicated when dealing with blue paint and disabling
contexts. A disabling context is a range associated with a macro name
(that exists only during a scan for macro expansion--i.e. if that scan is
redirected to buffer for substitution, the result in the buffer does not
contain annotations representing the disabling context). If an identifier
token which matches the macro name is found in the range during the scan,
it is "painted blue." A painted token cannot be used as the name of a
macro in a macro invocation. However, while the context that causes blue
paint is transient, blue paint on a token is not. It remains even when
the scan output is redirected for substitution. Note that substitution
ultimately means that the sequence of tokens in the argument are going to
get scanned again. When that happens, the contexts from the first scan
are gone, but any blue paint on particular tokens is not.

As a concrete example,

#define A(x) #x B(x) C(x) D(x)
#define B(x) x
#define C(x)
#define D(x) #x A(x) B(x)

A(B(1))

Where M' = painted M token, { ... } is the hideset H (which may be flags
in the symbol table), +M is a virtual token which means H := H | M, and -M
is a virtual token which means H := H \ M...

[begin]
A ( B ( 1 ) ) EOF
^ {}
[begin]
B ( 1 ) EOF
^ {}
[begin]
1 EOF
^ {}
1 EOF
^ {}
[end]
+B 1 -B EOF
^ {}
1 -B EOF
^ { B }
1 -B EOF
^ { B }
1 EOF
^ {}
[end]
+A "B(1)" B ( 1 ) C ( 1 ) D ( 1 ) -A EOF
^ {}
"B(1)" B ( 1 ) C ( 1 ) D ( 1 ) -A EOF
^ { A }
"B(1)" B ( 1 ) C ( 1 ) D ( 1 ) -A EOF
^ { A }
[begin]
1 EOF
^ { A }
1 EOF
^ { A }
[end]
"B(1)" +B 1 -B C ( 1 ) D ( 1 ) -A EOF
^ { A }
"B(1)" 1 -B C ( 1 ) D ( 1 ) -A EOF
^ { A, B }
"B(1)" 1 -B C ( 1 ) D ( 1 ) -A EOF
^ { A, B }
"B(1)" 1 C ( 1 ) D ( 1 ) -A EOF
^ { A }
"B(1)" 1 +C -C D ( 1 ) -A EOF
^ { A }
"B(1)" 1 -C D ( 1 ) -A EOF
^ { A, C }
"B(1)" 1 D ( 1 ) -A EOF
^ { A }
[begin]
1 EOF
^ { A }
1 EOF
^ { A }
[end]
"B(1)" 1 +D "1" A ( 1 ) B ( 1 ) -D -A EOF
^ { A }
"B(1)" 1 "1" A ( 1 ) B ( 1 ) -D -A EOF
^ { A, D }
"B(1)" 1 "1" A ( 1 ) B ( 1 ) -D -A EOF
^ { A, D }
"B(1)" 1 "1" A' ( 1 ) B ( 1 ) -D -A EOF
^ { A, D }
"B(1)" 1 "1" A' ( 1 ) B ( 1 ) -D -A EOF
^ { A, D }
"B(1)" 1 "1" A' ( 1 ) B ( 1 ) -D -A EOF
^ { A, D }
"B(1)" 1 "1" A' ( 1 ) B ( 1 ) -D -A EOF
^ { A, D }
[begin]
1 EOF
^ { A, D }
1 EOF
^ { A, D }
[end]
"B(1)" 1 "1" A' ( 1 ) +B 1 -B -D -A EOF
^ { A, D }
"B(1)" 1 "1" A' ( 1 ) 1 -B -D -A EOF
^ { A, B, D }
"B(1)" 1 "1" A' ( 1 ) 1 -D -A EOF
^ { A, D }
"B(1)" 1 "1" A' ( 1 ) 1 -A EOF
^ { A }
"B(1)" 1 "1" A' ( 1 ) 1 EOF
^ {}
[end]

VC++ gets the result correct here (though it doesn't get it by the correct
method).

The algorithm is bizarre in relation to typical code interpreters, but it
is actually shockingly simple. The only times that it gets tricky are
when a -M occurs within the sequence of tokens that makes up an invocation
which I glossed over above (because they don't occur in the above). E.g.
scenarios like:

MACRO -M ( ARG )

or

MACRO ( ARG, -M ARG )

In both of these cases, the -M must get executed prior to MACRO(...) being
replaced by MACRO's replacement list (and resuming scanning).

The first of these is not very tricky--and both boost-pp and chaos-pp
require this. The algorithm merely has to execute the virtual token when
finding the left parentheses. The second is more tricky (and I don't
think even chaos-pp requires this one to work correctly) because the
algorithm has to *correctly* deal with possibly four different scenarios:

1. The argument is not used (any -M must still be executed therein).
2. The argument is used as an operand of # (any -M must still be executed
therein).
3. The argument is used as an operand of ## (any -M must still be executed
and tokens must be painted as required).
4. The argument is used as a non-operand of # or ## (recursive scan)

Worse, 2, 3, and 4 can all be used at the same time:

#define A(x) #x x ## x x

However, in 2 and 3, you cannot encounter a +M, so you can just push a +M
into a "reset" stack for every -M you find and then execute the reset
stack prior to processing an argument (provided the recursive scan case is
done last).

VC++'s macro expansion algorithm appears to be a combination of
optimizations and hacks that yield correct results for simple cases, but
often yields incorrect results where the input requires the actual rules
to be followed. Obvious examples are the way __VA_ARGS__ is "optimized"
into a single argument and the extra scan applied in the following causing
it to yield 123 instead of A()--which is presumably a hack of some kind.

#define A() 123
#define B() ()
#define C() A B()

C()

However, these are only the tip of the iceberg. I have a suspicion that
the algorithm used is actually totally unlike what it is supposed to be.
I.e. I'd guess it's a re-write-the-algorithm rather than a fix-a-few-bugs
scenario.

> A second issue is that quite often simple test cases are unavailable
> since the sort of failures exhibited by Microsoft's preprocessor occur
> when one is attempting to use the facilities of the Boost PP library and
> that is often at the cutting edge of what one can do with the
> preprocessor, aside from Chaos which assumes a 100% compliant
> preprocessor.

Edward is correct here. The problems often occur in a combinatorial
fashion. This is almost certainly due to the actual algorithm operating
in a way very foreign to how it should, so when simple examples are used,
the output appears correct except that you cannot see the algorithm state
(i.e. hidesets, blue paint, etc.) which is often *not* correct. So, when
things are used in combination, what appears to be correct in isolation is
actually carrying forward with algorithm state that affects further
processing incorrectly. These types of scenarios are inordinately
difficult to find workarounds for, and any workarounds are inherently
unstable (because you don't know exactly what they are "fixing"). All it
really takes is some different input or different combination and things
might break. Chaos has over 500 primary interface macros and over two
thousand primary and secondary interface macros (i.e. SEQ_FOR_EACH is
primary, SEQ_FOR_EACH_S is secondary (derivative))--not to mention the
number of implementation macros (which is probably in the 10000 range,
though I haven't counted). Full circle, even if it was a matter of
testing permutations and combinations of primary interfaces used only
once, eΓ(500 + 1, 1) is still about 3.3 x 10^1134 possible combinations.
Aka, not tenable.

The way boost-pp deals with this is essentially by having a complete
separate implementation for MSVC which tries to do everything it can to
force things to happen when they should or before it is too late--which
frequently doesn't work.

<rant>
Since I've been doing this pp-stuff, which is well over a decade, I've
seen numerous compiler and tool vendors fix or significantly improve their
preprocessors--several of which have informally consulted with me.
However, I have seen absolutely zero effort by MS. The stock response for
a bug report is "closed as won't fix" for anything related to the
preprocessor. I've heard--sometimes on this list--numerous people who
represent MS say that "they'll see what they can do," but nothing ever
happens. Fix your preprocessor, MS, which will probably gain you nothing
financially, and stop implementing extensions to the language in lieu of
implementing the language itself, and I'll actually maybe believe that you
care about C++, standards, and ethics.
</rant>

Regards,
Paul Mensonides

Paul Mensonides

Apr 23, 2012, 1:12:00 AM4/23/12
to bo...@lists.boost.org
On Sun, 22 Apr 2012 22:05:26 -0500, Brian Wood wrote:

> Given your desire to avoid compiler workarounds, an on line code
> generator may be helpful. One of the happy results of going on line is
> kissing compiler workarounds good-bye.
>
> I've worked on the distribution of code generated by an on line code
> generator -- http://webEbenezer.net/build_integration.html . You're
> welcome to use that as a starting point if you like.

Interesting, but not necessary. If one is willing to interfere with the
tool chain and fix a slew of errors in Windows headers (in particular),
one can just integrate Wave or some other preprocessor into the tool chain.

Regards,
Paul Mensonides

lcaminiti

Apr 23, 2012, 10:01:41 AM4/23/12
to bo...@lists.boost.org

Paul Mensonides wrote
>
> Hello all.
>
> In the past, I've said several times that sequences are superior to
> every other data structure when dealing with a collection of elements
> (tuples are better in some very specific contexts).
>

Something that I found missing from Boost.PP docs is computational
complexity (in terms of number of required macro expansions?) of the
different algorithms for the different data structures. I take it that
sequences have the best performance (i.e., smallest pp time because
smallest number of macro expansions -- is that true?) but is that true for
_all_ sequence algorithms when compared with other data structures? or are
some tuple/array algorithms faster than the pp-seq equivalents?

For example, I would have found such an analysis of Boost.PP computational
complexity useful when programming complex pp macros (i.e., Boost.Contract).
I've ended up using pp-lists everywhere and that was probably a very bad
choice (and in fact Boost.Contract compilation times are huge :( ). I've
used pp-lists just because they handle nil "for free" but I could have used a
pp-sequence (nil)(elem1)(elem2)... -- with a bit more implementation work to
deal with the leading (nil) -- had I seen evidence that the pp-sequence
implementation would have been truly faster.


> One of those reasons is that, if a library is built for it, non-unary (or
> even variadic) elements are natural because the structure of the sequence
> itself encapsulates each element.
>
> (int)(std::pair<int, int>)(double)


This would be very convenient. I programmed a case-by-case version of these
in Boost.Contract but not a general variadic extension of pp-sequence.



> Another reason is that, if a library is built for it and if placemarkers
> ala C99/C++11 are available, there is a natural representation of the nil
> sequence:
>
> /**/
>

This would also be very convenient. But also automatically handling `(nil)`
for nil sequences would be very convenient (and still faster than
pp-lists?).

As for adding Chaos to Boost.PP, I asked "why not" before and the issues
with MSVC pp came about. Is this an opportunity to fix MSVC's pp? ;) (Some
bugs have already been reported but unfortunately resolved as "won't fix".)

Thanks.
--Lorenzo


--
View this message in context: http://boost.2283326.n4.nabble.com/preprocessor-Sequences-vs-All-Other-Data-Structures-tp4576239p4580498.html
Sent from the Boost - Dev mailing list archive at Nabble.com.

Paul Mensonides

Apr 23, 2012, 12:50:41 PM4/23/12
to bo...@lists.boost.org
On Mon, 23 Apr 2012 07:01:41 -0700, lcaminiti wrote:

> Paul Mensonides wrote
>>
>> Hello all.
>>
>> In the past, I've said several times that sequences are superior to
>> every other data structure when dealing with a collection of elements
>> (tuples are better in some very specific contexts).
>>
>>
> Something that I found missing from Boost.PP docs is computational
> complexity (in terms of number of required macro expansions?) of the
> different algorithms for the different data structures. I take it that
> sequences have the best performances (i.e., smallest pp time because
> smallest number of macro expansions -- is that true?) but is that true
> for _all_ sequence algorithms when compared with other data structures?
> or are some tuple/array algorithms faster than the pp-seq equivalents?

Assuming amortized O(1) name-lookup, computational complexity with macro
expansion is really about the number of tokens (and whitespace
separations) scanned (a particular sequence of tokens is sometimes scanned
more than once) and less about number of macro replacements per se. In
terms of raw performance, nesting of macro arguments (i.e. M(M(M(x))))
also plays a significant role because each argument (provided it is used
in the replacement list without being an operand of # or ##) causes a
recursive invocation of the scanner (along with whatever memory and cycle
requirements that implies), but also because those recursions are not tail
recursive. The output of those scans must be cached for later
substitution. Generally speaking,

#define A(x) B(x)
#define B(x) C(x)
#define C(x) x

A(123)

is more efficient than something like

C(C(C(123)))

because the former doesn't build up as much cache (in the typical case)
and has fewer caches alive at any one point. For an extremely small case
like the above, however, the latter might actually be faster because the
former builds up more hideset bookkeeping (virtual tokens or whatever
other mechanism is used).

Boost.Preprocessor doesn't provide that information because it is
extremely difficult to derive and because it is wildly different for
different compilers.

Chaos doesn't attempt to do it either. Chaos does usually indicate
exactly how many recursion steps are required given some input, but that's
a measure relative to a different constraint (not performance, per se, but
a type of resource). (The Chaos docs actually abuse big-O because I
couldn't decide whether I wanted exact forms or big-O, so the docs currently
have (e.g.) O(2n) instead of just O(n). I actually need to change those
formulas to be auto-generated from LaTeX or use MathML.)

> For example, I would have found such an analysis of Boost.PP
> computational complexity useful when programming complex pp macros
> (i.e., Boost.Contract). I've ended up using pp-lists everywhere and that
> was probably a very bad choice (and in fact Boost.Contract compilation
> times are huge :( ). I've used pp-list just because they handle nil "for
> free" but I could have used a pp-sequence (nil)(elem1)(elem2)... -- with
> a bit more implementation work to deal with the leading (nil) -- had I
> saw evidence that the pp-sequence implementation would have been truly
> faster.

Lists in Boost.Preprocessor are the only data structure type that can have
a nil state. However, it is worse in nearly every (practical) way that
comes to mind. As you mention, in client code, one can simulate nil for
other data types in a variety of ways. Perhaps the most efficient would
be something along the lines of:

bseq: (0)(~) or (1)(seq)

// GET bseq
#define GET(b) GET_ ## b
#define GET_0(x)
#define GET_1(seq) seq

// IF_CONS bseq (t, f...)
#define IF_CONS(b) IF_CONS_A_ ## b
#define IF_CONS_A_0(x) IF_CONS_B_0
#define IF_CONS_A_1(x) IF_CONS_B_1
#define IF_CONS_B_0(t, ...) __VA_ARGS__
#define IF_CONS_B_1(t, ...) t

#define EAT(...)

#define A(bseq) IF_CONS bseq (B, EAT) (GET bseq)
#define B(seq) seq !

? A( (0)(~) )
? A( (1)((a)(b)(c)) )

> One of those reasons is that, if a library is built for it, non-unary
> (or even variadic) elements are natural because the structure of the
> sequence itself encapsulates each element.
>
> (int)(std::pair<int, int>)(double)
>
>
> This would be very convenient. I programmed a case-by-case version of
> these in Boost.Contract but not a general variadic extension of
> pp-sequence.

A lot of stuff would have to change for the boost-pp to deal with these.

>> Another reason is that, if a library is built for it and if
>> placemarkers ala C99/C++11 are available, there is a natural
>> representation of the nil sequence:
>>
>> /**/
>>
>>
> This would also be very convenient. But also automatically handling
> `(nil)` for nil sequences would be very convenient (and still faster
> than pp-lists?).

Yes, but it is probably better to predicate it as above or similarly.
(Reminds me a bit of the (bare-bones) lambda calculus.)

> As for adding Chaos to Boost.PP, I asked "why not" before and the issues
> with MVSC pp came about. Is this an opportunity to fix MSVC's pp? ;)
> (Some bugs have already been reported but unfortunately resolved as
> "won't fix".)

That opportunity has been around for years...

Regards,
Paul Mensonides

paul Fultz

Apr 23, 2012, 5:08:02 PM4/23/12
to bo...@lists.boost.org

>> I understand that Chaos won't implement workarounds. I was thinking more
>> of forking chaos, and trying to implement it in msvc. Would that be a
>> possibility? And with your knowledge of Chaos's implementation and msvc
>> limitations, do you even think that it would even be remotely feasible?
>
> I don't think it's feasible.  You *might* be able to emulate the
> base interface over a completely separate (and much larger)
> implementation.  However, you won't be able to emulate the extensibility
> model.  For example, one of the defining things about Chaos is its ability
> to generalize recursion (not referring to the sequence tricks that I
> started this thread with).

Well, actually this code here will work in MSVC:

#include <boost\preprocessor.hpp>
#define PP_EMPTY(...)
#define PP_EXPAND(...) __VA_ARGS__
#define PP_WHEN(x) BOOST_PP_IF(x, PP_EXPAND, PP_EMPTY)
#define PP_DEFER(m) m PP_EMPTY()
#define PP_OBSTRUCT(m) m PP_EMPTY PP_EMPTY()()

#define EVAL(...) \
    A(A(A(__VA_ARGS__))) \
    /**/
#define A(...) B(B(B(__VA_ARGS__)))
#define B(...) C(C(C(__VA_ARGS__)))
#define C(...) D(D(D(__VA_ARGS__)))
#define D(...) E(E(E(__VA_ARGS__)))
#define E(...) F(F(F(__VA_ARGS__)))
#define F(...) __VA_ARGS__

#define REPEAT(count, macro, ...) \
    PP_WHEN(count)( \
        REPEAT_INDIRECT PP_OBSTRUCT()()( \
            BOOST_PP_DEC(count), macro, __VA_ARGS__ \
        ) \
        macro PP_OBSTRUCT()( \
            BOOST_PP_DEC(count), __VA_ARGS__ \
        ) \
    ) \
    /**/
#define REPEAT_INDIRECT() REPEAT

#define PARAM(n, ...) BOOST_PP_COMMA_IF(n) __VA_ARGS__ ## n

EVAL( REPEAT(100, PARAM, class T) ) // class T0, class T1, ... class T99

#define FIXED(n, ...) BOOST_PP_COMMA_IF(n) __VA_ARGS__
#define TTP(n, ...) \
    BOOST_PP_COMMA_IF(n) \
        template<REPEAT(BOOST_PP_INC(n), FIXED, class)> class __VA_ARGS__ ## n \
    /**/

EVAL( REPEAT(3, TTP, T) )
// template<class> class T0,
// template<class, class> class T1,
// template<class, class, class> class T2

Of course, I had to add an extra level of scans in the EVAL macro, otherwise it
would stop at 62 repetitions. This will now handle up to 182 repetitions. Of
course, a more efficient approach is taken in Chaos, but perhaps the
recursion backend could be implemented in MSVC, though it will be slower,
which is ok since I don't develop on MSVC, I just need it to build on MSVC.
I will look into this further.

Thanks,
Paul Fultz II

Paul Mensonides

Apr 23, 2012, 7:19:28 PM4/23/12
to bo...@lists.boost.org

I actually looked at doing something like this with Chaos, but there is no
way to stop the exponential number of scans.

That's a severe problem, but even if it weren't, the model is different
as the above is essentially just an uncontrolled continuation machine.
Call-and-return semantics are destroyed as there is no real notion of
yielding a result *now* unless you make A reentrant, but then you end up
getting a *way* higher number of scans--either because you have say 16
separate sets like the above or you have the above go at least 32 "deep."
For example, when two things need to occur during the exact same scan
because the next scan is going to invoke something that involves both
results (anything with half-open parentheses, for sure, but even
stuff like conditionals). This, in turn, leads to continuations rather
than call-and-return semantics, but then one should just make a
continuation machine ala the one in Chaos or Vesa's original one in Chaos'
sister project Order. In the right context, continuations are not
necessarily a bad thing, but they are definitely a different extensibility
model.

Regards,
Paul Mensonides

Paul Mensonides

Apr 23, 2012, 9:23:20 PM4/23/12
to bo...@lists.boost.org
> On Mon, 23 Apr 2012 14:08:02 -0700, paul Fultz wrote:

>> Of course, a more efficient approach is done in Chaos,
>> but perhaps the recursion backend could be implemented in MSVC, but it
>> will be slower, which is ok since I don't develop on MSVC, I just need
>> it to build on MSVC. I will look into this further.

Both Boost.Preprocessor and Chaos are open-source distributed under the
highly nonrestrictive Boost license, so you are free to fork off of it as
you please. You can do that now, regardless of whether Chaos is part of
Boost or not.

However, the root problem here is VC++ itself. VC++ needs to stop
creating the problem rather than have every client of VC++ working around
the problem. In the time it would take to implement Chaos-level
functionality (or at least approximate it) using VC++, I could literally
write a preprocessor. Politics aside, it is far more work to do the
workarounds in the clients--in this case, even with just one client VC ->
PP-L, than it would be to fix VC++... even if that fix involves re-writing
the macro expansion algorithm entirely. If VC++ won't do it, one is
better off changing the root by using a better toolchain or integrating a
better preprocessor into the toolchain (such as Wave).

Only a couple of days into discussing adding a no-workaround library like
Chaos to Boost, and we're already branching to apply workarounds... And
these are my main issues with moving Chaos to Boost--no policy to govern
(i.e. disallow) the application of workarounds and no means of stopping the
fragmentation implied by branching the code. The current situation is
actually better than that. You have a portable subset of functionality
(Boost.Preprocessor) and a cutting edge, but non-portable, superset of
functionality (chaos-pp). If (unaltered toolchain) portability is
required, then use the portable subset. If not (i.e. you are willing and
able to change toolchains or integrate another tool into the toolchain),
then use the superior superset.

Despite my ranting, VC++ as a whole is worlds away from where it was circa
VC6. However, this appears (to me, at least) to be driven entirely by the
bottom line--which only prioritizes based on corporate image and user base
size (which is insignificantly small for pp metaprogramming as long as
Boost continues to provide workarounds for everything). The bottom line
is important, but it is not the only thing that is important. I'd rather
have a complete and correct language/library implementation than "auto-
vectorization" or AMP (neither of which I'm saying is either good or bad
though I'm very much *not* a fan of language extensions and language
subsetting). The priorities should be:

1A. Correctly interpret correct input or fail. (REQ)
1B. Generate correct output. (REQ)
2. Correctly interpret all correct input. (REQ)
3. Improve performance of output. (QOI)
4A. Improve diagnostic quality. (QOI)
4B. Improve performance of toolchain. (QOI)
5. Create various tools such as IDEs and libraries. (AUX)

The problem with VC++ is that it doesn't appear to prioritize (1A) enough
and (2) appears way down the priority list if it appears at all. (5)
should not really involve the toolchain as far as (1) and (2) are
concerned. Instead, what appears to be the case is that (5) is hammered
(and usually in a marketing way) in lieu of (2). Tied into that is the
long release cycle that incorporates everything under the sun (.Net, VS,
C#--and, no, I don't care about C++/CLI or whatever it is currently
called). Herb has made indications that this may not continue as it has,
but I'll believe that when I see it.

Currently, there are other toolchains available on Windows that are more
successful at (1A) and (2), though possibly not always at (1B) (I haven't
used VC++ significantly for some time, so I am not aware of the number of
output bugs). Regardless of degree of success, there are other toolchains
available on Windows whose *order* of priorities more closely reflects the
above. For me to even consider providing VC++ workarounds in any code
that I write, not only does VC++ have to achieve a reasonable level of
success at (1) and (2), I also have to believe that MS's order of
priorities going forward is sound--which I don't. MS has dug itself into
that hole, and to me the presence of stuff like AMP--which may well be a
good research direction--speaks volumes about the prioritization within MS.

Now, it may be that my perception is wrong or that things have changed,
but my perception and principles are ultimately what I use to make
decisions. If more software development operated according to similar
principles, the cost of initial software would have been higher, but the
cost of software (relative of course to what the software does) would be
less in the long run (which, by now, would be now). VC's preprocessor
would have already been fixed, autotools would not exist, etc.

Related to the above, if backward compatibility must be broken, break it
as soon as possible.

Stephan T. Lavavej

unread,
Apr 24, 2012, 12:22:07 AM4/24/12
to bo...@lists.boost.org
[Paul Mensonides]

> It isn't that. VC++ doesn't even implement the preprocessor correctly for
> C89. Essentially, the macro expansion algorithm appears to be
> fundamentally broken. One small example:
> #define A() 123
> #define B() ()
> A B() // should expand to A() (and does)
> #define C() A B()
> C() // should *still* expand to A() (but instead expands to 123)

I have filed this as DevDiv#407151 "VC's preprocessor considered harmful (to Boost.Preprocessor)" in our internal database, including a copy of your entire mail.

If it were within my power to fix this for you, I would - but I am not a compiler dev.

Stephan T. Lavavej
Visual C++ Libraries Developer

Paul A. Bristow

unread,
Apr 24, 2012, 4:38:52 AM4/24/12
to bo...@lists.boost.org
> -----Original Message-----
> From: boost-...@lists.boost.org [mailto:boost-...@lists.boost.org] On Behalf Of Stephan T.
> Lavavej
> Sent: Tuesday, April 24, 2012 5:22 AM
> To: bo...@lists.boost.org
> Subject: Re: [boost] [preprocessor] Sequences vs. All Other Data Structures
>
> [Paul Mensonides]
> > It isn't that. VC++ doesn't even implement the preprocessor correctly
> > for C89. Essentially, the macro expansion algorithm appears to be
> > fundamentally broken. One small example:
> > #define A() 123
> > #define B() ()
> > A B() // should expand to A() (and does) #define C() A B()
> > C() // should *still* expand to A() (but instead expands to 123)
>
> I have filed this as DevDiv#407151 "VC's preprocessor considered harmful (to Boost.Preprocessor)" in our
> internal database, including a copy of your entire mail.
>
> If it were within my power to fix this for you, I would - but I am not a compiler dev.

Steven's action seems as helpful as he can be.

Paul Mensonides obviously feels strongly about this, but to many this may seem an obscure language
backwater.

Paul explains why it is not just a minor detail and is causing a lot of downstream grief.

(But I haven't understood whether there will be grief from existing macro code if Microsoft fixes it?
Is this the reason for their lack of enthusiasm?)

Is there anything Boost can do to support Paul (and Steven) by sending a collective message to
Microsoft, lest they dismiss this as one man's rant?

Paul

---
Paul A. Bristow,
Prizet Farmhouse, Kendal LA8 8AB UK
+44 1539 561830 07714330204
pbri...@hetp.u-net.com

lcaminiti

unread,
Apr 24, 2012, 8:42:09 AM4/24/12
to bo...@lists.boost.org

Paul A. Bristow-2 wrote
>
> Is there anything Boost can do to support Paul (and Steven) by sending a
> collective message to
> Microsoft, lest they dismiss this as one man's rant?
>

+1. I'd subscribe a "petition" for fixing major bugs in MSVC pp.

--Lorenzo



Edward Diener

unread,
Apr 24, 2012, 9:00:34 AM4/24/12
to bo...@lists.boost.org
On 4/24/2012 12:22 AM, Stephan T. Lavavej wrote:
> [Paul Mensonides]
>> It isn't that. VC++ doesn't even implement the preprocessor correctly for
>> C89. Essentially, the macro expansion algorithm appears to be
>> fundamentally broken. One small example:
>> #define A() 123
>> #define B() ()
>> A B() // should expand to A() (and does)
>> #define C() A B()
>> C() // should *still* expand to A() (but instead expands to 123)
>
> I have filed this as DevDiv#407151 "VC's preprocessor considered harmful (to Boost.Preprocessor)" in our internal database, including a copy of your entire mail.
>
> If it were within my power to fix this for you, I would - but I am not a compiler dev.

If you look at the Microsoft bug reporting page for preprocessor errors
you will see other problems with the preprocessor. A number of these are
basically the same problem with variadic macros, but others are just
basic erroneous processing of preprocessor input, and Microsoft's
response often is that it is the way the VC++ preprocessor works and
will not be changed even if it is erroneous.

Here are some links:

https://connect.microsoft.com/VisualStudio/feedback/details/385034/preprocessor-bug-two-tokens-may-be-joined-into-a-single-token

https://connect.microsoft.com/VisualStudio/feedback/details/380090/variadic-macro-replacement#details

https://connect.microsoft.com/VisualStudio/feedback/details/548580/c-preprocessor-expand-variadic-macro-in-wrong-order#details

https://connect.microsoft.com/VisualStudio/feedback/details/676585/bug-in-cl-c-compiler-in-correct-expansion-of-va-args

https://connect.microsoft.com/VisualStudio/feedback/details/718976/msvc-preprocessor-incorrectly-evaluates-variadic-macro-arguments-in-nested-macro-definition

https://connect.microsoft.com/VisualStudio/feedback/details/318940/macro-expansion-bug#details

https://connect.microsoft.com/VisualStudio/feedback/details/306850/c-preprocessor-broken-with-cpp-conditionals-in-code-using-a-macro#details

https://connect.microsoft.com/VisualStudio/feedback/details/288202/preprocessor-defines-macros-not-like-expected

https://connect.microsoft.com/VisualStudio/feedback/details/177051/invalid-preprocessor-function-like-macro-expansion#details

I know this is overkill to list these here, but it really is not done as
an attempt to embarrass you or Microsoft. Rather it is proof of what Paul
has said, that Microsoft has taken the view that it is not important to
fix the preprocessor. Even if Microsoft were concerned with breaking
backward compatibility, i.e. breaking code which uses the incorrect
implementation, in changing the preprocessor to actually be correct,
they could create a correct preprocessor and provide compiler switches to
enable either the backward compatibility mode or the correct preprocessor
mode.

Dave Abrahams

unread,
Apr 24, 2012, 6:53:03 AM4/24/12
to bo...@lists.boost.org

on Tue Apr 24 2012, "Paul A. Bristow" <pbristow-AT-hetp.u-net.com> wrote:

> Is there anything Boost can do to support Paul (and Steven) by sending a collective message to
> Microsoft, lest they dismiss this as one man's rant?

I've already gotten Herb's attention about it. Where it goes after
that, I can't say.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com

Stephan T. Lavavej

unread,
Apr 24, 2012, 3:06:19 PM4/24/12
to bo...@lists.boost.org
[Edward Diener]
> Here are some links:

Filed as DevDiv#407841 "VC's preprocessor considered harmful, Round 2".

Remember that I am a user of the compiler, like you (and the STL greatly stresses the compiler, although not as much as Boost). Compiler bugs affect me like anyone else - in fact, more than anyone else, except Boost. You do not need to convince me that they should be fixed. :->

STL

Stephan T. Lavavej

unread,
Apr 24, 2012, 3:46:15 PM4/24/12
to bo...@lists.boost.org
[Paul A. Bristow]
> But I haven't understood if there will be grief from existing macro code if the Microsoft fix it?

Probably (VC preprocesses billions of lines of code). But if I were a compiler dev, I'd deal with it with pragmas (superior to compiler options, because a header could push/pop requests for preprocessor conformance).

> Is there anything Boost can do to support Paul (and Steven) by sending a collective message to
> Microsoft, lest they dismiss this as one man's rant?

For this and other bugs, it is helpful to hear:

"I maintain Boost.Foobar"/"I maintain Product X, with Y users and Z revenue" and "this bug is bad/really bad/kitten-vaporizing" (ideally, quantify how many kittens are vaporized)

In the absence of such information, we try to figure out how severe a bug is, which factors into its priority (along with the cost of fixing it, available devs, etc.). Severity is usually pretty obvious: silent bad codegen is worse than rejects-valid, which is worse than accepts-invalid, etc. But sometimes a bug looks deceptively innocuous, which is where additional info can help.

For example, I filed DevDiv#166326 "N3276: decltype v1.1", about a subtle corner case in a new-to-VC10 feature. Ordinarily, that sounds like low priority. (EDG and GCC's lack of support for this at the time further decreased the priority.) But the info that (a) this is a huge obstacle to implementing result_of (for bonus points, argued by a WG21 paper), (b) Boost (i.e. Eric Niebler) really wants it, and (c) the STL really wants it too, pushed it "over the bar" and it got fixed. (Which later made me super happy as I was going through customer-reported STL issues and found that one was dependent on decltype v1.1.)

One note, though - commenting on *resolved* Connect bugs generally doesn't do any good. It's not obvious, but we aren't notified of Connect comments. Comments on active bugs will typically be looked at when the bug is resolved, but we have no reason to look at already-resolved bugs in our database (not even I have time for that, which is why I resolve Connect bugs with my E-mail address).

STL

paul Fultz

unread,
Apr 24, 2012, 6:32:45 PM4/24/12
to bo...@lists.boost.org


>>> I don't think it's feasible.  You *might* be able to emulate the base
>>> interface over a completely separate (and much larger) implementation. 
>>> However, you won't be able to emulate the extensibility model.  For
>>> example, one of the defining things about Chaos is its ability to
>>> generalize recursion (not referring to the sequence tricks that I
>>> started this thread with).

The more I look into this, the more it looks feasible to implement in MSVC.
Here is a piece of code that implements a FOR macro using a recursion
backend, like what is found in Chaos, and all it took was applying some
simple workarounds:

#include <boost/preprocessor.hpp>
#include <boost/preprocessor/detail/is_nullary.hpp>

#define PP_EMPTY(...)
#define PP_EXPAND(...) __VA_ARGS__
#define PP_WHEN(x) BOOST_PP_IF(x, PP_EXPAND, PP_EMPTY)
#define PP_DEFER(m) m PP_EMPTY()
#define PP_OBSTRUCT(m) m PP_EMPTY PP_EMPTY()()

#define STATE() BOOST_PP_AUTO_REC(IS_VALID, 16)
#define NEXT(s) BOOST_PP_INC(s)
#define IS_VALID(s) \
    BOOST_PP_IS_NULLARY(BOOST_PP_CAT(EVAL_, s)(()))
#define EVAL_S(s) BOOST_PP_CAT(EVAL_, s)
#define EVAL EVAL_S(STATE())

#define EVAL_1(...)   EVAL_RET_1((__VA_ARGS__))
#define EVAL_2(...)   EVAL_RET_2((__VA_ARGS__))
#define EVAL_3(...)   EVAL_RET_3((__VA_ARGS__))
#define EVAL_4(...)   EVAL_RET_4((__VA_ARGS__))
#define EVAL_5(...)   EVAL_RET_5((__VA_ARGS__))
#define EVAL_6(...)   EVAL_RET_6((__VA_ARGS__))
#define EVAL_7(...)   EVAL_RET_7((__VA_ARGS__))
#define EVAL_8(...)   EVAL_RET_8((__VA_ARGS__))
#define EVAL_9(...)   EVAL_RET_9((__VA_ARGS__))
#define EVAL_10(...)  EVAL_RET_10((__VA_ARGS__))
#define EVAL_11(...)  EVAL_RET_11((__VA_ARGS__))
#define EVAL_12(...)  EVAL_RET_12((__VA_ARGS__))
#define EVAL_13(...)  EVAL_RET_13((__VA_ARGS__))
#define EVAL_14(...)  EVAL_RET_14((__VA_ARGS__))
#define EVAL_15(...)  EVAL_RET_15((__VA_ARGS__))
#define EVAL_16(...)  EVAL_RET_16((__VA_ARGS__))
#define EVAL_RET_1(x)   EVAL_X_1  x
#define EVAL_RET_2(x)   EVAL_X_2  x
#define EVAL_RET_3(x)   EVAL_X_3  x
#define EVAL_RET_4(x)   EVAL_X_4  x
#define EVAL_RET_5(x)   EVAL_X_5  x
#define EVAL_RET_6(x)   EVAL_X_6  x
#define EVAL_RET_7(x)   EVAL_X_7  x
#define EVAL_RET_8(x)   EVAL_X_8  x
#define EVAL_RET_9(x)   EVAL_X_9  x
#define EVAL_RET_10(x)  EVAL_X_10 x
#define EVAL_RET_11(x)  EVAL_X_11 x
#define EVAL_RET_12(x)  EVAL_X_12 x
#define EVAL_RET_13(x)  EVAL_X_13 x
#define EVAL_RET_14(x)  EVAL_X_14 x
#define EVAL_RET_15(x)  EVAL_X_15 x
#define EVAL_RET_16(x)  EVAL_X_16 x
#define EVAL_X_1(...)   __VA_ARGS__
#define EVAL_X_2(...)   __VA_ARGS__
#define EVAL_X_3(...)   __VA_ARGS__
#define EVAL_X_4(...)   __VA_ARGS__
#define EVAL_X_5(...)   __VA_ARGS__
#define EVAL_X_6(...)   __VA_ARGS__
#define EVAL_X_7(...)   __VA_ARGS__
#define EVAL_X_8(...)   __VA_ARGS__
#define EVAL_X_9(...)   __VA_ARGS__
#define EVAL_X_10(...)  __VA_ARGS__
#define EVAL_X_11(...)  __VA_ARGS__
#define EVAL_X_12(...)  __VA_ARGS__
#define EVAL_X_13(...)  __VA_ARGS__
#define EVAL_X_14(...)  __VA_ARGS__
#define EVAL_X_15(...)  __VA_ARGS__
#define EVAL_X_16(...)  __VA_ARGS__

#define FOR(pred, op, macro, state) \
    FOR_S(STATE(), pred, op, macro, state) \
    /**/
#define FOR_S(s, pred, op, macro, state) \
    FOR_I( \
        PP_OBSTRUCT(), NEXT(s), \
        pred, op, macro, state \
    ) \
    /**/
#define FOR_INDIRECT() FOR_I
#define FOR_I(_, s, pred, op, macro, state) \
    PP_WHEN _(pred _(s, state))( \
        macro _(s, state) \
        EVAL_S(s) _(FOR_INDIRECT _()( \
            PP_OBSTRUCT _(), NEXT(s), \
            pred, op, macro, op _(s, state) \
        )) \
    ) \
    /**/

#define PRED(s, state) BOOST_PP_BOOL(state)
#define OP(s, state) BOOST_PP_DEC(state)
#define MACRO(s, state) state

EVAL(FOR(PRED, OP, MACRO, 10)) // 10 9 8 7 6 5 4 3 2 1

Paul A. Bristow

unread,
Apr 25, 2012, 7:08:23 AM4/25/12
to bo...@lists.boost.org
> -----Original Message-----
> From: boost-...@lists.boost.org [mailto:boost-...@lists.boost.org] On Behalf Of Stephan T.
> Lavavej
> Sent: Tuesday, April 24, 2012 8:46 PM
> To: bo...@lists.boost.org
> Subject: Re: [boost] [preprocessor] Sequences vs. All Other Data Structures
>
> [Paul A. Bristow]
> > But I haven't understood if there will be grief from existing macro code if the Microsoft fix it?
>
> Probably (VC preprocesses billions of lines of code). But if I were a compiler dev, I'd deal with it with
> pragmas (superior to compiler options, because a header could push/pop requests for preprocessor
> conformance).
>
> > Is there anything Boost can do to support Paul (and Steven) by sending
> > a collective message to Microsoft, lest they dismiss this as one man's rant?
>
> For this and other bugs, it is helpful to hear:
>
> "I maintain Boost.Foobar"/"I maintain Product X, with Y users and Z revenue" and "this bug is bad/really
> bad/kitten-vaporizing" (ideally, quantify how many kittens are vaporized)

I'm immensely relieved to know that Microsoft is so sensitive to issues involving vaporization of
kittens ;-)

Paul

PS I wouldn't put this issue in that category, but I think it is much more important than it might
seem.

---
Paul A. Bristow,
Prizet Farmhouse, Kendal LA8 8AB UK
+44 1539 561830 07714330204
pbri...@hetp.u-net.com







Paul Mensonides

unread,
Apr 25, 2012, 8:01:07 AM4/25/12
to bo...@lists.boost.org
On Wed, 25 Apr 2012 12:08:23 +0100, Paul A. Bristow wrote:

>> [mailto:boost-...@lists.boost.org] On Behalf Of Stephan T. Lavavej

>> "I maintain Boost.Foobar"/"I maintain Product X, with Y users and Z
>> revenue" and "this bug is bad/really
>> bad/kitten-vaporizing" (ideally, quantify how many kittens are
>> vaporized)

> PS I wouldn't put this issue in that category, but I think it is much
> more important than it might seem.

In terms of direct user base, the issue is not that big. In terms of
indirect user base (i.e. user's of other libraries in Boost which use
Boost.Preprocessor), the issue is larger, but in that case, it's an issue
that mostly affects Boost authors. Given the skill level of the typical
Boost author (and the inclination to innovate), it's a shame when a great
deal of time is wasted applying workarounds for compiler issues. It
increases the complexity of everything, and, in some cases, actually
affects interface. Such workarounds need not be preprocessor-related,
and, on the whole, the preprocessor is *not* as important as the core
language.

However, it is also not *nearly* as complex as the core language--and
therefore not nearly as difficult to do correctly. In fact, in the last
24 hours, I implemented a macro expansion algorithm that, AFAICT, works
flawlessly--even in all the corner cases. (Grain of salt... I haven't
tested it very well, however, and it doesn't have a lexer or parser
attached to it, so it is not a "preprocessor"--it just takes a sequence of
preprocessing tokens and a set of existing macro definitions, and does the
macro replacement therein.)

The point of the above is that it is simply not that hard to do it right
if you actually know what doing it right is and take the time to do it.
Of course, doing it right also implies obeying the phases of
translation....

I would have less of a problem with all of this if it was just about
prioritization of fixes of implemented features and addition of missing
features of the language. But that doesn't appear to be case. Instead,
other extra-linguistic (and more easily marketable) things appear to take
priority away from the language itself. Not that those are necessarily
the same dev teams, but there are integration issues, nonetheless. I have
an even larger problem with "won't fix because that's the way it is, so
deal with it." As it currently stands, I will not use cl nor will I make sure
code is portable to VC++. Do I want variadic templates? Absolutely, but
while I'm waiting I can emulate variadic templates for VC++ if I have a
decent preprocessor. In fact, the preprocessor is the fundamental means
of dealing with compiler deficiencies and system discrepancies.

Regards,
Paul Mensonides

Paul Mensonides

unread,
Apr 25, 2012, 8:20:08 AM4/25/12
to bo...@lists.boost.org
On Tue, 24 Apr 2012 19:46:15 +0000, Stephan T. Lavavej wrote:

> [Paul A. Bristow]
>> But I haven't understood if there will be grief from existing macro
>> code if the Microsoft fix it?
>
> Probably (VC preprocesses billions of lines of code). But if I were a
> compiler dev, I'd deal with it with pragmas (superior to compiler
> options, because a header could push/pop requests for preprocessor
> conformance).

I actually don't think this would work very well. Remember we're talking
about macros here and, in particular, libraries which *define* macros for
users to use. To handle that in a way that scales with pragmas (or even
_Pragmas), you'd have to be "remembering" the pragma state of the
definition and changing the algorithm mid-replacement (which sounds like
more bugs, not less). Even if that worked, I wouldn't use pragmas either--
especially not pragmas to make the compiler do what it already should do
by default. The best case scenario, if you want to support legacy mode:
compiler switch to *disable* a compliant preprocessor. I.e. do the right
thing _by default_ (just like with for-loop scoping). That, of course,
means fixing the Windows headers (which, BTW, need to be printed, the
prints burned, and then deleted for their wanton macro abuse). If the
preprocessor works the way it should, most macro uses that currently work
will still work. The ones that won't will either be fancier than is
typical or rely on bad (or missing) implementation of the early phases of
translation (e.g. macro-formed comments).

-Paul

Dave Abrahams

unread,
Apr 26, 2012, 2:47:07 PM4/26/12
to bo...@lists.boost.org

on Mon Apr 23 2012, Paul Mensonides <pmenso57-AT-comcast.net> wrote:

> Only a couple of days into discussing adding a no-workaround library like
> Chaos to Boost, and we're already branching to apply workarounds... And
> these are my main issues with moving Chaos to Boost--no policy to govern
> (i.e. disallow) the application workarounds and no means of stopping the
> fragmentation implied by branching the code.

Hi Paul,

I believe we're going to be moving to a modularized structure in the
next few months, with individual per-library repositories. There should
be no problem with maintaining absolute hegemonous control over the
contents of your library's repository.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com


Paul Mensonides

unread,
Apr 26, 2012, 8:55:16 PM4/26/12
to bo...@lists.boost.org
On 4/25/2012 5:01 AM, Paul Mensonides wrote:

> However, it is also not *nearly* as complex as the core language--and
> therefore not nearly as difficult to do correctly. In fact, in the last
> 24 hours, I implemented a macro expansion algorithm that, AFAICT, works
> flawlessly--even in all the corner cases. (Grain of salt... I haven't
> tested it very well, however, and it doesn't have a lexer or parser
> attached to it, so it is not a "preprocessor"--it just takes a
sequence of
> preprocessing tokens and a set of existing macro definitions, and
does the
> macro replacement therein.)

Implementation of the above attached--probably lots of room for
improvement and optimization. Apologies for the mega function (in
particular--I didn't feel like refactoring) and the (non-portable) shell
color-coding (I wanted to see blue paint). I built it with g++ 4.7.0,
but I believe 4.6+ should also work.

$ g++ -std=c++11 -I $CHAOS_ROOT 1.cpp

If this wasn't just a toy, I'd probably replace the symbol table with a
trie (radix tree) populated during lexical analysis--especially with how
many common prefixes you get in C++. Also, I'd avoid the various string
comparisons and just compare iterators--which would faster and improve
locality. Also, for output to tty (i.e. preprocess only) there needs to
be state machine to judiciously insert whitespace to prevent erroneous
re-tokenization by later tool. Such generated whitespace can no longer
affect the semantics of the program (whereas before, it can, thanks to
stringizing).

As one can see from this code, a recursive call to the macro replacement
scanner only occurs in one spot: when preparing an actual argument for a
formal argument that is used in the replacement "in the open". Aside
from that, the blue paint, and context changes (implemented here via
virtual tokens), this is a classical stream editor. Macro invocations
are found in the stream, replaced (in the stream) by their replacement
lists (without macro replacement), and scanning resumes at the first
token from the replacement list.

The input is:

#define O 0 +X ## Y A
#define A() A() B
#define B() B() A
#define C(x, y) x ## y
#define D(...) D D ## __VA_ARGS__ __VA_ARGS__ #__VA_ARGS__
#define ID(...) __VA_ARGS__
#define P(p, x) p ## x(P

O()
A()()()
C(C,)
D(D(0, 1))
ID (
1
)
P(,ID)(1,2),P(1,2)))

Given no lexer/parser, the above is manually put into the code. The
output is:

0 <space> + XY <space> A ( ) <space> B <newline>
A ( ) <space> B ( ) <space> A ( ) <space> B <newline>
C <newline>
D <space> DD ( 0 , <space> 1 ) <space> D <space> D0 , <space> 1 <space>
0 , <space> 1 <space> "0, 1" <space> "D(0, 1)" <newline>
<space> <tab> 1 <space> <newline>
P ( 1 , 2 ) , 12 ( P ) <newline>

which is correct.

g++ outputs:

0 +XY A() B
A() B() A() B
C
D DD(0, 1) D D0, 1 0, 1 "0, 1" "D(0, 1)"
1
P(1,2),12(P)

which is correct.

cl outputs:

0 +XY A() B
A() B() A() B
C
D DD D0, 1 0, 1 "0, 1" D D0, 1 0, 1 "0, 1" "D(0, 1)"
1
12(P,12(P)

where the 4th and 6th are wrong.

wave outputs:

0 +XY A() B
A() B() A() B
C
D DD(0, 1) D D0, 1 0, 1 "0, 1" "D(0, 1)"
1
error: improperly terminated macro invocation or replacement-list
terminates in partial macro expansion (not supported yet): missing ')'

Regards,
Paul Mensonides

1.cpp

Stephan T. Lavavej

unread,
Apr 27, 2012, 12:17:37 AM4/27/12
to bo...@lists.boost.org
[Paul Mensonides]
> The best case scenario, if you want to support legacy mode:
> compiler switch to *disable* a compliant preprocessor. I.e. do the right
> thing _by default_ (just like with for-loop scoping).

Yeah, you're right. I take back what I said about pragmas, I see that they would be especially unworkable for macros.

> wanton macro abuse

I was inured to defending against macroized min/max, but we recently discovered that Yield was macroized too - fun.

STL

Paul Mensonides

unread,
Apr 27, 2012, 12:45:09 AM4/27/12
to bo...@lists.boost.org
On Fri, 27 Apr 2012 04:17:37 +0000, Stephan T. Lavavej wrote:

> [Paul Mensonides]
>> The best case scenario, if you want to support legacy mode: compiler
>> switch to *disable* a compliant preprocessor. I.e. do the right thing
>> _by default_ (just like with for-loop scoping).
>
> Yeah, you're right. I take back what I said about pragmas, I see that
> they would be especially unworkable for macros.
>
>> wanton macro abuse
>
> I was inured to defending against macroized min/max, but we recently
> discovered that Yield was macroized too - fun.

Not just min/max. Many many Windows API names too (i.e. A vs. W).
Obviously this is an issue for C without overloading, but there could be
separate C++ headers that have overloaded inline thunks. Otherwise, all
non-local macros (i.e. all except those locally defined, used, and then
undefined) should be prefixed -and- all uppercase.

Windows headers aren't special (neither are other library headers... I'm
looking at you Qt and Linux (e.g. "major")) and should obey the
appropriate conventions. The Windows headers are simply garbage from the
dark ages that is inflicted on way too many users. (The same actually
goes for Qt's ridiculous signals/slots moc mechanism.) Library authors:
get it out of your heads that your library is "special." In the
particular cases of Windows and Qt, the use of either should be an
insignificant part of any software. Otherwise, the software is
fundamentally poorly designed.

This is true even for preprocessor metaprogramming libraries. The
contract library referenced in this thread, Boost.Python, the MPL, etc.,
all use Boost.Preprocessor, but the use is ultimately incidental--an
implementation detail. By definition, a library simply cannot tell how
pervasive or important it will be in a client or in a client's client (and
so on). That is *one* of the things that makes library building hard--you
often don't know whether optimization in any particular case is necessary.

Regards,
Paul Mensonides

Paul Mensonides

unread,
Apr 27, 2012, 1:06:58 AM4/27/12
to bo...@lists.boost.org
On 4/26/2012 5:55 PM, Paul Mensonides wrote:
> Implementation of the above attached--probably lots of room for
> improvement and optimization. Apologies for the mega function (in
> particular--I didn't feel like refactoring) and the (non-portable) shell
> color-coding (I wanted to see blue paint). I built it with g++ 4.7.0,
> but I believe 4.6+ should also work.
>
> $ g++ -std=c++11 -I $CHAOS_ROOT 1.cpp

Updated attached. This one executes some more realistic tests.

(I must say having no lexer to tokenize input and no parser to process
directives makes input a pain in the ass.)

#define CAT(a, ...) PRIMITIVE_CAT(a, __VA_ARGS__)
#define PRIMITIVE_CAT(a, ...) a ## __VA_ARGS__

#define INC(x) PRIMITIVE_CAT(INC_, x)
#define INC_0 1
#define INC_1 2
#define INC_2 3
#define INC_3 4
#define INC_4 5
#define INC_5 6
#define INC_6 7
#define INC_7 8
#define INC_8 9
#define INC_9 9

#define DEC(x) PRIMITIVE_CAT(DEC_, x)
#define DEC_0 0
#define DEC_1 0
#define DEC_2 1
#define DEC_3 2
#define DEC_4 3
#define DEC_5 4
#define DEC_6 5
#define DEC_7 6
#define DEC_8 7
#define DEC_9 8

#define EXPR_S(s) PRIMITIVE_CAT(EXPR_, s)
#define EXPR_0(...) __VA_ARGS__
#define EXPR_1(...) __VA_ARGS__
#define EXPR_2(...) __VA_ARGS__
#define EXPR_3(...) __VA_ARGS__
#define EXPR_4(...) __VA_ARGS__
#define EXPR_5(...) __VA_ARGS__
#define EXPR_6(...) __VA_ARGS__
#define EXPR_7(...) __VA_ARGS__
#define EXPR_8(...) __VA_ARGS__
#define EXPR_9(...) __VA_ARGS__

#define SPLIT(i, ...) PRIMITIVE_CAT(SPLIT_, i)(, __VA_ARGS__)
#define SPLIT_0(p, a, ...) p ## a
#define SPLIT_1(p, a, ...) p ## __VA_ARGS__

#define IS_VARIADIC(...) \
SPLIT(0, CAT(IS_VARIADIC_, IS_VARIADIC_C __VA_ARGS__)) \
/**/
#define IS_VARIADIC_C(...) 1
#define IS_VARIADIC_IS_VARIADIC_C 0,
#define IS_VARIADIC_1 1,

#define NOT(x) IS_VARIADIC(PRIMITIVE_CAT(NOT_, x))
#define NOT_0 ()

#define COMPL(b) PRIMITIVE_CAT(COMPL_, b)
#define COMPL_0 1
#define COMPL_1 0

#define BOOL(x) COMPL(NOT(x))

#define IIF(c) PRIMITIVE_CAT(IIF_, c)
#define IIF_0(t, ...) __VA_ARGS__
#define IIF_1(t, ...) t

#define IF(c) IIF(BOOL(c))

#define EMPTY()

#define DEFER(id) id EMPTY()
#define OBSTRUCT(id) id DEFER(EMPTY)()

#define EAT(...)

#define REPEAT_S(s, n, m, ...) \
IF(n)(REPEAT_I, EAT)(, OBSTRUCT(), INC(s), DEC(n), m, __VA_ARGS__) \
/**/
#define REPEAT_INDIRECT() REPEAT_S
#define REPEAT_I(p, _, s, n, m, ...) \
EXPR_S _(s)( \
REPEAT_INDIRECT _()( \
s, n, p ## m, p ## __VA_ARGS__ \
) \
p ## m _(s, n, p ## __VA_ARGS__) \
) \
/**/

#define COMMA() ,

#define COMMA_IF(n) IF(n)(COMMA, EAT)()

#define A(s, i, id) \
COMMA_IF(i) \
template<EXPR_S(s)(REPEAT_S(s, INC(i), B, ~))> class id ## i \
/**/
#define B(s, i, _) COMMA_IF(i) class

EXPR_S(0)(REPEAT_S(0, 3, A, T))

The attached implementation yields correct results.

g++ yields

template< class> class T0 , template< class , class> class T1 ,
template< class , class , class> class T2

which is correct.

wave yields

template<class> class T0 , template<class , class> class T1 ,
template<class , class , class> class T2

which is correct.

Unsurprisingly, VC++ errors.

Regards,
Paul Mensonides
1.cpp

Brian Wood

unread,
May 2, 2012, 3:36:34 AM5/2/12
to bo...@lists.boost.org
I guess this thread was largely about communicating some things
to Microsoft. I have an item to add to the list. I'm not sure of the
terminology for this, but it is supported in some newer versions
of g++, though not in the version of VS 11 that I have. The code shown
below is available here:
http://webEbenezer.net/misc/direct.tar.bz2


This snippet is from cmwAmbassador.cc
#ifdef ENDIAN_BIG
, byteOrder(most_significant_first)
#else
, byteOrder(least_significant_first)
#endif

and this from cmwAmbassador.hh

#ifdef ENDIAN_BIG
::cmw::ReceiveBufferTCPCompressed<LeastSignificantFirst> cmwBuf;
#else
::cmw::ReceiveBufferTCPCompressed<SameFormat> cmwBuf;
#endif

uint8_t const byteOrder;


I want to rewrite that as the following in the .hh file.

#ifdef ENDIAN_BIG
::cmw::ReceiveBufferTCPCompressed<LeastSignificantFirst> cmwBuf;
uint8_t const byteOrder = most_significant_first;
#else
::cmw::ReceiveBufferTCPCompressed<SameFormat> cmwBuf;
uint8_t const byteOrder = least_significant_first;
#endif

and be able to get rid of the ifdef in the .cc file. Microsoft:
please add support for this and let me know when it is available.

--
Brian Wood
Ebenezer Enterprises
http://webEbenezer.net

Nathan Ridge

unread,
May 2, 2012, 4:07:02 AM5/2/12
to Boost Developers Mailing List
This looks like a use of non-static data member initializers, a C++11
feature. GCC support for it was added in 4.7, and VC does not yet
support it. This is one of the many C++11 features in which VC lags
behind GCC (others include variadic templates, uniform initialization,
delegating constructors, and user-defined literals).

I agree that it would be nice if MS got VC up to speed on C++11, but
what does this have to do with the preprocessor?

Regards,
Nate

Lars Viklund

unread,
May 2, 2012, 7:28:14 AM5/2/12
to bo...@lists.boost.org
On Wed, May 02, 2012 at 08:07:02AM +0000, Nathan Ridge wrote:
>
> > I guess this thread was largely about communicating some things
> > to Microsoft.
> I agree that it would be nice if MS got VC up to speed on C++11, but
> what does this have to do with the preprocessor?

Nice attempt at hijacking, but why don't you file a proper bug report
over at Connect instead, where it can be tracked and your interest
accounted for when prioritising features for upcoming releases? At
least that's how I understand it's done.

Heck, this is not even related to Boost in any way; you just happen to
use some of its primitives.

Don't the preprocessing rules state that a directive's # must be the
first non-whitespace token on a line anyway? Your inline code looks a
bit iffy in that respect.

(How's GDnet treating you nowadays? :D)

--
Lars Viklund | z...@acc.umu.se

Nathan Ridge

unread,
Jul 22, 2012, 12:58:45 AM7/22/12
to Boost Developers Mailing List

> From: s...@exchange.microsoft.com
>
> [Edward Diener]
> > Here are some links:
>
> Filed as DevDiv#407841 "VC's preprocessor considered harmful, Round 2".
>
> Remember that I am a user of the compiler, like you (and the STL greatly stresses the compiler, although not as much as Boost). Compiler bugs affect me like anyone else - in fact, more than anyone else, except Boost. You do not need to convince me that they should be fixed. :->

Has there been any favourable response to these bug reports?

Thanks,
Nate