class TCPClientSocket {
  ...
 private:
  static const int kBufferSize = 32 * 1024;
  ...
};
Several well-known C++ books published before the year 2000, such as The
C++ Programming Language, 2nd Ed.; Effective C++, 2nd Ed.; and More
Effective C++, mentioned this new feature.
Unfortunately Visual C++ did not allow this until recently. So the
Chromium source tree has a lot of code that did it the old-fashioned
way:
class TCPClientSocket {
  ...
 private:
  static const int kBufferSize;
  ...
};
// static
const int TCPClientSocket::kBufferSize = 32 * 1024;
But the Chromium source tree also has code that uses the new feature.
This implies all the compilers we use support this feature.
Therefore, I'd like to propose that in new code, we initialize static
const integral data members in class definitions because that makes
the code easier to understand. I don't think we need to go back and
update the old code.
Two notes:
1. Only static const members of integral types (such as int, char,
size_t) can be initialized in class definitions. For example, a
static const data member of type double still needs to be initialized
outside the class definition (see the sketch after these notes).
2. The C++ books also showed examples of defining static const
integral data members, without initializers. Our example above would
look like:
class TCPClientSocket {
  ...
 private:
  static const int kBufferSize = 32 * 1024;
  ...
};
// static
const int TCPClientSocket::kBufferSize;
I am not sure whether this definition is necessary if we never take a
reference to or the address of the constant.
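To illustrate both notes, here is a minimal sketch; the class and member names (RateLimiter, kMaxRequests, kBackoffFactor) are hypothetical, not anything in the Chromium tree:

class RateLimiter {
 private:
  // Note 1: integral type, so the initializer may appear here.
  static const int kMaxRequests = 64;
  // Note 1: non-integral type, so the initializer has to go in the .cc file.
  static const double kBackoffFactor;
};

// In the .cc file:

// static
const double RateLimiter::kBackoffFactor = 1.5;

// Note 2: a definition without an initializer; only needed if the program
// ever takes a reference to or the address of kMaxRequests.
// static
const int RateLimiter::kMaxRequests;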
Wan-Teh Chang
Is this still true in VS2010? I'd hope they'd have fixed this.
IANALL, but if there's trouble between compilers, is there some reason
we don't just use enum for plain constants? Taking the address of a
constant can't be especially common. And, as a side effect we'd ensure
there's no init-time code running.
(ps, shed -> Fuchsia)
On Wed, Aug 31, 2011 at 3:56 PM, Wan-Teh Chang <w...@google.com> wrote:
> I am not sure whether this definition is necessary if we never take a
> reference to or the address of the constant.

The standard says:

-5- There shall be exactly one definition of a static data member that is used in a program;
On Wed, Aug 31, 2011 at 4:34 PM, Antoine Labour <pi...@chromium.org> wrote:
> On Wed, Aug 31, 2011 at 4:19 PM, William Chan (陈智昌) <will...@chromium.org>
> wrote:
>> Is this still true in VS2010? I'd hope they'd have fixed this.
>
> I have no idea.
> Have we dropped support for VS2008 ?

I think that's the plan for not-too-far-in-the-future-but-not-right-now
but hasn't happened yet. I always forget. I'm sure it's the plan for
*some* point in the future.
I agree w/ scottmg@ that if there's no need to take the address of the constant, using an enum for static const integral quantities is nicer, since it allows in-line initialization without requiring an out-of-line definition in another file.
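For concreteness, a minimal sketch of the enum approach being suggested, using a hypothetical stand-in (TCPClientSocketish) for the earlier example:

class TCPClientSocketish {
 private:
  // No out-of-line definition is needed anywhere, and there is no object to
  // take the address of; the trade-off is that the constant's exact integral
  // type is left to the implementation.
  enum { kBufferSize = 32 * 1024 };

  char buffer_[kBufferSize];  // usable wherever a constant expression is required
};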
I recently hit this problem with a patch which used std::min() on such
a value. It was fun, because on Mac I got a link error like:
Undefined symbols:
"base::debug::StackTrace::kMaxTraces", referenced from:
__ZN4base5debug10StackTrace10kMaxTracesE$non_lazy_ptr in
libbase.a(stack_trace.o)
(maybe you meant: __ZN4base5debug10StackTrace10kMaxTracesE$non_lazy_ptr)
ld: symbol(s) not found
Pore over that for a few minutes and tell me what's different about
the target reference and the "maybe you meant" reference!
[Give up yet? They're identical.]
-scott
http://en.wikipedia.org/wiki/One_Definition_Rule#Definitions_of_static_const_data_members
was as close as I got to why. You can also manage it by casting to
force a constant context.
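To make the failure mode concrete, here is a minimal sketch with hypothetical names (StackTraceLike, Clamp): std::min() takes its arguments by const reference, so passing the constant to it uses its address, and the linker then wants the out-of-line definition, while a cast creates a temporary and sidesteps the requirement.

#include <algorithm>

struct StackTraceLike {
  static const int kMaxTraces = 32;  // declared and initialized in the class
};

int Clamp(int requested) {
  // Without "const int StackTraceLike::kMaxTraces;" in some .cc file, the
  // following can fail to link, because min() binds a const int& directly to
  // the member:
  //   return std::min(requested, StackTraceLike::kMaxTraces);

  // Casting forces a constant context (a temporary), so no definition is needed:
  return std::min(requested, static_cast<int>(StackTraceLike::kMaxTraces));
}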
* When using enums to declare constants, the constants' type is effectively implementation-defined. If the enum is anonymous, then this is worse than having a constant with an explicit type for various reasons, including requiring annoying casts on some compilers but not others, and not necessarily type-checking (or objecting on questionable type conversions) in code that uses the constant. I would argue that using an anonymous enum for this is somewhat similar to using a #define, and would hope that people prefer "static const" to #define.
* When defining bitfields, using an enum is particularly troublesome, since code attempting to use the bit values cannot actually name the enum and use the named type unless the enum declares all possible bit combinations.
* In C++03, passing a value of anonymous type to a template arg -- like what happens when you use one of these constants with DCHECK_EQ() -- is illegal. This was mostly relaxed in C++11, but we don't compile with C++11 yet. While we disable this warning, it is preferable not to write illegal code if we can avoid it.
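A minimal sketch of the friction in the first and third bullets; the names here (Throttler, CheckEq, AtLimit) are hypothetical, and CheckEq() only stands in for the template machinery behind DCHECK_EQ():

#include <cstddef>

class Throttler {
 public:
  enum { kMaxPending = 16 };  // anonymous enum: the constant's type is unnamed
                              // and its underlying type is implementation-defined
};

template <typename T1, typename T2>
bool CheckEq(T1 a, T2 b) { return a == b; }

bool AtLimit(std::size_t pending) {
  // In C++03, deducing a template argument from the anonymous enum's type is
  // ill-formed (types without linkage cannot be template arguments), and some
  // compilers also complain about the signed/unsigned comparison, so in
  // practice a cast ends up at call sites like this:
  return CheckEq(pending, static_cast<std::size_t>(Throttler::kMaxPending));
}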
> * When using enums to declare constants, the constants' type is effectively implementation-defined. If the enum is anonymous, then this is worse than having a constant with an explicit type for various reasons, including requiring annoying casts on some compilers but not others, and not necessarily type-checking (or objecting on questionable type conversions) in code that uses the constant. I would argue that using an anonymous enum for this is somewhat similar to using a #define, and would hope that people prefer "static const" to #define.

Hah. My take on it is that using an enum is similar to using the numeric literal, which is what people think in, but with the up-side of abstraction (compared to the literal) and of no syntactic confusion (compared to the #define).

> * When defining bitfields, using an enum is particularly troublesome, since code attempting to use the bit values cannot actually name the enum and use the named type unless the enum declares all possible bit combinations.

Why not? The underlying type can hold all combinations (7.2-6-).

> * In C++03, passing a value of anonymous type to a template arg -- like what happens when you use one of these constants with DCHECK_EQ() -- is illegal. This was mostly relaxed in C++11, but we don't compile with C++11 yet. While we disable this warning, it is preferable not to write illegal code if we can avoid it.

Would you say r99087 is an example of the tradeoff you're making?
I believe most compilers do support bitfields, but people don't use
them because supposedly compilers generate suboptimal code. I'm not
sure how true this is today.
For example:

class Blibber {
 private:
  enum BlubberBits {
    STICKY = 1 << 0,
    GOOEY = 1 << 1,
    SLIMY = 1 << 2,
  };
  typedef uint8 Blubber;  // bitwise inclusive OR of any combination of BlubberBits.
  Blubber blubber_;
};
For that matter, however, why not use facilities of the language that do all this annoying stuff for us? Do some of our compilers not like this:
class Blibber {
 private:
  struct Blubber {
    bool sticky : 1;
    bool gooey : 1;
    bool slimy : 1;
  };
  Blubber blubber_;
};

And now the code to access/modify the bits of a Blubber is insanely simple: no masking off bits to clear them, etc. Why isn't this more common?
On Thu, Sep 1, 2011 at 7:57 AM, Greg Thompson <g...@chromium.org> wrote:
> For that matter, however, why not use facilities of the language that do all this annoying stuff for us? Do some of our compilers not like this:
>
> class Blibber {
>  private:
>   struct Blubber {
>     bool sticky : 1;
>     bool gooey : 1;
>     bool slimy : 1;
>   };
>   Blubber blubber_;
> };
>
> And now the code to access/modify the bits of a Blubber is insanely simple: no masking off bits to clear them, etc. Why isn't this more common?

One reason is that it can be a bit less clear in this case precisely which physical bits of a value correspond to the logical bits that are named. This matters if you need to e.g. mask off a group of bits in the type (see e.g. the page transition types).
In fact if you're not careful, you can sometimes write code that packs in an unexpected way that results in the complete type becoming wide enough to kill your perf (because you can no longer hold it in a register, or whatever) -- this is a risk if you start mixing bools with other types in the packed object.
It's also true as Fred mentioned that compilers can generate worse code, though in general the auto-generated code for the above is about the same as the hand-generated masking that we'd do with other methods.
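A minimal sketch (hypothetical structs; the exact layout is implementation-defined) of the packing risk described above:

#include <stdint.h>

struct AllBools {        // typically packs into a single byte
  bool sticky : 1;
  bool gooey : 1;
  bool slimy : 1;
};

struct MixedTypes {      // mixing underlying types can start new storage units;
  bool sticky : 1;       // on some compilers (notably MSVC) this grows to
  bool gooey : 1;        // several words rather than staying register-sized
  uint32_t count : 8;
  bool slimy : 1;
};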
On Thu, Sep 1, 2011 at 1:36 PM, Peter Kasting <pkas...@google.com> wrote:
> One reason is that it can be a bit less clear in this case precisely which physical bits of a value correspond to the logical bits that are named. This matters if you need to e.g. mask off a group of bits in the type (see e.g. the page transition types).

This sounds like an argument for using explicit bit twiddling if bit positions are part of some kind of contract. This makes sense.

> In fact if you're not careful, you can sometimes write code that packs in an unexpected way that results in the complete type becoming wide enough to kill your perf (because you can no longer hold it in a register, or whatever) -- this is a risk if you start mixing bools with other types in the packed object.

Sounds like something clang could find for us, no?

> It's also true as Fred mentioned that compilers can generate worse code, though in general the auto-generated code for the above is about the same as the hand-generated masking that we'd do with other methods.

I just checked VS2008, and there's no difference in the generated code. I would be surprised if modern compilers generate worse code. (Note: I've been surprised before.) This argument sounds a bit like "language feature X was horrible back in the Cfront days. Stay away!"
On Wed, Aug 31, 2011 at 6:18 PM, Ami Fischman <fisc...@chromium.org> wrote:
>> * When using enums to declare constants, the constants' type is effectively implementation-defined. If the enum is anonymous, then this is worse than having a constant with an explicit type for various reasons, including requiring annoying casts on some compilers but not others, and not necessarily type-checking (or objecting on questionable type conversions) in code that uses the constant. I would argue that using an anonymous enum for this is somewhat similar to using a #define, and would hope that people prefer "static const" to #define.
>
> Hah. My take on it is that using an enum is similar to using the numeric literal, which is what people think in, but with the up-side of abstraction (compared to the literal) and of no syntactic confusion (compared to the #define).

I think part of my bias here is that I think of enum as being "properly" used for truly "enumerated" types, and consider the use of it to define individual constants to be a hack compared with the "semantically correct" use of "const <type>". This is of course about as nitpicky as web authors preferring <strong> to <b>, so perhaps we're off in the weeds here. That said, at least a few other people have in the past been approving of changes that move constants defined as anonymous enums to being static consts, so I'm not sure I'm alone.

>> * When defining bitfields, using an enum is particularly troublesome, since code attempting to use the bit values cannot actually name the enum and use the named type unless the enum declares all possible bit combinations.
>
> Why not? The underlying type can hold all combinations (7.2-6-).

Thanks for the spec section link. That was an interesting read. I downgrade my objection here to:

Code actually trying to use a named enum type will keep running into problems because any attempt to perform bit operations on the enum values will end up converting to the underlying type (or some promotion thereof), necessitating a cast. So if you do:

enum X { VAL_A = ..., VAL_B = ..., ... };
X x = VAL_A | VAL_B;

The compiler whines at you that you need to explicitly cast. Using a typedefed integral type here avoids this issue.

I still think this is a serious enough problem that I would avoid enums at least for bitfield types, even if not avoiding them elsewhere.

>> * In C++03, passing a value of anonymous type to a template arg -- like what happens when you use one of these constants with DCHECK_EQ() -- is illegal. This was mostly relaxed in C++11, but we don't compile with C++11 yet. While we disable this warning, it is preferable not to write illegal code if we can avoid it.
>
> Would you say r99087 is an example of the tradeoff you're making?

Yes. Interestingly, according to my try runs on some other patches, the DCHECK_EQ() (and similar) problem doesn't seem to rear its head all the time, so this (and other patches of mine) may have been too aggressive in preemptively removing it.

You may be right that right now some of this is "six of one, half a dozen of the other" (or even less compelling in some cases); I am curious whether your opinion would change after we move to C++11, in which it seems like we could probably use static consts without the current downsides.

PK
On Thu, Sep 1, 2011 at 11:01 AM, Greg Thompson <g...@chromium.org> wrote:
>> One reason is that it can be a bit less clear in this case precisely which physical bits of a value correspond to the logical bits that are named. This matters if you need to e.g. mask off a group of bits in the type (see e.g. the page transition types).
>
> This sounds like an argument for using explicit bit twiddling if bit positions are part of some kind of contract. This makes sense.

It's more than just "part of a contract". It's "any time it's convenient for some user to access groups of values within the bitfield". The page transition types I mentioned before are an example. There's no need for some kind of contract that particular bits have to be at particular locations, there's merely a need by some callers to be able to strip the type down to only a subset of its components. Using explicitly-placed bits and mask values is merely the easiest way to accomplish this.

>> In fact if you're not careful, you can sometimes write code that packs in an unexpected way that results in the complete type becoming wide enough to kill your perf (because you can no longer hold it in a register, or whatever) -- this is a risk if you start mixing bools with other types in the packed object.
>
> Sounds like something clang could find for us, no?

I'm not sure what you're saying. There's nothing illegal about writing structs whose bitfields pack in ways that expand the overall struct size, so I don't know what you're hoping Clang would catch.

>> It's also true as Fred mentioned that compilers can generate worse code, though in general the auto-generated code for the above is about the same as the hand-generated masking that we'd do with other methods.
>
> I just checked VS2008, and there's no difference in the generated code. I would be surprised if modern compilers generate worse code. (Note: I've been surprised before.) This argument sounds a bit like "language feature X was horrible back in the Cfront days. Stay away!"

No, it was intended as the reverse: "this is a theoretical problem but it doesn't matter in most practical situations so don't worry about it".
What I was trying to say overall was that there are reasons why in some cases the pattern you propose has flaws. It's a reasonable pattern to use when all you want to do is the simple case of a few flags packed into a field. I personally don't find the code for "if (val & FLAG1)" to be terribly more onerous than "if (val.flag1)", and since each has minor downsides in some scenarios, it seems OK to me for people to choose different ones at different times.
On Wed, Aug 31, 2011 at 6:44 PM, Peter Kasting <pkas...@google.com> wrote:
> Code actually trying to use a named enum type will keep running into problems because any attempt to perform bit operations on the enum values will end up converting to the underlying type (or some promotion thereof), necessitating a cast. So if you do:
>
> enum X { VAL_A = ..., VAL_B = ..., ... };
> X x = VAL_A | VAL_B;
>
> The compiler whines at you that you need to explicitly cast. Using a typedefed integral type here avoids this issue.

Would the following snippet solve the problem well enough for you?

template <typename T>
T bitOr(T a, T b) {
  return static_cast<T>(a | b);
}
template <typename T>
T bitOrEqual(T &a, T b) {
  return a = bitOr(a, b);
}
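For what it's worth, a usage sketch of the helpers above against the earlier enum example (the specific VAL_A/VAL_B values are made up for illustration):

enum X { VAL_A = 1 << 0, VAL_B = 1 << 1 };

void Example() {
  // The result keeps the enum type, and no cast clutters the call site:
  X x = bitOr(VAL_A, VAL_B);
  bitOrEqual(x, VAL_B);
}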
It makes you tear up a little. Of course, we could also posit the
existence of an entire system of templates to enable definition of and
operation on bitwise types.
That said, I did recently find myself thinking that locking down the
bitwise operators wouldn't be a bad thing. The number of cases where
they are used and the author REALLY thought through the precedence
rules is probably pretty low. Abstracting it a little would be
annoying for those of us who are used to thinking about the bits, but
a lot of people don't really know how badly they're missing the
bit-level boat, unfortunately.
-scott
> That said, I did recently find myself thinking that locking down the
> bitwise operators wouldn't be a bad thing. The number of cases where
> they are used and the author REALLY thought through the precedence
> rules is probably pretty low. Abstracting it a little would be
> annoying for those of us who are used to thinking about the bits, but
> a lot of people don't really know how badly they're missing the
> bit-level boat, unfortunately.
Yeah, the case which got me thinking was (x & mask == 0). It seems
like it might be reasonable to warn when you have to use precedence to
decide between bitwise operators and equality, and also possibly
between things like bitwise operators and arithmetic operators. For
instance, (x << b | flag) seems somewhat reasonable, since it's all
bitwise, but (x + y & mask) is somewhat questionable, to my mind -
indeed, a certain kind of C programmer embeds an ability to
transparently compose bitwise operations out of 2s-complement
arithmetic (sometimes I don't even notice I've done it), but that
doesn't mean it is appropriate when working on a large project like
this. (x & y && flag) seems right out, though if it were (x & y AND
flag) it wouldn't be so bad (I'm more concerned about the potential
for mis-reading intention in that case).
Having (x << shift == mask) work right and (x & mask == 0) work wrong
is really wonderful :-).
I think the velocipede outbuilding should be mauve, BTW.
-scott
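A tiny worked example (made-up values) of why that first case bites:

#include <cstdio>

int main() {
  unsigned x = 0x6, mask = 0x2;

  // == binds tighter than &, so "x & mask == 0" means "x & (mask == 0)",
  // which is 0 for every nonzero mask regardless of x.
  unsigned buggy = x & (mask == 0);   // 0x6 & 0 -> 0
  bool intended = (x & mask) == 0;    // (0x6 & 0x2) == 0 -> false

  std::printf("buggy=%u intended=%d\n", buggy, intended);  // prints buggy=0 intended=0
  return 0;
}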
> Yeah, the case which got me thinking was (x & mask == 0)

This is caught by clang (i.e. is a compile time error since we build
with -Werror):
hummer:src thakis$ third_party/llvm-build/Release+Asserts/bin/clang
-Wall test.cc
test.cc:3:9: warning: & has lower precedence than ==; == will be
evaluated first [-Wparentheses]
if (a & 1 == 0)
^~~~~~~~
test.cc:3:9: note: place parentheses around the == expression to
silence this warning
if (a & 1 == 0)
^
( )
test.cc:3:9: note: place parentheses around the & expression to
evaluate it first
if (a & 1 == 0)
^
( )
1 warning generated.
It doesn't warn on any of the other things you mention below.
> It seems
> like it might be reasonable to warn when you have to use precedence to
> decide between bitwise operators and equality, and also possibly
> between things like bitwise operators and arithmetic operators. For
> instance, (x << b | flag) seems somewhat reasonable, since it's all
> bitwise, but (x + y & mask) is somewhat questionable, to my mind -
> indeed, a certain kind of C programmer embeds an ability to
> transparently compose bitwise operations out of 2s-complement
> arithmetic (sometimes I don't even notice I've done it), but that
> doesn't mean it is appropriate when working on a large project like
> this. (x & y && flag) seems right out, though if it were (x & y AND
> flag) it wouldn't be so bad (I'm more concerned about the potential
> for mis-reading intention in that case).
>
> Having (x << shift == mask) work right and (x & mask == 0) work wrong
> is really wonderful :-).
>
> I think the velocipede outbuilding should be mauve, BTW.
>
> -scott
>