
Are bitfields useful?


Frederick Gotham

Jul 30, 2020, 5:10:00 AM

This week I'm working on two different products that link with the same library; however, one product links with a newer version of the library. The older library has a struct like this:

typedef struct {
    u16 scale;
    u8 window;
    u8 a;
    u8 b;
    u8 c;
    float x;
    float y;
    float z;
    u32 reserved[4];
} Manager;


And the newer library has the same struct like this:

typedef struct {
    u16 scale;
    u8 window;
    u8 a;
    u8 b;
    u8 c;
    float x;
    float y;
    float z;
    s8 offset;
    u8 transparent : 1;
    u8 reserved0 : 7;
    u8 reserved1[2];
    u32 reserved2[3];
} Manager;

I want to set the member "transparent" on both platforms, and so I started out with code like this:

void Set_Flag_For_Transparency(Manager &e)
{
    float *const p_z = &e.z;
    u8 *const p_offset = reinterpret_cast<u8*>(p_z + 1u);
    u8 *const p_byte_containing_transparent = p_offset + 1u;
    *p_byte_containing_transparent |= 0x80; // Set the high bit
}

After doing a little reading up on bitfields, it seems that the ISO/ANSI standards give compilers a lot of freedom as to how bitfields are implemented -- so much freedom, in fact, that they don't have much use in portable code. For example, in my function just above, the 'transparent' bit might be 0x80 or it might be 0x01. Bits might or might not straddle byte boundaries, and really the only thing that's guaranteed is the number of bits of precision (e.g. if it's 3 bits then the max value is 7).

Are bitfields pretty much useless other than for limiting the max value of an integer type (e.g. 3 bits for 7, 4 bits for 15, 5 bits for 31)?

The people who wrote the library I'm working with either:
(A) Expect me to use the same compiler as them
(B) Expect me to use a compiler that implements bitfields the same way
(C) Don't know that bitfields aren't portable for this purpose
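For what it's worth, which layout a given compiler picked can at least be detected at run time. A sketch, using a made-up one-byte struct rather than the real Manager:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Illustrative one-byte probe struct, not the library's Manager.
struct Probe {
    std::uint8_t transparent : 1;
    std::uint8_t rest        : 7;
};

// Set only 'transparent' and return the raw byte. Which single bit
// ends up set (0x01 or 0x80) is the compiler's choice.
inline std::uint8_t probe_byte() {
    Probe p{};               // value-initialization zeroes all bits
    p.transparent = 1;
    std::uint8_t b = 0;
    std::memcpy(&b, &p, 1);
    return b;
}
```

On x86 g++ and clang this typically yields 0x01 (allocation starts at the least-significant bit); a compiler allocating from the other end would yield 0x80.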

Bo Persson

Jul 30, 2020, 5:36:49 AM
or (D) Don't use an older library when there is a new and improved one

Note that the ISO standard doesn't require a compiler to have 8, 16, or
32 bit integer types, and doesn't say what size a float is, or whether
there is padding between members. So this is totally non-portable anyway.

And it definitely doesn't give any meaning to

reinterpret_cast<u8*>(p_z + 1u)

so don't do that.

The only portable way to set the transparent bit is

e.transparent = 1;


If you just *have* to set a byte in the reserved area, you could do

#if defined SOME_OLD_SYSTEM
e.reserved[0] = transparent_hack_value;
#endif

assuming that the reserved[4] part isn't secretly used for something
else, and that transparent_hack_value has a value that your tests show
is "working" for the older system.

Really, there is no good way of doing this.


Bo Persson


Frederick Gotham

Jul 30, 2020, 6:10:11 AM
On Thursday, July 30, 2020 at 10:36:49 AM UTC+1, Bo Persson wrote:

> or (D) Don't use an older library when there is an new and improved one



I work with the scraps that I'm given. My orders come from a guy who gets 150% of my salary, who gets his orders from a guy who gets 150% of his salary (so that's 225% of my salary).

If the guy getting 2.25 times my salary tells my boss that the arm32 product uses Library v1.0.1, and that the aarch64 product uses Library v1.0.4, then I run with that.



> Note that the ISO standard doesn't require a compiler to have 8, 16, or
> 32 bit integer types, and doesn't say what size a float is. Or if there
> are padding between members. So this is totally non-portable anyway.



99% of architectures have 8-bit bytes and integer types of size 16-bit and 32-bit (without padding inside the integer type). I'm not saying that the 1% don't exist, but they're probably in a museum.



> And it definitely doesn't give any meaning to
>
> reinterpret_cast<u8*>(p_z + 1u)



This line of code has a very understandable meaning on computers where all pointers are the same size. And even on a computer where a pointer to a byte is bigger (because it's a pointer + offset), the conversion to u8* will add in that offset for me.



> The only portable way to set the transparent bit is
>
> e.transparent = 1;



Again, I work with the scraps I'm given.



> If you just *have* set a byte in the reserved area, you could do
>
> #if defined SOME_OLD_SYSTEM
> e.reserved[0] = transparent_hack_value;
> #endif



I don't want to wipe out any other stuff stored in "reserved", plus I don't want to make assumptions about endianness (because our ARM products are Big-Endian, and our x86 products are Little-Endian). If I was going with your method then I'd do something like:

u32 constexpr high = is_big_endian ? 0x80000000 : 0x00000080;
u32 constexpr low = is_big_endian ? 0x01000000 : 0x00000001;

u32 constexpr transparent_value = is_bitfield_left_to_right ? high : low;

e.reserved[0] |= transparent_value;

But I think it would be preferable just to access one byte like this:

*reinterpret_cast<char unsigned*>(e.reserved) |= left_to_right ? 0x80 : 0x01;

In the end however I decided to go with:

void Set_Flag_For_Transparency(Manager &e)
{
    typedef struct {
        u16 scale;
        u8 window;
        u8 a;
        u8 b;
        u8 c;
        float x;
        float y;
        float z;
        s8 offset;
        u8 transparent : 1;
        u8 reserved0 : 7;
        u8 reserved1[2];
        u32 reserved2[3];
    } Latest_Rendition_Of_Struct;

    Latest_Rendition_Of_Struct *const p = reinterpret_cast<Latest_Rendition_Of_Struct*>(&e);

    p->transparent = 1;
}

Juha Nieminen

Jul 30, 2020, 12:38:07 PM
Frederick Gotham <cauldwel...@gmail.com> wrote:
> After doing a little reading up on bitfields, it seems that
> ISO/ANSI standards give compilers a lot of freedom as to how
> bitfields are implemented

Indeed. The standard places pretty much no requirements on how
the compiler will "pack" the bitfields inside the struct. Most
prominently, it does not guarantee that bitfields are packed
as tightly as possible, taking as little space as possible.

For example, you might think that if you have a bunch of consecutive
bitfields inside the struct whose sizes sum up to 32, that the
compiler will make them take 32 bits ie. 4 bytes. But compilers
are not obligated to do that, and they may well take more than
that (ie. there may be unused bits).

A compiler might even completely ignore bitfield sizes when it
comes to packing them inside a struct, and it would still be
allowed to do that by the standard.

Moreover, I don't think the standard even requires that a
"bitfielded" integral in a struct be stored in contiguous
memory locations inside the struct. The compiler may well
split an integer into two and have the two parts not be
contiguous inside the struct (ie. the bits that make up
the bitfield may have unused bits in between, or even
perhaps another bitfield).

If you rely on a data having a very specific format in memory,
you cannot use bitfields for this, because they are not
guaranteed to be laid out as you want, and their layout
(and even how many bytes they take) may change from compiler
to compiler and system to system.
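When the in-memory format really is fixed, the usual alternative is explicit shift-and-mask packing into fixed-width integers, so the bit positions are chosen by the code rather than the compiler. A sketch, with field names invented for illustration:

```cpp
#include <cassert>
#include <cstdint>

// We pick bit 0 for the flag ourselves; every conforming compiler
// agrees on what "bit 0 of a uint8_t" means.
constexpr std::uint8_t TRANSPARENT_BIT = 0x01;

// Pack a 1-bit flag plus a 7-bit field into one byte.
constexpr std::uint8_t pack_flags(bool transparent, std::uint8_t reserved7) {
    return static_cast<std::uint8_t>((transparent ? TRANSPARENT_BIT : 0u)
                                     | ((reserved7 & 0x7Fu) << 1));
}

constexpr bool unpack_transparent(std::uint8_t flags) {
    return (flags & TRANSPARENT_BIT) != 0;
}
```

Unlike a bitfield, this layout survives a change of compiler, because the shifts and masks are spelled out in the code.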

Bo Persson

Jul 31, 2020, 4:41:01 AM
Correct.

In addition to all this, and in addition to being little endian or big
endian, it is also known that for example VC++ and gcc don't agree on
which order to allocate bits.

So if Frederick has

u8 transparent : 1;
u8 reserved0 : 7;

The transparent bit will be the high bit with some compilers and the low
bit on others.


Bo Persson

Scott Newman

Jul 31, 2020, 6:39:02 AM
> So if Frederick has
>           u8 transparent : 1;
>           u8 reserved0 : 7;
> The transparent bit will be the high bit with some compilers and the low
> bit on others.

The decision is whether the compiler compiles for a big-endian machine
or a little-endian machine. A big-endian compiler begins to fill up the
bits from the high bit; a little-endian compiler does it the opposite
way.

Bo Persson

Jul 31, 2020, 6:46:11 AM
No, that's exactly not the case.

Even when you compile for x86, VC++ and g++ will start from opposite
ends. The standard allows either way, and that's what happens.



Bo Persson

Scott Newman

Jul 31, 2020, 6:48:56 AM
> No, that's exactly not the case.

That's what actually happens with the current compilers.

Bo Persson

Jul 31, 2020, 7:00:20 AM
On 2020-07-31 at 12:48, Scott Newman wrote:
>> No, that's exactly not the case.
>
> That's what actually happens with the current compilers.

You did notice the example on the very next line, didn't you:

"Even when you compile for x86, VC++ and g++ will start from opposite
ends. The standard allows either way, and that's what happens."


Not everyone believes that gcc on Linux is the entire world when writing
portable code.



Bo Persson

Scott Newman

Jul 31, 2020, 7:07:42 AM
> "Even when you compile for x86, VC++ and g++ will start from opposite
> ends. The standard allows either way, and that's what happens."

What the standard says doesn't count. What counts is what current
compilers do.
You can rely on a little-endian or big-endian compiler doing it
like I said, since no one would use a compiler that did it differently,
because this would break the conventions.

James Kuyper

Jul 31, 2020, 8:09:34 AM
So, on an x86, since VC++ and g++ do in fact do it differently, one of
those two very popular compilers must be one that "no one would use" -
which one? I ask because I don't know which one of those does it in a
different order than the one you expect - I've used both compilers
(which, according to you, makes me "no one"), but it would never occur
to me to write code that would require me to know which order they use.

Jorgen Grahn

Jul 31, 2020, 8:11:18 AM
On Fri, 2020-07-31, Bo Persson wrote:
> On 2020-07-31 at 12:38, Scott Newman wrote:
>>> So if Frederick has
>>>            u8 transparent : 1;
>>>            u8 reserved0 : 7;
>>> The transparent bit will be the high bit with some compilers and the
>>> low bit on others.
>>
>> The decision is whether the compiler compiles for a big-endian machine
>> or a little-endian machine. A big-endian compiler begins to fill up the
>> bits from the high-bit, a little-endian compiler does it the opposite
>> way.
>
> No, that's exactly not the case.

As far as I can tell, "Scott Newman" gives plausible but incorrect
information on purpose.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Scott Newman

Jul 31, 2020, 8:14:06 AM
> So, on an x86, since VC++ and g++ do in fact do it differently, one of
> those two very popular compilers must be one that "no one would use" -
> which one?

VC++ behaves as I said, since there are only little-endian targets
for VC++.

Bo Persson

Jul 31, 2020, 8:57:43 AM
On 2020-07-31 at 14:11, Jorgen Grahn wrote:
> On Fri, 2020-07-31, Bo Persson wrote:
>> On 2020-07-31 at 12:38, Scott Newman wrote:
>>>> So if Frederick has
>>>>            u8 transparent : 1;
>>>>            u8 reserved0 : 7;
>>>> The transparent bit will be the high bit with some compilers and the
>>>> low bit on others.
>>>
>>> The decision is whether the compiler compiles for a big-endian machine
>>> or a little-endian machine. A big-endian compiler begins to fill up the
>>> bits from the high-bit, a little-endian compiler does it the opposite
>>> way.
>>
>> No, that's exactly not the case.
>
> As far as I can tell, "Scott Newman" gives plausible but incorrect
> information on purpose.
>

The nicest interpretation is that he only uses gcc on Linux, and
considers everything else irrelevant. That of course makes it easy for
him to write "portable" code. :-)


Bo Persson

Scott Newman

Jul 31, 2020, 9:00:16 AM
> The nicest interpretation is that he only uses gcc on Linux, and
> considers everything else irrelevant. Of course makes it easy for
> him to write "portable" code.  :-)

I'm neither talking about gcc, nor MSVC or whatever. Regarding the
placement of bitfields you can simply distinguish between compilers
for big-endian and little-endian targets.

James Kuyper

Jul 31, 2020, 11:16:00 AM
I've only ever used VC++ because my employer chose it for the
application they were paying me to work on. I had no need or interest
in knowing whether it supports big-endian targets. Therefore, I'll
take your word for that - (or maybe I won't - your reputation for
accuracy isn't exactly the highest).

However, according to Bo (I have not bothered personally
investigating this) on precisely those same targets, g++ allocates
bit-fields the other way around. According to you, that means that
"no one would use" g++? How do you reconcile that claim with the fact
that g++ is one of the most popular compilers targeting those
platforms?

Scott Newman

Jul 31, 2020, 2:55:29 PM
> However, according to Bo (I have not bothered personally
> investigating this) on precisely those same targets, g++ allocates
> bit-fields the other way around. ...

No, it doesn't:

struct BF
{
    unsigned lo : 16;
    unsigned hi : 16;
};

void fLo( BF &bf )
{
    bf.lo = 0x0102;
}

void fHi( BF &bf )
{
    bf.hi = 0x0304;
}


_Z3fLoR2BF:
    movl $258, %eax
    movw %ax, (%rdi)
    ret


_Z3fHiR2BF:
    movl $772, %eax
    movw %ax, 2(%rdi)
    ret

Öö Tiib

Jul 31, 2020, 4:30:02 PM
You are deliberately always writing what you know to be wrong because
you hope that then someone will respond saying that it is wrong. What
happened to you? What made you that kind of miserable human being?

Scott Newman

Jul 31, 2020, 5:43:55 PM
>> I'm neither talking about gcc, nor MSVC or whatever. Regarding the
>> placement of bitfields you can simply distinguish between compilers
>> for big-endian and little-endian targets.

> You are deliberately always writing what you know to be wrong because
> you hope that then someone will respond saying that it is wrong. What
> happened to you? What made you that kind of miserable human being?

What I said above is 100% right.


Mr Flibble

Jul 31, 2020, 6:22:29 PM
Don't feed the troll.

/Flibble

--
"Snakes didn't evolve, instead talking snakes with legs changed into snakes." - Rick C. Hodgin

“You won’t burn in hell. But be nice anyway.” – Ricky Gervais

“I see Atheists are fighting and killing each other again, over who doesn’t believe in any God the most. Oh, no..wait.. that never happens.” – Ricky Gervais

"Suppose it's all true, and you walk up to the pearly gates, and are confronted by God," Byrne asked on his show The Meaning of Life. "What will Stephen Fry say to him, her, or it?"
"I'd say, bone cancer in children? What's that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery that is not our fault. It's not right, it's utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a world that is so full of injustice and pain. That's what I would say."

Scott Newman

Aug 1, 2020, 6:52:22 AM
>> You are deliberately always writing what you know to be wrong because
>> you hope that then someone will respond saying that it is wrong. What
>> happened to you? What made you that kind of miserable human being?

> Don't feed the troll.

Where am I trolling? It's exactly as I said: compilers for little-endian
targets fill up bitfields beginning from the LSB, compilers for
big-endian targets do it the opposite way.

Juha Nieminen

Aug 1, 2020, 9:01:16 AM
Scott Newman <sco...@gmail.com> wrote:
> Where am I trolling ? It's exactly as what I said: compiler for little
> -endian targets fill up bitfields beginning from the LSB, compilers for
> big-endian targets do it the opposite way.

So which one of those is gcc, and which one is Visual Studio,
when they are compiling for the same target?

Scott Newman

Aug 1, 2020, 9:02:53 AM
> So which one of those is gcc, and which one is Visual Studio,
> when they are compiling for the same target?

I've shown here - <rg1peo$uj$1...@dont-email.me> - that when they both
target x86 or x64, they both behave according to what I said.

Scott Newman

Aug 1, 2020, 9:08:51 AM
Here you can find the same statement:
http://mjfrazer.org/mjfrazer/bitfields/

Juha Nieminen

Aug 1, 2020, 5:21:36 PM
Which is what, exactly? How do they behave?

Scott Newman

Aug 2, 2020, 2:52:26 AM
>>> So which one of those is gcc, and which one is Visual Studio,
>>> when they are compiling for the same target?

>> I've shown here - <rg1peo$uj$1...@dont-email.me> - that when they both
>> target to x86 or x64, they both behave according to what I said.

> Which is what, exactly? How do they behave?

They behave identically because all compilers for little-endian
targets pack bitfields in the same way.

Juha Nieminen

Aug 2, 2020, 3:35:05 AM
You are still not answering my question.

Scott Newman

Aug 2, 2020, 3:40:04 AM
>>> Which is what, exactly? How do they behave?

All compilers for little-endian targets - including both of those
mentioned - pack bitfields beginning from the LSB. Compilers for
big-endian targets do the opposite.

Öö Tiib

Aug 2, 2020, 7:06:03 AM
He will forever say all things wrongly. The boltar just moves
goalposts but newman tells falsehoods that he knows are false without
blinking.

In my experience bitfields are not useful for inter-process
communication on same platform unless the programs are compiled
with exactly same compiler and same options. Such code:

#include <iostream>

struct S {
    unsigned short t:9;
    unsigned char r:7;
};

int main() { std::cout << sizeof(S) << '\n'; }

That will output 4 from MSVC and 2 from gcc ... nothing identical.
And other compilers may behave differently. Additionally gcc can
reorder bitfields sometimes just because of options, pragmas
or attributes used.
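If a build genuinely depends on one particular packing, a static_assert on the size at least turns the silent mismatch into a build error. A sketch of such a guard for the struct above (on MSVC, where the size is 4, compilation would stop here):

```cpp
#include <cassert>

struct S {
    unsigned short t : 9;
    unsigned char  r : 7;
};

// Guard the layout assumption: gcc/clang pack this into 2 bytes.
// A compiler that packs differently (e.g. MSVC's 4 bytes) fails
// to compile instead of silently miscommunicating over IPC.
static_assert(sizeof(S) == 2, "unexpected bitfield packing for struct S");
```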

Bart

Aug 2, 2020, 7:58:22 AM
I pointed this out on comp.lang.c too. And I gave the example of the GTK
library using bitfields extensively, with the possibility of issues
arising when people mix compilers (e.g. GTK binaries built with gcc,
user apps that include those headers built with MSVC).

The response I got was that the GTK library is so mainstream and so
widely used that it can't possibly have any such problems, or that the
developers had preempted such concerns.

I have an inkling it works by luck.

Scott Newman

Aug 2, 2020, 8:22:01 AM
> In my experience bitfields are not useful for inter-process
> communication on same platform unless the programs are compiled
> with exactly same compiler and same options.

That depends on which types you use.

James Kuyper

Aug 2, 2020, 8:29:41 AM
Let me suggest that what he means is that the following code:

#include <iostream>

int main(void)
{
    union {
        struct {
            unsigned bit:1;
        } s;
        unsigned int ui;
    } u = {1u};

    std::cout << u.ui << std::endl;
    return 0;
}

is absolutely guaranteed to produce the same result on every C compiler,
even though the standard imposes no such requirement. Now, I might have
misunderstood him - if so, I'm providing him with this opportunity to
correct this code to more accurately reflect his intent.

I'm using the following command:

g++ -std=c++17 -pedantic -Wall -O2 -Wpointer-arith -Wcast-align
-fno-enforce-eh-specs -ffor-scope -fno-gnu-keywords
-fno-nonansi-builtins -Wctor-dtor-privacy -Wnon-virtual-dtor
-Wold-style-cast -Woverloaded-virtual -Wsign-promo bits.cpp -o bits

to translate the above code. g++ --version gives
g++ (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0

My system is running Ubuntu Linux 18.04.4, on a Intel® Core™ i5-5300U
CPU @ 2.30GHz × 4.

When I run the above program, I get "1". All that's needed to prove him
wrong is to demonstrate a different result on at least one other system.
Unfortunately, I don't currently have access to any other compiler
running on any other system, so I can't even test the simple claim that
MSVC++ and gcc differ in this regard.



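Incidentally, reading u.ui after writing u.s is type punning through a union, which C++ (unlike C) does not formally permit; the same experiment can be expressed with std::memcpy. A sketch:

```cpp
#include <cassert>
#include <cstring>

struct Bit1 {
    unsigned bit : 1;
};

// Copy the object representation out instead of reading another
// union member, avoiding the formally undefined union read in C++.
inline unsigned probe_unit() {
    Bit1 s{};                 // value-initialization zeroes everything
    s.bit = 1;
    unsigned ui = 0;
    std::memcpy(&ui, &s, sizeof s);  // sizeof(Bit1) == sizeof(unsigned) on common ABIs
    return ui;
}
```

On x86/ARM gcc and clang this gives 1, matching the union version; the point is only that memcpy gets there without the language-rules question.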

Scott Newman

Aug 2, 2020, 11:10:14 AM
If you do it that way:

#include <iostream>

struct S
{
    unsigned short t : 9;
    unsigned short r : 7;
};

int main()
{
    std::cout << sizeof(S) << std::endl;
}

The size is the same for all compilers having the same size of unsigned
short. The bitfield handling is the same for all little-endian and all
big-endian compilers.
So there are ways to have reliable IPC with bitfields.

Bart

Aug 2, 2020, 2:30:21 PM
Try this version:

#include <iostream>

struct S
{
    unsigned short t : 9;
    unsigned char c;
    unsigned short r : 7;
};

int main()
{
    std::cout << sizeof(S) << std::endl;
}


I get a size of 4 with gcc/clang, and 6 with msvc (all tested at
rextester.com). (I get 8 with one C compiler, for that same struct.)

You can try it with t:8, the results vary a little I think. Here it
depends on whether the non-bitfield 'c' field is subsumed into the
storage units used for the bit-fields.

Scott Newman

unread,
Aug 2, 2020, 2:33:48 PM8/2/20
to
> Try this version:
>
>  #include <iostream>
>
>  struct S
>  {
>      unsigned short t : 9;
>      unsigned char c;
>      unsigned short r : 7;
>  };
>
>  int main()
>  {
>      std::cout << sizeof(S) << std::endl;
>  }
>

You must concatenate the same type, otherwise it's not guaranteed
to be compatible among all platforms with the same endianness.
The char has to stay out of the bitfield run because you can take its
address, but you can't take the address of a bitfield.

Bart

Aug 2, 2020, 2:43:10 PM
You're saying that you can only have a consecutive group of bitfields
within any struct. But you might be trying to match the layout of some
hardware, or some external struct in a library, or some data format.

If you can't do that, then why bother with bitfields at all if there are
such restrictions? You'd have to add special comments for future maintainers.

Scott Newman

Aug 2, 2020, 2:47:12 PM
> You're saying that you can only have consecutive group of bitfields
> within any struct. But you might be trying to match the layout of some
> hardware, or some external struct in a library, or some data format.
> If you can't do that, then why bother with bitfields at all if there are
> restrictions? You'd have to add special comments to future maintainers.

I'm the wrong person to ask about whether bitfields make sense.
I just told you how you can use them semi-portably.

Tim Rentsch

Aug 23, 2020, 6:19:35 AM
Juha Nieminen <nos...@thanks.invalid> writes:

> Frederick Gotham <cauldwel...@gmail.com> wrote:
>
>> After doing a little reading up on bitfields, it seems that
>> ISO/ANSI standards give compilers a lot of freedom as to how
>> bitfields are implemented
>
> Indeed. The standard places pretty much no requirements on how
> the compiler will "pack" the bitfields inside the struct. [...]

This claim is debatable. If you want to say it isn't clear what
the C++ standard /does/ require for packing adjacent bitfields,
that's fine, but saying there are no requirements (or even
"pretty much" no requirements) is at the very least open to
debate.

Tim Rentsch

Aug 23, 2020, 9:38:56 AM
Presumably you meant C++ compiler.

> even though the standard imposes no such requirement.

The program shown has undefined behavior.

Juha Nieminen

Aug 24, 2020, 2:23:03 AM
Tim Rentsch <tr.1...@z991.linuxsc.com> wrote:
>> Indeed. The standard places pretty much no requirements on how
>> the compiler will "pack" the bitfields inside the struct. [...]
>
> This claim is debatable. If you want to say it isn't clear what
> the C++ standard /does/ require for packing adjacent bitfields,
> that's fine, but saying there are no requirements (or even
> "pretty much" no requirements) is at the very least open to
> debate.

What are the requirements that the standard imposes on the layout of
bitfields inside a struct/class?

I freely admit that I have not read the standard in this regard, but it's
my understanding that there are no requirements about how many bits are
actually allocated for each bitfield, or the order in which consecutive
bitfields are stored in the struct. It probably doesn't even mandate that
the bits in one bitfield must be in consecutive bytes.

For example, suppose you have something like:

struct S
{
    unsigned short a:3, b:11, c:2;
};

While those bits take 16 bits in total, and thus would fit in 2 bytes,
as far as I know, there's no requirement that the compiler does so.
For all that the standard is concerned, a compiler could just as well
pack those values as:

00000aaa bbbbbbbb 00000bbb 000000cc

Another compiler might pack them as:

000000cc 00000bbb bbbbbbbb 00000aaa

A third compiler might decide to pack them as:

000ccaaa bbbbbbbb 00000bbb

James Kuyper

Aug 24, 2020, 9:54:56 AM
On 8/24/20 2:22 AM, Juha Nieminen wrote:
...
> What are the requirement that the standard imposes onto the layout of
> bitfields inside a struct/class?


"An implementation may allocate any addressable storage unit large
enough to hold a bit-field. If enough space remains, a bit-field that
immediately follows another bit-field in a structure shall be packed
into adjacent bits of the same unit. If insufficient space remains,
whether a bit-field that does not fit is put into the next unit or
overlaps adjacent units is implementation-defined. The order of
allocation of bit-fields within a unit (high-order to low-order or
low-order to high-order) is implementation-defined. The alignment of the
addressable storage unit is unspecified.

A bit-field declaration with no declarator, but only a colon and a
width, indicates an unnamed bit-field. 126) As a special case, a bit
field structure member with a width of 0 indicates that no further
bit-field is to be packed into the unit in which the previous bit-field,
if any, was placed." (6.7.2.1 p11-12).

That's not enough specification to be of much use, but it's (barely)
enough to refute the claim that the standard imposes no requirements.
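The unnamed and zero-width forms in the quoted passage are about the only layout tools the standard does hand you; a small illustration, with invented names:

```cpp
#include <cassert>

struct Packed {
    unsigned a : 3;
    unsigned   : 2;   // unnamed bit-field: two padding bits, no member
    unsigned b : 3;
    unsigned   : 0;   // zero width: no further packing into this unit
    unsigned c : 4;   // guaranteed to start in a fresh allocation unit
};
```

With g++/clang, where the allocation unit here is an unsigned int, the zero-width field makes sizeof(Packed) two units rather than one, because c cannot share a, b's unit.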

Tim Rentsch

Sep 10, 2020, 10:29:38 AM
Juha Nieminen <nos...@thanks.invalid> writes:

> Tim Rentsch <tr.1...@z991.linuxsc.com> wrote:
>
>>> Indeed. The standard places pretty much no requirements on how
>>> the compiler will "pack" the bitfields inside the struct. [...]
>>
>> This claim is debatable. If you want to say it isn't clear what
>> the C++ standard /does/ require for packing adjacent bitfields,
>> that's fine, but saying there are no requirements (or even
>> "pretty much" no requirements) is at the very least open to
>> debate.
>
> What are the requirement that the standard imposes onto the layout of
> bitfields inside a struct/class?
>
> I freely admit that I have not read the standard in this regard, but it's
> my understanding that there are no requirements about how many bits are
> actually allocated for each bitfield, or the order in which consecutive
> bitfields are stored in the struct. It probably doesn't even mandate that
> the bits in one bitfield must be in consecutive bytes.

Your understanding would be better informed if you would
read what the C and C++ standards actually say before
making assertions about what they do or do not require.

Tim Rentsch

Sep 10, 2020, 10:31:10 AM
James Kuyper <james...@alumni.caltech.edu> writes:

> On 8/24/20 2:22 AM, Juha Nieminen wrote:
> ...
>
>> What are the requirement that the standard imposes onto the layout of
>> bitfields inside a struct/class?
>
> [citation from C++ standard]
>
> That's not enough specification to be of much use, but it's (barely)
> enough to refute the claim that the standard impose no requirements.

In mathematics being barely right is the same as being right.

James Kuyper

Sep 10, 2020, 10:52:41 AM
That's why I parenthesized "barely". As a practical matter, the fact
that the specification is almost useless renders the fact that it does
actually exist relatively unimportant.