Controlling bitset's underlying type


Matthew Fioravante

Sep 9, 2014, 11:06:49 PM
to std-pr...@isocpp.org
It would be useful to be able to control the underlying integral type used for std::bitset. I have a use case now where I want a compact object to have a set of up to 32 bit flags but because std::bitset<32> uses 64 bits on my system, I cannot use it.

It would be nice if you could pass an integral type as template parameter to bitset to fix its implementation to have the size and alignment constraints of that type.

//Assume char is 8 bits, short is 16 bits, int is 32 bits, and long is 64 bits

bitset<32,int> a;  //Ok
bitset<33,int> b;  //Error: 33 bits cannot fit in an int!
bitset<int>    c;  //Ok: same as bitset<sizeof(int)*CHAR_BIT,int>
bitset<char>   d;  //Ok

An alternative to this would be to invent another type like std::small_bitset which behaves identically to bitset but uses as few bits as possible to implement the type. This alternative is less flexible and could lead to easy to miss efficiency bugs for odd sizes like std::small_bitset<24> due to overzealous space conservation.

Jens Maurer

Sep 10, 2014, 1:50:03 AM
to std-pr...@isocpp.org
On 09/10/2014 05:06 AM, Matthew Fioravante wrote:
> It would be useful to be able to control the underlying integral type used for std::bitset. I have a use case now where I want a compact object to have a set of up to 32 bit flags but because std::bitset<32> uses 64 bits on my system, I cannot use it.

As a quality-of-implementation matter, your standard library could certainly
specialize std::bitset such that

std::bitset<1..8> uses std::uint8_t
std::bitset<9..16> uses std::uint16_t
std::bitset<17..32> uses std::uint32_t
std::bitset<33..64> uses std::uint64_t

This seems to be purely a quality-of-implementation issue that doesn't
require an interface change.
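
A minimal sketch of that kind of size-based storage selection (illustrative only; the alias name is made up, and a real library would also have to weigh its existing ABI):

#include <cstddef>
#include <cstdint>
#include <type_traits>

// Pick the smallest fixed-width unsigned type that can hold N bits.
template <std::size_t N>
using bitset_word_t =
    std::conditional_t<(N <= 8),  std::uint8_t,
    std::conditional_t<(N <= 16), std::uint16_t,
    std::conditional_t<(N <= 32), std::uint32_t,
                                  std::uint64_t>>>;

static_assert(std::is_same<bitset_word_t<20>, std::uint32_t>::value,
              "20 bits fit in a 32-bit word");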

Jens

Brent Friedman

Sep 10, 2014, 2:47:46 AM
to std-pr...@isocpp.org
I don't think this is really a quality of implementation issue. There are many qualities that we may want to optimize for (space, performance)  -- in this case we wish to optimize for space, whereas perhaps the implementation is optimizing for performance.



David Krauss

Sep 10, 2014, 2:59:17 AM
to std-pr...@isocpp.org
On 2014–09–10, at 2:47 PM, Brent Friedman <fourt...@gmail.com> wrote:

> I don't think this is really a quality of implementation issue. There are many qualities that we may want to optimize for (space, performance) -- in this case we wish to optimize for space, whereas perhaps the implementation is optimizing for performance.

Some types are faster than others on some hardware implementations, but anything capable of string processing can fetch a char.

I doubt that any implementation ever tried flexible bitset representation and then rejected the change for performance reasons. Unfortunately, library ABIs create a lot of inertia, and such changes tend to be discouraged in the first place. Progress would be better if libraries branched early and maintained a couple (or more) ABIs at once. (Of course, DLL compatibility would be worse. I’m not a fan of DLLs, though.)

Brent Friedman

Sep 10, 2014, 3:34:57 AM
to std-pr...@isocpp.org
Did you know that sizeof(bool) ranges between 1 byte and 8 bytes depending on the implementation? I have no trouble believing that a library vendor would do whatever they thought sounded good to improve performance -- and that a library consumer would balk at using a loosely defined and unpredictable data structure.

David Krauss

Sep 10, 2014, 7:28:49 AM
to std-pr...@isocpp.org
On 2014–09–10, at 3:34 PM, Brent Friedman <fourt...@gmail.com> wrote:

> Did you know that sizeof(bool) ranges between 1 byte and 8 bytes depending on the implementation? I have no trouble believing that a library vendor would do whatever they thought sounded good to improve performance

Large bool is not usually for performance; it’s for compatibility with C code using typedef int bool;. I’ve yet to see 8 bytes though. Which implementation is that?

> -- and that a library consumer would balk at using a loosely defined and unpredictable data structure.

Hyperbole.

Matthew Fioravante

Sep 10, 2014, 8:51:24 AM
to std-pr...@isocpp.org
Libraries may choose to optimize for performance or space, or they may choose a particular size for any reason whatsoever. Once the library has chosen what it will do, that choice is usually fixed so as not to break ABI compatibility. If I have strict space requirements and want to remain portable, I cannot rely on the library, as I have no control over what underlying type it will use.

I'd really like to stop using unsigned ints for this purpose and just stick to std::bitset. This issue is in the way of that.


On Wednesday, September 10, 2014 7:28:49 AM UTC-4, David Krauss wrote:

> Large bool is not usually for performance; it’s for compatibility with C code using typedef int bool;. I’ve yet to see 8 bytes though. Which implementation is that?

On GCC x86_64 with libstdc++, sizeof(std::bitset<1>) == 8; you can try it on gcc.godbolt.org. I was very disappointed by this. Maybe the implementors were just lazy? I have a hard time believing that doing bitset operations on a 32-bit int is slow.
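
For reference, a quick way to check this on a given toolchain (the output depends entirely on the standard library; on x86_64 libstdc++ all three values print as 8):

#include <bitset>
#include <cstdio>

int main() {
    // sizeof yields std::size_t, so %zu is the right format specifier.
    std::printf("%zu %zu %zu\n",
                sizeof(std::bitset<1>),
                sizeof(std::bitset<32>),
                sizeof(std::bitset<64>));
}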

Bo Persson

Sep 10, 2014, 12:26:07 PM
to std-pr...@isocpp.org
The std::bitset interface has conversions to and from unsigned long
long. This strongly hints at using that type as the default underlying type.
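
For context, these are the conversions in question; both are part of the standard std::bitset interface (to_ullong() since C++11):

#include <bitset>
#include <cassert>

int main() {
    std::bitset<32> b(0xDEADBEEFull);      // construct from unsigned long long
    unsigned long long v = b.to_ullong();  // convert back
    assert(v == 0xDEADBEEFull);
}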

Did the library team see a high priority interest in other
specializations? Obviously not - there are lots of other things to code
for a standard library.


Bo Persson


Matthew Fioravante

Sep 10, 2014, 12:52:03 PM
to std-pr...@isocpp.org, b...@gmb.dk


On Wednesday, September 10, 2014 12:26:07 PM UTC-4, Bo Persson wrote:
> The std::bitset interface has conversions to and from unsigned long
> long. This strongly hints at using that type as the default underlying type.

I wouldn't take the existence of these methods as a hint that unsigned long long should be used under the hood. Converting a smaller integer to unsigned long long is easy. I would assume they chose the largest integral type in order to support integral conversion with a single method that works for most bitset instantiations.

It is more common to optimize for speed than for size, so a good default for std::bitset<M> is probably to use uint_fastN_t, where N is the smallest integer size (8, 16, 32, 64) >= M. It would be nice if the standard mandated that implementations optimize for speed; then we would have a usable default with known performance characteristics on all platforms.

Giving users the ability to supply their own type would let them override this default and optimize for space.
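
A rough sketch of that combination; the alias and the defaulted second parameter are hypothetical, not existing library names:

#include <cstddef>
#include <cstdint>
#include <type_traits>

// Speed-oriented default: the uint_fastN_t whose width covers M bits.
template <std::size_t M>
using fast_bitset_word_t =
    std::conditional_t<(M <= 8),  std::uint_fast8_t,
    std::conditional_t<(M <= 16), std::uint_fast16_t,
    std::conditional_t<(M <= 32), std::uint_fast32_t,
                                  std::uint_fast64_t>>>;

// Proposed shape of the interface: space-conscious users override T.
template <std::size_t M, class T = fast_bitset_word_t<M>>
class bitset;  // sketch of the proposal, not the current std::bitset

// e.g. bitset<32, std::uint32_t> flags;  // 4 bytes under the proposed rules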
 

> Did the library team see a high priority interest in other
> specializations? Obviously not - there are lots of other things to code
> for a standard library.

Understandable, as working on the standard library is no small task. The problem is that choosing the size affects the ABI. This is not something you can just get working now and optimize later, because ABI breakage is a big deal. It's why we're still stuck with CoW std::string in libstdc++.

Brent Friedman

Sep 10, 2014, 12:54:31 PM
to std-pr...@isocpp.org
I don't think it's entirely hyperbolic to say that bitset has unpredictable characteristics. This issue has implications for structure packing, alignment, and of course memory utilization. These, in turn, impact our ability to have nice cross-platform implementations of reflection and serialization. Faced with the above issues, you can see why many developers would rather implement their own bitset than leave their fate in the hands of hope or drastically increase the complexity of their cross-platform build systems.

Thiago Macieira

Sep 10, 2014, 5:08:01 PM
to std-pr...@isocpp.org
On Wednesday 10 September 2014 05:51:24 Matthew Fioravante wrote:
> On GCC x86_64 with libstdc++, sizeof(std::bitset<1>) == 8; you can try it on
> gcc.godbolt.org. I was very disappointed by this. Maybe the implementors
> were just lazy? I have a hard time believing that doing bitset operations
> on a 32-bit int is slow.

On x86, there probably isn't a performance difference.

On other architectures, accessing word-sized chunks is often faster. The
std::bitset implementation uses "unsigned long" for all architectures and GCC
makes "long" have the same size as the machine's word (except for Windows
builds).

Anyway, bitset was discussed a few months ago on this list and it was left
alone.

Changing bitset now is a no-no. You can't add new template arguments. At most,
you'd need to add a new type to the Standard Library.
--
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
Software Architect - Intel Open Source Technology Center
PGP/GPG: 0x6EF45358; fingerprint:
E067 918B B660 DBD1 105C 966C 33F5 F005 6EF4 5358

Jens Maurer

Sep 10, 2014, 5:50:05 PM
to std-pr...@isocpp.org
On 09/10/2014 11:07 PM, Thiago Macieira wrote:
> You can't add new template arguments.

Why?

Which currently valid program would be invalidated by extending
the current

template<size_t N>
class bitset;

to

template<size_t N, class T = /* implementation-defined */>
class bitset;

?

(Yes, it's detectable using template template-parameters,
but that's a rather arcane use of a seldom-used feature.)
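
For the curious, a sketch of that detection (what happens once a defaulted extra parameter is present depends on the compiler's template-template-argument matching rules, so treat this as illustrative):

#include <bitset>
#include <cstddef>
#include <type_traits>

// True only for instantiations of a template declared with exactly one
// size_t parameter, which is how std::bitset is currently declared.
template <class X>
struct made_from_one_size_t : std::false_type {};

template <template <std::size_t> class B, std::size_t N>
struct made_from_one_size_t<B<N>> : std::true_type {};

static_assert(made_from_one_size_t<std::bitset<8>>::value,
              "std::bitset currently takes a single size_t parameter");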

Jens

Zhihao Yuan

Sep 10, 2014, 6:08:43 PM
to std-pr...@isocpp.org
On Wed, Sep 10, 2014 at 5:50 PM, Jens Maurer <Jens....@gmx.net> wrote:

  template<size_t N>
  class bitset;

to

  template<size_t N, class T = /* implementation-defined */>
  class bitset;

This changes the mangled name.

Whether it's worth the change is another question.

--
Zhihao Yuan, ID lichray
The best way to predict the future is to invent it.
___________________________________________________
4BSD -- http://bit.ly/blog4bsd

Thiago Macieira

Sep 10, 2014, 6:12:55 PM
to std-pr...@isocpp.org
Any program that uses the bitset class in a function or template parameter.

This changes the external name (mangled name) of the type.

Matthew Fioravante

Sep 10, 2014, 10:01:44 PM
to std-pr...@isocpp.org
On Wednesday, September 10, 2014 5:08:01 PM UTC-4, Thiago Macieira wrote:

> On x86, there probably isn't a performance difference.

That is what I would suspect as well. In that case, x86 should use uint32_t to save space when N <= 32. Smaller objects mean more things can fit in a single cache line. This is a big deal for performance.

Would it be so horrible to sometimes break ABI compatibility between some major revisions of the language? What I would classify as major revisions are C++98, C++11, and C++17, with C++03 and C++14 being minor revisions.

bitset is a fundamental and extremely useful tool with an interface that is lacking. Adding additional types such as std::bitset2, std::bitset_fast / std::bitset_small, std::int_bitset, etc. seems counterproductive.

It's already pretty much a bad idea to mix C++11 and non-C++11 binaries as it is. You basically have to compile everything in C++11 mode or nothing in C++11 mode. While it is possible to link such binaries together, it requires a lot of special cases and in general is not worth it.

Jens Maurer

Sep 11, 2014, 2:21:27 AM
to std-pr...@isocpp.org
On 09/11/2014 12:12 AM, Thiago Macieira wrote:
> On Wednesday 10 September 2014 23:50:01 Jens Maurer wrote:
>> On 09/10/2014 11:07 PM, Thiago Macieira wrote:
>>> You can't add new template arguments.
>>
>> Why?
>>
>> Which currently valid program would be invalidated by extending
>> the current

> Any program that uses the bitset class in a function or template parameter.
>
> This changes the external name (mangled name) of the type.

Ah, so it's (only) about ABI compatibility, not about source
compatibility.

I believe we'll break ABI compatibility for about any major
C++ release (C++11, C++17), so that seems like a minor concern
to me.

Thanks,
Jens

Thiago Macieira

Sep 11, 2014, 3:12:58 AM
to std-pr...@isocpp.org
On Wednesday 10 September 2014 19:01:43 Matthew Fioravante wrote:
> Would it be so horrible to sometimes break ABI compatibility between some
> major revisions of the language?

Yes.

Thiago Macieira

Sep 11, 2014, 3:21:47 AM
to std-pr...@isocpp.org
On Thursday 11 September 2014 08:21:23 Jens Maurer wrote:
> I believe we'll break ABI compatibility for about any major
> C++ release (C++11, C++17), so that seems like a minor concern
> to me.

I don't believe that's in the agenda. The standard does not even discuss ABI,
but has so far managed to only make incremental changes that compiler
implementors and library writers can keep stable ABI for (except for
std::string with libstdc++).

It also goes against Herb Sutter's proposal for a more stable ABI.

No, I believe maintaining the ABI is actually very much the goal here.

David Krauss

Sep 11, 2014, 7:16:58 AM
to std-pr...@isocpp.org

On 2014–09–11, at 3:21 PM, Thiago Macieira <thi...@macieira.org> wrote:

> It also goes against Herb Sutter's proposal for a more stable ABI.

Sutter proposed a degree of ABI versioning to enable future progress, not a commitment to eternal stagnation. The overall goal is to get the ABIs unstuck.

> No, I believe maintaining the ABI is actually very much the goal here.

The proposal here only calls for fixing a longstanding efficiency issue. Library ABIs, DLL distribution, and their brittleness are library implementers’ own fault. At some point the band-aid has to be pulled.

Fortunately, this particular issue can be solved opportunistically at any point in the future. It’s a good idea to report it as a bug to your implementation, so they can record it as “blocking” the longstanding issue of ABI revision.

Jens Maurer

Sep 11, 2014, 8:55:09 AM
to std-pr...@isocpp.org
On 09/11/2014 09:21 AM, Thiago Macieira wrote:
> On Thursday 11 September 2014 08:21:23 Jens Maurer wrote:
>> I believe we'll break ABI compatibility for about any major
>> C++ release (C++11, C++17), so that seems like a minor concern
>> to me.
>
> I don't believe that's in the agenda. The standard does not even discuss ABI,
> but has so far managed to only make incremental changes that compiler
> implementors and library writers can keep stable ABI for (except for
> std::string with libstdc++).

Well, for the signature / name mangling issue, we now have inline
namespaces, which should help.

I believe we did break the ABI for C++11 (new allocator design seems a candidate),
and it's conceivable we'll break it again for C++17 (conceptualized standard
library containers come to mind).

Jens

Ville Voutilainen

Sep 11, 2014, 9:11:59 AM
to std-pr...@isocpp.org
On 11 September 2014 15:55, Jens Maurer <Jens....@gmx.net> wrote:
> I believe we did break the ABI for C++11 (new allocator design seems a candidate),
> and it's conceivable we'll break it again for C++17 (conceptualized standard
> library containers come to mind).


Yep, there are other things that resulted in ABI breaks, like the time_put
in locale, and I think move operations for iostreams can result in ABI
breakage as well. Jonathan Wakely knows more about the various details that
cause ABI breakage in C++11. C++17 is likely to result in ABI breaks for the
things that were added to the Fundamentals TS, if we manage to roll those
changes up into C++17. So while we try to avoid such ABI breaks, they do
happen in major C++ releases sometimes.

Thiago Macieira

Sep 11, 2014, 12:28:44 PM
to std-pr...@isocpp.org
On Thursday 11 September 2014 14:55:04 Jens Maurer wrote:
> On 09/11/2014 09:21 AM, Thiago Macieira wrote:
> > On Thursday 11 September 2014 08:21:23 Jens Maurer wrote:
> >> I believe we'll break ABI compatibility for about any major
> >> C++ release (C++11, C++17), so that seems like a minor concern
> >> to me.
> >
> > I don't believe that's in the agenda. The standard does not even discuss
> > ABI, but has so far managed to only make incremental changes that
> > compiler implementors and library writers can keep stable ABI for (except
> > for std::string with libstdc++).
>
> Well, for the signature / name mangling issue, we now have inline
> namespaces, which should help.

Not in this case. If I have a current std::bitset<32> in my class and call a
library method with a pointer to my class, then std::bitset<32> can't change
sizes.

An inline namespace would replace a binary compatibility issue with a source
compatibility one. I'd need to replace all uses of std::bitset<32> in my
sources with std::oldbitset<32> (assuming oldbitset is a typedef to the actual
bitset type and doesn't get overridden in the inline namespace).
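
Roughly what that versioning looks like in code (the namespace and the default are invented purely for illustration):

#include <cstddef>

namespace mylib {                 // stand-in for a library's namespace
  namespace v1 {                  // old layout; old binaries keep linking against it
    template <std::size_t N> class bitset { /* unsigned long storage */ };
  }
  inline namespace v2 {           // what unqualified mylib::bitset now names
    template <std::size_t N, class T = unsigned long>
    class bitset { /* storage chosen from T */ };
  }
}

// mylib::bitset<32>     -> v2::bitset<32, unsigned long>, a new mangled name
// mylib::v1::bitset<32> -> the old type, which old sources would have to
//                          spell explicitly, as noted above.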

> I believe we did break the ABI for C++11 (new allocator design seems a
> candidate), and it's conceivable we'll break it again for C++17
> (conceptualized standard library containers come to mind).

I have no clue about allocators, so I can't comment on what might have broken.
As for C++17, since it isn't out, there's no saying what will break.

My point is that there are years between releases of the standard, so we do
have enough time to perfect interfaces before they get set in stone. For
things that aren't ready, we have TSes and libraries outside the standard that
can experiment. I really think the standard should really, really avoid
requiring an ABI break, should accumulate any such breaks into one major
release, and should not do it more frequently than once every 10 years.

Matthew Fioravante

Sep 11, 2014, 12:50:49 PM
to std-pr...@isocpp.org


On Thursday, September 11, 2014 12:28:44 PM UTC-4, Thiago Macieira wrote:

> My point is that there are years between releases of the standard, so we do
> have enough time to perfect interfaces before they get set in stone. For
> things that aren't ready, we have TSes and libraries outside the standard that
> can experiment. I really think the standard should really, really avoid
> requiring an ABI break, should accumulate any such breaks into one major
> release, and should not do it more frequently than once every 10 years.

We have time to think about new features. We have time to test them in a TS. Unfortunately, sometimes even that is not enough and a deficiency is discovered after the feature has made it into the standard. I don't think it's realistic to assume ABI breakage can never occur. It's unfortunate, but it can and will happen from time to time.

Now back to the question of template <size_t N, typename T = /* impl defined */> std::bitset; Other than ABI breakage, does anyone see problems with this idea?


Thiago Macieira

Sep 11, 2014, 3:06:50 PM
to std-pr...@isocpp.org
On Thursday 11 September 2014 09:50:49 Matthew Fioravante wrote:
> Now back to the question of template <size_t N, typename T = /* impl
> defined */> std::bitset; Other than ABI breakage, does anyone see problems
> with this idea?

I still see one: there's still no clear idea of what that T would do and what
the requirements would be. Let's assume you require it to be an integral type,
possibly even an unsigned one. What then? Is there a requirement on the
sizeof? Is there a requirement on the alignof?

bitset<64, uint32_t> would do what? Why should that be allowed if
bitset<64, uint64_t> is more efficient? And what happens when
bitset<64, uint32_t> is more efficient than bitset<64, uint64_t>?

Matthew Fioravante

Sep 11, 2014, 4:34:28 PM
to std-pr...@isocpp.org


On Thursday, September 11, 2014 3:06:50 PM UTC-4, Thiago Macieira wrote:
> On Thursday 11 September 2014 09:50:49 Matthew Fioravante wrote:
> > Now back to the question of template <size_t N, typename T = /* impl
> > defined */> std::bitset; Other than ABI breakage, does anyone see problems
> > with this idea?
>
> I still see one: there's still no clear idea of what that T would do and what
> the requirements would be. Let's assume you require it to be an integral type,
> possibly even an unsigned one. What then? Is there a requirement on the
> sizeof? Is there a requirement on the alignof?

These are the requirements:
sizeof(bitset<N,T>) == sizeof(T) * ((N + sizeof(T) * CHAR_BIT - 1) / (sizeof(T) * CHAR_BIT))
alignof(bitset<N,T>) == alignof(T)

The underlying bitset doesn't even have to be implemented using T; it just needs to satisfy the alignment and size requirements.
 

> bitset<64, uint32_t> would do what?

This would be like a hand-rolled bitset using uint32_t[2].
 
> Why should that be allowed if
> bitset<64, uint64_t> is more efficient?

The user wants to have a smaller alignment constraint, so that they can pack the bitset tightly into a small object.

For example:
struct Instruction {
  uint32_t ordinal;
  std::bitset<64, uint32_t> flags;
  uint32_t operands[5];
};

static_assert(sizeof(Instruction) == 32);


> And what happens when
> bitset<64, uint32_t> is more efficient than bitset<64, uint64_t>?

This seems unlikely, but if it is the case then std::bitset<64> on that platform should be implemented as std::bitset<64, uint32_t>.

In addition to the new syntax, the standard should mandate that the default bitset<N> first optimize for speed and then, if all else is equal, optimize for space. I say standard-mandated because leaving the issue as a QoI problem means that some implementations would optimize for space, some for speed, some for a balance between the two, and some not at all.

The default should be speed because the optimal integer size is different on each platform. The user cannot know this without testing and maintaining ifdefs for each platform. Platform-specific choices like this should be hidden inside the standard library.

If the user has space requirements, then they can fix the type as needed.

Thiago Macieira

Sep 11, 2014, 7:03:39 PM
to std-pr...@isocpp.org
On Thursday 11 September 2014 13:34:28 Matthew Fioravante wrote:
> These are the requirements:
> sizeof(bitset<N,T>) == sizeof(T) * ((N + sizeof(T) * CHAR_BIT - 1) / (sizeof(T) * CHAR_BIT))
> alignof(bitset<N,T>) == alignof(T)

This doesn't work on Linux/x86 already.

struct X { long long d; };
alignof(X) != alignof(long long)

Why? Because the ABI went to great lengths to avoid getting broken when the
compiler decided that aligning 8-byte types on 8 byte boundaries was more
efficient.

Imposing the alignment like you ask means you cannot replace a long long with
a bitset<64, long long> in a structure and still keep binary compatibility.

I'd stick to the sizeof requirement and let alignof be anything.

> > bitset<64, uint32_t> would do what?
>
> This would be like a hand rolled bitset using uint32_t[2];
>
> > Why should that be allowed if
> > bitset<64, uint64_t> is more efficient?
>
> The user wants to have a smaller alignment constraint, so that they can
> pack the bitset tightly into a small object.
>
> For example:
> struct Instruction {
> uint32_t ordinal;
> std::bitset<64, uint32_t> flags;
> uint32_t operands[5];
> };
>
> static_assert(sizeof(Instruction) == 32);

Fair enough.

> In addition to the new syntax, the standard should mandate that the default
> bitset<N> first optimize for speed and then, if all else is equal, optimize
> for space. I say standard-mandated because leaving the issue as a QoI
> problem means that some implementations would optimize for space, some for
> speed, some for a balance between the two, and some not at all.

I'd say the standard should not mandate either way. It's really something that
should be QoI: an implementation could decide that space is more important
than speed (tightly constrained system) or the other way around (slow bit
instructions).

The standard could recommend what you said, but implementations should be free
to ignore the recommendation if they have overriding priorities.

Matthew Fioravante

Sep 11, 2014, 8:17:42 PM
to std-pr...@isocpp.org


On Thursday, September 11, 2014 7:03:39 PM UTC-4, Thiago Macieira wrote:
> On Thursday 11 September 2014 13:34:28 Matthew Fioravante wrote:
> > These are the requirements:
> > sizeof(bitset<N,T>) == sizeof(T) * ((N + sizeof(T) * CHAR_BIT - 1) / (sizeof(T) * CHAR_BIT))
> > alignof(bitset<N,T>) == alignof(T)
>
> This doesn't work on Linux/x86 already.
>
> struct X { long long d; };
> alignof(X) != alignof(long long)
>
> Why? Because the ABI went to great lengths to avoid getting broken when the
> compiler decided that aligning 8-byte types on 8 byte boundaries was more
> efficient.
>
> Imposing the alignment like you ask means you cannot replace a long long with
> a bitset<64, long long> in a structure and still keep binary compatibility.

Great catch. How about we relax the requirements in this way:

template <typename T, size_t N>
struct wrap { T x[N]; };

template <typename T, size_t N>
constexpr size_t struct_size() { return sizeof(wrap<T, N>); }

template <typename T, size_t N>
constexpr size_t struct_align() { return alignof(wrap<T, N>); }

static_assert(alignof(bitset<N,T>) == struct_align<T,N>());
static_assert(sizeof(bitset<N,T>) ==
              struct_size<T, (N + sizeof(T) * CHAR_BIT - 1) / (sizeof(T) * CHAR_BIT)>());

> I'd stick to the sizeof requirement and let alignof be anything.

I think we should try to anchor it more. The additional strength would make it more reliable for binary I/O, IPC, GPU buffer packing, marshalling, and the like.

Another set of int types in <cstdint> could give exact alignment and size requirements. (This would be a different idea to explore.)

using uint32a32s_t = /* implementation defined */;


> I'd say the standard should not mandate either way. It's really something that
> should be QoI: an implementation could decide that space is more important
> than speed (tightly constrained system) or the other way around (slow bit
> instructions).
>
> The standard could recommend what you said, but implementations should be free
> to ignore the recommendation if they have overriding priorities.

This seems reasonable. There should at least be official wording in the standard.

Matthew Fioravante

Sep 11, 2014, 11:44:45 PM
to std-pr...@isocpp.org
Actually, I think it might still be ok to keep the original constraints.

alignof(T) == alignof(std::bitset<N,T>);

It can be implemented on Linux i386 as

template <size_t N, typename T>
class alignas(T) bitset {
  T x[(N + sizeof(T) * CHAR_BIT - 1) / (sizeof(T) * CHAR_BIT)];
};

Thiago Macieira

Sep 12, 2014, 2:40:46 AM
to std-pr...@isocpp.org
On Thursday 11 September 2014 20:44:45 Matthew Fioravante wrote:
> template <size_t N, typename T>
> class alignas(T) bitset {
>   T x[(N + sizeof(T) * CHAR_BIT - 1) / (sizeof(T) * CHAR_BIT)];
> };

Which means you can't replace a uint64_t in a struct with a
bitset<64,uint64_t>. You may argue that this was never guaranteed to work
anyway and that's probably right.

Sean Middleditch

unread,
Sep 13, 2014, 5:32:11 PM9/13/14
to std-pr...@isocpp.org
Just to throw something crazy into the mix:

What about a bitset of a strong enum? Such that

  enum class my_enum { ... };
  std::bitset<my_enum> bits;

is equivalent to:

  enum class my_enum { ... };
  std::bitset<sizeof(my_enum) * CHAR_BIT, std::underlying_type_t<my_enum>> bits;

This might play well with a standard library set of extensions for treating strong enums as flag sets (e.g. allowing the bitwise operators on them to produce a library type like std::bitset<>, assuming that's determined to be better than a new std::flags<> or the like). We had a discussion about this a year or two ago, which I started based on C#'s approach (an attribute that modifies enum behavior slightly), and I now use a library traits approach that specializes std::is_flagset<> for an enum and then overloads the bitwise operators for any enumeration with std::is_flagset<> of true. I just reuse the base enum type as the storage, which has the disadvantage of allowing a "strong" enum to contain values not in its formal list, though at least MSVS's debugger seems to recognize and handle this in a very useful and friendly manner.
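
A condensed sketch of that traits approach (is_flagset here is a free trait standing in for the std::is_flagset<> described above, and the enum is made up):

#include <type_traits>

template <class E>
struct is_flagset : std::false_type {};

enum class file_mode : unsigned { read = 1, write = 2, append = 4 };
template <> struct is_flagset<file_mode> : std::true_type {};

// Bitwise OR is enabled only for enums that opt in via is_flagset.
template <class E>
constexpr typename std::enable_if<is_flagset<E>::value, E>::type
operator|(E a, E b) {
    using U = typename std::underlying_type<E>::type;
    return static_cast<E>(static_cast<U>(a) | static_cast<U>(b));
}

// e.g. file_mode m = file_mode::read | file_mode::write;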

On Tuesday, September 9, 2014 8:06:49 PM UTC-7, Matthew Fioravante wrote:
> It would be useful to be able to control the underlying integral type used for std::bitset. I have a use case now where I want a compact object to have a set of up to 32 bit flags but because std::bitset<32> uses 64 bits on my system, I cannot use it.
>
> It would be nice if you could pass an integral type as template parameter to bitset to fix its implementation to have the size and alignment constraints of that type.
>
> //Assume char is 8 bits, short is 16 bits, int is 32 bits, and long is 64 bits
>
> bitset<32,int> a;  //Ok
> bitset<33,int> b;  //Error: 33 bits cannot fit in an int!
> bitset<int>    c;  //Ok: same as bitset<sizeof(int)*CHAR_BIT,int>
> bitset<char>   d;  //Ok
>
> An alternative to this would be to invent another type like std::small_bitset which behaves identically to bitset but uses as few bits as possible to implement the type. This alternative is less flexible and could lead to easy to miss efficiency bugs for odd sizes like std::small_bitset<24> due to overzealous space conservation.