//Assume char is 8 bits, short is 16 bits, int is 32 bits, and long is 64 bits
bitset<32,int> a; //Ok
bitset<33,int> b; //Error: 33 bits cannot fit in an int!
bitset<int> c; //Ok: same as bitset<sizeof(int)*CHAR_BIT,int>
bitset<char> d; //Ok
I don't think this is really a quality of implementation issue. There are many qualities that we may want to optimize for (space, performance) -- in this case we wish to optimize for space, whereas perhaps the implementation is optimizing for performance.
Did you know that sizeof(bool) ranges between 1 byte and 8 bytes depending on the implementation? I have no trouble believing that a library vendor would do whatever they thought sounded good to improve performance
-- and that a library consumer would balk at using a loosely defined and unpredictable data structure.
Large bool is not usually for performance; it’s for compatibility with C code using typedef int bool;. I’ve yet to see 8 bytes though. Which implementation is that?
The std::bitset interface has conversions to and from unsigned long
long. This strongly hints at using that type as the default underlying type.
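For illustration, the existing round trip looks like this (a minimal sketch using only the current std::bitset API):

#include <bitset>

int main() {
    std::bitset<32> flags(0xDEADBEEFULL);        // construct from unsigned long long
    unsigned long long raw = flags.to_ullong();  // convert back
    return raw == 0xDEADBEEFULL ? 0 : 1;
}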
Did the library team see a high priority interest in other
specializations? Obviously not - there are lots of other things to code
for a standard library.
The proposed change is from
template<size_t N>
class bitset;
to
template<size_t N, class T = /* implementation-defined */>
class bitset;
On x86, there probably isn't a performance difference.
My point is that there are years between releases of the standard, so we do
have enough time to perfect interfaces before they get set in stone. For
things that aren't ready, we have TSes and libraries outside the standard that
can experiment. I really think the standard should avoid requiring an ABI
break, and should accumulate such breaks into one major release rather than
doing it more frequently than once every 10 years.
On Thursday 11 September 2014 09:50:49 Matthew Fioravante wrote:
> Now back to the question of template <size_t N, typename T = /* impl
> defined */> std::bitset; Other than ABI breakage, does anyone see problems
> with this idea?
I still see one: there's no clear idea of what that T would do and what
the requirements would be. Let's assume you require it to be an integral type,
possibly even an unsigned one. What then? Is there a requirement on the
sizeof? Is there a requirement on the alignof?
bitset<64, uint32_t> would do what?
Why should that be allowed if
bitset<64, uint64_t> is more efficient?
struct Instruction {
  uint32_t ordinal;
  std::bitset<64, uint32_t> flags;
  uint32_t operands[5];
};
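// 4 (ordinal) + 8 (flags, two 32-bit words) + 20 (operands) = 32 bytes,
// assuming the hypothetical bitset<64, uint32_t> needs only 4-byte alignment.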
static_assert(sizeof(Instruction) == 32);
And what happens when
bitset<64, uint32_t> is more efficient than bitset<64, uint64_t>?
On Thursday 11 September 2014 13:34:28 Matthew Fioravante wrote:
> These are the requirements:
> sizeof(bitset<N,T>) == sizeof(T) * ((N + sizeof(T)*CHAR_BIT - 1) / (sizeof(T)*CHAR_BIT))
> alignof(bitset<N,T>) == alignof(T)
The alignof requirement already doesn't hold on 32-bit Linux/x86:
struct X { long long d; };
alignof(X) != alignof(long long)
Why? Because the ABI went to great lengths to avoid getting broken when the
compiler decided that aligning 8-byte types on 8-byte boundaries was more
efficient.
Imposing the alignment like you ask means you cannot replace a long long with
a bitset<64, long long> in a structure and still keep binary compatibility.
I'd stick to the sizeof requirement and let alignof be anything.
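As a sketch (mine, not from the thread) of what the sizeof requirement amounts to with ceiling division; the helper name is made up for illustration:

#include <climits>
#include <cstddef>
#include <cstdint>

// Bytes a hypothetical bitset<N, T> would occupy under the sizeof requirement.
template<std::size_t N, class T>
constexpr std::size_t bitset_storage_bytes() {
    return sizeof(T) * ((N + sizeof(T) * CHAR_BIT - 1) / (sizeof(T) * CHAR_BIT));
}

static_assert(bitset_storage_bytes<32, std::uint32_t>() == 4, "one 32-bit word");
static_assert(bitset_storage_bytes<64, std::uint32_t>() == 8, "two 32-bit words");
static_assert(bitset_storage_bytes<64, std::uint64_t>() == 8, "one 64-bit word");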
I'd say the standard should not mandate either way. It's really something that
should be QoI: an implementation could decide that space is more important
than speed (tightly constrained system) or the other way around (slow bit
instructions).
The standard could recommend what you said, but implementations should be free
to ignore the recommendation if they have overriding priorities.
It would be useful to be able to control the underlying integral type used for std::bitset. I have a use case now where I want a compact object to have a set of up to 32 bit flags, but because std::bitset<32> uses 64 bits on my system, I cannot use it (a sketch of the size problem follows the examples below). It would be nice if you could pass an integral type as a template parameter to bitset to fix its implementation to have the size and alignment constraints of that type.
bitset<32,int> a; //Ok
bitset<33,int> b; //Error: 33 bits cannot fit in an int!
bitset<int> c; //Ok: same as bitset<sizeof(int)*CHAR_BIT,int>
bitset<char> d; //Ok
An alternative to this would be to invent another type like std::small_bitset which behaves identically to bitset but uses as few bits as possible to implement the type. This alternative is less flexible and could lead to easy-to-miss efficiency bugs for odd sizes like std::small_bitset<24> due to overzealous space conservation.
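To make the original use case concrete, here is a sketch of the size problem (sizes are implementation-specific; the 8-byte std::bitset<32> matches the system described above but is not guaranteed):

#include <bitset>
#include <cstdint>

struct CompactRaw {
    std::uint32_t id;
    std::uint32_t flags;       // plain 32-bit flag word
};

struct CompactBitset {
    std::uint32_t id;
    std::bitset<32> flags;     // commonly stored in a 64-bit word
};

static_assert(sizeof(CompactRaw) == 8, "two 32-bit members, no padding");
// On the implementation described above, sizeof(CompactBitset) is 16:
// bitset<32> occupies 8 bytes and forces 8-byte alignment, adding padding.
// The standard does not guarantee this, so it is left as a comment.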