
binary format of the number.


J.W.

Nov 27, 2008, 1:03:00 PM

In C++, we can use hex format to represent a number; for example, for the
number 70, we can use 0x46. But is there a way to represent a number using
binary format, something similar to 0x?

Thanks,
J.W.

Erik Wikström

Nov 27, 2008, 1:30:51 PM

No, but you can use octal if you like: all integer literals starting with 0
are taken to be in octal form (0106, for instance, is 70, i.e. 0x46).

--
Erik Wikström

Sherm Pendley

Nov 27, 2008, 1:33:31 PM

"J.W." <jsunne...@gmail.com> writes:

Oddly enough, no, there isn't. We can declare hex-format literals with a
leading 0x, and octal-format literals with a leading 0, but there's no
standard way to declare a binary-format literal.

That's always seemed to me a strange thing to omit from the language.

sherm--

--
My blog: http://shermspace.blogspot.com
Cocoa programming in Perl: http://camelbones.sourceforge.net

Bill

Nov 27, 2008, 2:32:26 PM

"J.W." <jsunne...@gmail.com> wrote in message
news:492ee0d4$0$90269$1472...@news.sunsite.dk...

0x is designed as a shortcut--so that you don't have to do that, sort of
like specifying C-style character strings with double quotes. "Octal" is
also provided, as you may know, by omitting the x in 0x.

Why would you prefer to write down strings of 0s and 1s anyway? Seems like
it would increase your chance of error. If you really wanted to, you could
write your own function which translates binary character strings to an
integral data type. For instance, int binary2Int(const char *).
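
(For illustration, a minimal sketch of such a function; the body here is an
assumption, not the poster's code:)

// Convert a string of '0'/'1' characters to an int.
// Parsing stops at the first character that isn't a binary digit.
int binary2Int(const char *s)
{
    int result = 0;
    for ( ; *s == '0' || *s == '1'; ++s)
        result = result * 2 + (*s - '0');
    return result;
}

// e.g. binary2Int("1000110") == 70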

Bill



Tarmo Kuuse

Nov 28, 2008, 4:19:17 AM

Bill wrote:
> Why would you prefer to write down strings of 0s and 1s anyway? Seems like
> it would increase your chance of error. If you really wanted to, you could
> write your own function which translates binary character strings to an
> integral data type. For instance, int binary2Int(const char *).

When working at low level (embedded, device drivers, ...), bit fields
are scattered throughout code. It is quite annoying to convert binary to
hexadecimal and vice versa for 32-bit values.

Let's see now, is bit 22 set in mask 0x03C00000? OK, give me a minute...

--
Tarmo

blargg

Nov 28, 2008, 10:47:16 AM

Is bit 22 set in this mask? OK, give me a minute...

00000011110000000000000000000000

How about this one? Simple, yes.

(1L << 25) | (1L << 24) | (1L << 23) | (1L << 22)

If you really want binary, you can write a macro (or template) that
accepts four 8-bit chunks, something like
BIN(00000011,11000000,00000000,00000000).

Hendrik Schober

Nov 28, 2008, 2:40:45 PM

ISTR that boost has some template that yields an integer
const from a string of 0s and 1s. ICBWT.

> J.W.

Schobi

Bill

Nov 28, 2008, 5:51:34 PM

"Tarmo Kuuse" <tarmo...@mail.ee> wrote in message
news:ggod2m$3ad$1...@aioe.org...

I think I would be MUCH less likely to make an error typing in hexadecimal
strings. The idea of typing in a string with a single 1 in the 22nd place
gives me a headache just thinking about it. I would need to
quadruple-check it; in hex, I would only need to double-check it.
Depending on the storage requirements, I might prefer an array whose
members are of type bool---then there is no ambiguity about what is meant
by the 22nd bit.
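
(A tiny sketch of that alternative, for illustration, assuming 32 flags:)

int main()
{
    bool bits[32] = { false };  // all flags cleared
    bits[22] = true;            // no ambiguity about which bit is meant
}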

Bill


Kai-Uwe Bux

Nov 28, 2008, 6:09:39 PM

Bill wrote:

>
> "Tarmo Kuuse" <tarmo...@mail.ee> wrote in message
> news:ggod2m$3ad$1...@aioe.org...
>> Bill wrote:
>>> Why would you prefer to write down strings of 0s and 1s anyway? Seems
>>> like it would increase your chance of error. If you really wanted to,
>>> you could write your own function which translates binary character
>>> strings to an integral data type. For instance, int binary2Int(const
>>> char *).
>>
>> When working at low level (embedded, device drivers, ...), bit fields are
>> scattered throughout code. It is quite annoying to convert binary to
>> hexadecimal and vice versa for 32-bit values.
>>
>> Let's see now, is bit 22 set in mask 0x03C00000? OK, give me a minute...
>>
>> --
>> Tarmo
>
> I think I would be MUCH less likely to make an error typing in hexadecimal
> strings. The idea of typing in a string with a single 1 in the 22nd
> place
> gives me a headache just thinking about it. I would need to
> quadruple-check it. In hex, I would only need to double-check it.

[snip]

I agree, but that is only because we are dealing with 32-bit numbers. With
8-bit numbers, things are different. I can see which bits are set in

01001110

right away. With a hex number such as

d3

I have to think, which is a BadThing(tm).


Best

Kai-Uwe Bux

James Kanze

Nov 29, 2008, 5:20:02 AM

On Nov 29, 12:09 am, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
> Bill wrote:
> > "Tarmo Kuuse" <tarmo.ku...@mail.ee> wrote in message

> >news:ggod2m$3ad$1...@aioe.org...
> >> Bill wrote:
> >>> Why would you prefer to write down strings of 0s and 1s
> >>> anyway?  Seems like it would increase your chance of
> >>> error.   If you really wanted to, you could write your own
> >>> function which translates binary character strings to an
> >>> integral data type.  For instance, int binary2Int(const
> >>> char *).

> >> When working at low level (embedded, device drivers, ...),
> >> bit fields are scattered throughout code. It is quite
> >> annoying to convert binary to hexadecimal and vice versa
> >> for 32-bit values.

> >> Let's see now, is bit 22 set in mask 0x03C00000? OK, give me a minute...

> > I think I would be MUCH less likely to make an error typing
> > in hexadecimal strings.  The idea of typing in a string
> > with a single 1 in the 22nd place gives me a headache just
> > thinking about it.  I would need to quadruple-check it.  In
> > hex, I would only need to double-check it.

> [snip]

> I agree, but that is only because we are dealing with 32bit
> numbers. With 8bit numbers, things are different. I can see
> which bits are set in

>   01001110

> right away. With a hex number such as

>   d3

> I have to think, which is a BadThing(tm).

Does it really matter? Depending on use, sometimes one, and
sometimes the other, may be easier. If C++ were to add binary
literals, it certainly wouldn't require their use; you'd use
whichever one seemed appropriate to the context.

I suspect that the main reason C++ doesn't support binary
literals is because C doesn't, and the main reason C doesn't is
because no one has proposed them to the committee.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Gennaro Prota

Nov 29, 2008, 10:53:51 AM

James Kanze wrote:
> I suspect that the main reason C++ doesn't support binary
> literals is because C doesn't, and the main reason C doesn't is
> because no one has proposed them to the committee.

They were proposed for C++ as early as 1993, together with
other lexical-related improvements (n0259). I don't know why
they were rejected, but the general committee attitude seems to
be way less conservative now: in fact, the current draft allows
defining a "literal operator" (which is not an operator; very
bad naming), such as:

unsigned long operator "" b( char const * ) ;

or a "literal operator template":

template< class ... Types >
unsigned long operator "" b() ;

and have whichever of them you define called for literals with
the b suffix:

11000111b

Hmm.
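
(For illustration, a sketch of how such a literal operator template might
compute the value at compile time; this is an assumption based on the draft
wording, not code from any proposal, and it uses the suffix _b because
suffixes without a leading underscore are reserved in the final wording:)

template< char... Digits >
struct bin_value;

template<>
struct bin_value<> {
    static const unsigned long result = 0;
};

template< char High, char... Rest >
struct bin_value< High, Rest... > {
    // shift the leading digit past the remaining ones;
    // note: no check that every digit is 0 or 1
    static const unsigned long result =
        ( (unsigned long)( High - '0' ) << sizeof...(Rest) )
        | bin_value< Rest... >::result;
};

template< char... Digits >
constexpr unsigned long operator "" _b()
{
    return bin_value< Digits... >::result;
}

// usage sketch: 11000111_b == 0xC7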

--
Gennaro Prota | name.surname yahoo.com
Breeze C++ (preview): <https://sourceforge.net/projects/breeze/>
Do you need expertise in C++? I'm available.

James Kanze

Nov 29, 2008, 2:46:18 PM

On Nov 29, 4:53 pm, Gennaro Prota <gennaro/pr...@yahoo.com> wrote:
> James Kanze wrote:
> > I suspect that the main reason C++ doesn't support binary
> > literals is because C doesn't, and the main reason C doesn't
> > is because no one has proposed them to the committee.

> They were proposed for C++, as early as in 1993, together with
> other lexical-related improvements (n0259). I don't know why
> they were rejected,

Probably because it's not in C. Pretty much everything
regarding integral types is the responsibility of C, and C++
just adopts whatever C decides. (At least, that's the way it
should be.)

> but the general committee attitude seems to be way less
> conservative now: in fact, the current draft allows defining a
> "literal operator" (which is not an operator; very bad
> naming), such as:

>    unsigned long operator "" b( char const * ) ;

> or a "literal operator template":

>    template< class ... Types >
>    unsigned long operator "" b() ;

> and have whichever of them you define called for literals with
> the b suffix:

>    11000111b

This has to be a joke, right?

George Kettleborough

Nov 29, 2008, 4:25:24 PM

For bit fields wouldn't it be a lot easier to use vector<bool>?

--
George Kettleborough

Pete Becker

Nov 29, 2008, 4:37:19 PM

On 2008-11-29 16:25:24 -0500, George Kettleborough
<g.kettl...@member.fsf.org> said:

No. vector<bool> doesn't require any particular representation. Device
drivers, etc. typically need a particular bit set or cleared. That's
usually done with bitfields (implementation-dependent) or with masks.
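
(For illustration, a sketch of the bitfield style; the names are invented,
and, as said, the layout is implementation-dependent:)

struct StatusReg {
    unsigned ready  : 1;  // where these land in the word is up to the compiler
    unsigned error  : 1;
    unsigned unused : 30;
};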

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

Bill

Nov 29, 2008, 7:36:37 PM

"George Kettleborough" <g.kettl...@member.fsf.org> wrote in message
news:7riYk.16560$8r3....@newsfe24.ams2...

Easier for the programmer or the processor? I suspect that people who write
software for embedded devices avoid such data structures. I could be wrong.

Bill


>
> --
> George Kettleborough


Hendrik Schober

Nov 30, 2008, 6:50:56 AM

James Kanze wrote:
> On Nov 29, 4:53 pm, Gennaro Prota <gennaro/pr...@yahoo.com> wrote:
>> [...] in fact, the current draft allows defining a

>> "literal operator" (which is not an operator; very bad
>> naming), such as:
>
>> unsigned long operator "" b( char const * ) ;
>
>> or a "literal operator template":
>
>> template< class ... Types >
>> unsigned long operator "" b() ;
>
>> and have whichever of them you define called for literals with
>> the b suffix:
>
>> 11000111b
>
> This has to be a joke, right.

Google found me n1511 and n1892. I haven't looked at what
status they have, though.


Schobi

Triple-DES

Dec 1, 2008, 2:04:19 AM

They are in the latest WP (n2798).

Michael DOUBEZ

Dec 1, 2008, 5:04:01 AM

J.W. wrote:

> In c++, we can use hex format to represent a number, for example, for
> number 70, we can use 0x46, but is there a way to represent a number
> using the binary format, something similar to 0x.

There is not, but you can make one.
It's not a trivial task; have a look at this article:
http://accu.org/index.php/journals/350

It shows how to implement it. I guess you can copy/paste the relevant code
(with credit to the author, perhaps :) ).

--
Michael

Nick Keighley

Dec 1, 2008, 6:12:05 AM

no.

I've sometimes used string constants for readability
and converted them to unsigned int for use.

const char* io_mask_string = "0101 1111";
unsigned char io_mask = load_from_string (io_mask_string);

obviously you could sugar the syntax a bit more
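
(A minimal sketch of such a converter, for illustration; load_from_string
is the name used above, the body is an assumption:)

unsigned char load_from_string (const char* mask)
{
    unsigned char result = 0;
    for ( ; *mask; ++mask) {
        if (*mask == '0' || *mask == '1')
            result = (unsigned char) ((result << 1) | (*mask - '0'));
        /* anything else, e.g. the grouping space, is skipped */
    }
    return result;
}

/* load_from_string("0101 1111") == 0x5F */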

--
Nick Keighley

Tarmo Kuuse

Dec 1, 2008, 6:43:11 AM

blargg wrote:
>> Let's see now, is bit 22 set in mask 0x03C00000? OK, give me a minute...
>
> Is bit 22 set in this mask? OK, give me a minute...
>
> 00000011110000000000000000000000

Point not valid. Small things - such as grouping - make a big difference.

Yes, slowly the brain rewires itself to natively process the hexadecimal
system. Until it does, however, bugs run rampant.

> How about this one? Simple, yes.
>
> (1L << 25) | (1L << 24) | (1L << 23) | (1L << 22)

This is a workaround, not provided by the language.

Many intelligent programmers (eventually) use it. And many do not. It's
a mess out there.

> If you really want binary, you can write a macro (or template) that
> accepts four 8-bit chunks, something like
> BIN(00000011,11000000,00000000,00000000).

Can you point to a sound compile time implementation of such
macro/template? I'd be interested.

--
Kind regards,
Tarmo

Michael DOUBEZ

Dec 1, 2008, 7:30:11 AM

Tarmo Kuuse wrote:

> blargg wrote:
>> If you really want binary, you can write a macro (or template) that
>> accepts four 8-bit chunks, something like
>> BIN(00000011,11000000,00000000,00000000).
>
> Can you point to a sound compile time implementation of such
> macro/template? I'd be interested.
Sorry, ignore my previous post. I thought you wanted to output it in a
stream in bin format.

With binary defined as follows:

#include <stdlib.h>   // for ldiv / ldiv_t

template<typename T=unsigned,int nbbit=8>
struct binary
{
    // initialize octet with a number written out in binary form
    binary(long v)
    {
        T exp=1;
        val=0;
        // ensure v is in range 0 - 11111111
        v%=11111112;
        // build value digit by digit
        while(v)
        {
            ldiv_t d=ldiv(v,10); // ldiv from stdlib.h
            v=d.quot;
            if(d.rem)val|=exp;
            exp*=2;
        }
    }

    // initialize octet with a pre-computed value
    struct value
    {
        T val;
        value(T v):val(v){}
    };
    explicit binary(const value& v):val(v.val){}

    operator T()const{return val;}

    T val;
};

// concatenate binaries
template<typename T,int nbbit_lhs,int nbbit_rhs>
binary<T,nbbit_lhs+nbbit_rhs> operator<<(
    const binary<T,nbbit_lhs>& lhs,
    const binary<T,nbbit_rhs>& rhs)
{
    typedef binary<T,nbbit_lhs+nbbit_rhs> binary_type;
    return binary_type(
        typename binary_type::value(
            (lhs.val<<nbbit_rhs)|rhs.val
        )
    );
}

Example (needs <iostream>):

std::cout<<(binary<>(1)<<binary<>(101))<<std::endl;
int cons=binary<int>(10000000)<<binary<int>(0);

You can improve on the template expressions, for example:
* binary(int val,int number) to build the representation of a number with
  only 0s or 1s at a given position.
* using fills: fill<int,24>(1) -> 111...11 (24 bits).

And so on.

--
Michael

jl_...@hotmail.com

Dec 1, 2008, 5:47:46 PM


No way that I know of in C/C++, but for the record, it can be done
in Perl this way:

0b1000110

or even:

0b0100_0110

(I know this is slightly off-topic, but I thought it worth
mentioning.)

(In case you're wondering, Perl allows the '_' character to be
interspersed in a number, much like we use commas/periods to make big
numbers more readable.)

This has been useful for me several times, as in the past I've had
to check if a byte-value has its second (or third, or fourth...) bit
set. So my code would look like:

if ($byteValue & 0b0100_0000) # check if the second bit is set
{
...
}

Unfortunately, C++ does not support this syntax, so until it does,
I would port the above code from Perl to C++ using hexadecimal values,
like this:

if (byteValue & 0x40) // check if the second bit is set
{
...
}

(Notice that I still keep the comment. Otherwise, the maintainer
who comes after me will have no easy way of knowing if 0x40 has the
bits I intended to check, or if I accidentally created a bug in the
code.)

Of course, you could port that code using octal (by saying
"byteValue & 0100") which isn't really any better or worse than using
hexadecimal. The only difference is that you may have an easier time
with one than the other.

If converting to octal or hexadecimal still leaves a sour taste in
your mouth, your best bet would probably be to write your own function
that takes an input string of 1s and 0s and returns an unsigned long
integer (like some other posters in this thread have suggested). Then
you can write code like this:

if (byteValue & fromBinary("0100 0000")) // check if the second bit is set
{
...
}

So hexadecimal, octal, and your own function are three different
"work-arounds" for specifying a binary literal.

I hope this helps, J.W.

-- Jean-Luc

J. Cochran

Dec 1, 2008, 6:39:01 PM

In article <492ee0d4$0$90269$1472...@news.sunsite.dk>,

Nope. Not in the language.
But if you're willing to take the runtime hit, see about using the
strtoul() function. And maybe wrap it up in a template or macro so you
don't have to specify the base.
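
(For illustration, a minimal sketch of such a wrapper; the name from_binary
is invented:)

#include <stdlib.h>

/* runtime conversion of a string of 0s and 1s; no error checking */
unsigned long from_binary(const char *s)
{
    return strtoul(s, 0, 2);
}

/* from_binary("1000110") == 70 */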

blargg

Dec 1, 2008, 6:49:21 PM

In article <gh0ikg$mou$1...@aioe.org>, Tarmo Kuuse <tarmo...@mail.ee> wrote:

> blargg wrote:
> >> Let's see now, is bit 22 set in mask 0x03C00000? OK, give me a minute...
> >
> > Is bit 22 set in this mask? OK, give me a minute...
> >
> > 00000011110000000000000000000000
>
> Point not valid. Small things - such as grouping - make a big difference.

So you want it to support grouped binary values? What would be the format,
something like this?

0b00000011_11000000_00000000_00000000

> Yes, slowly the brain rewires itself to natively process the hexadecimal
> system. Until it does, however, bugs run rampant.

Whenever I deal with hardware or other systems using bitmasks, I define
the mask constants once, then use bitwise operators to combine them. And
when defining them, I'd use 1 shifted left by the bit number, not a hex
constant.
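
(For illustration, a sketch of that style; the device and mask names are
invented:)

enum {
    UART_TX_READY = 1L << 22,
    UART_RX_READY = 1L << 23
};

/* spin until the transmit-ready bit comes up */
void wait_tx(volatile unsigned long* status)
{
    while ( !(*status & UART_TX_READY) )
        ;
}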

[...]


> > If you really want binary, you can write a macro (or template) that
> > accepts four 8-bit chunks, something like
> > BIN(00000011,11000000,00000000,00000000).
>
> Can you point to a sound compile time implementation of such
> macro/template? I'd be interested.

Excuse my preference for the preprocessor (partly so it works in C and
C++). I convert the literals to octal in the macros, and also verify that
they contain 0 to 8 bits each, and nothing besides 0 or 1. You don't have
to pad constants to 8 bits, so for example BIN16(10,1) == 0x201.

#define BIN8_VALID_(a) (sizeof (char [(01##a & 0111111111) == 01##a]))
#define BIN8_(n) ((n >> 14 & 0xE0) | (n >> 8 & 0x18) | (n >> 4 & 0x07))
#define BIN8(a) (BIN8_( ((16 + 4 + 1) * (0##a)) ) * BIN8_VALID_( a ))
#define BIN16(a,b) (BIN8(a)*0x100u + BIN8(b))
#define BIN32(a,b,c,d) (BIN16(a,b)*0x10000u + BIN16(c,d))

#include <assert.h>

int main()
{
#if TEST_INVALID
    BIN8(000000000); // more than 8 bits
    BIN8(100000000); // more than 8 bits
    BIN8(2);         // non-0/1 bit
#endif

    assert( BIN8(00000001) == (1 << 0) );
    assert( BIN8(00000010) == (1 << 1) );
    assert( BIN8(00000100) == (1 << 2) );
    assert( BIN8(00001000) == (1 << 3) );
    assert( BIN8(00010000) == (1 << 4) );
    assert( BIN8(00100000) == (1 << 5) );
    assert( BIN8(01000000) == (1 << 6) );
    assert( BIN8(10000000) == (1 << 7) );
    assert( BIN8(01010101) == 0x55 );
    assert( BIN8(10101010) == 0xAA );

    assert( BIN16(10000001,10000000) == 0x8180 );
    assert( BIN32(10000011,10000010,10000001,10000000) == 0x83828180 );

    assert( !"passed" ); // deliberately fails, to show the checks above all ran
}

LR

Dec 1, 2008, 9:38:50 PM

blargg wrote:

> Excuse my preference for the preprocessor

I think this might be a good place for the preprocessor, although the
usual caveats probably apply. I approached this from a slightly
different perspective.

I wonder if something like this might satisfy the OP:

#include <iostream>
#include <limits>
#include <bitset>
#include <string>   // for the std::string handed to the bitset ctor

static const unsigned int DigitsInAnUnsignedLong =
    std::numeric_limits<unsigned long>::digits;
typedef std::bitset<DigitsInAnUnsignedLong> BinaryType;


// this uses an explicit ctor
#define Binary(Z) BinaryType(std::string(#Z)).to_ulong()

int main() {
    std::cout << Binary(11) << std::endl;   // prints 3
}

I don't particularly think that 'Binary' is a good name for a define,
but that's something to be careful about anyway, and this is just an
example, more a point of departure than a solution.

There might be some portability issues, and issues if you need signed
types, but maybe that can be taken care of with some abstraction.

LR

Kai-Uwe Bux

Dec 1, 2008, 10:02:58 PM

LR wrote:

There also might be the issue that with this solution, the binary constants
cannot be used at compile-time, i.e.,

template < unsigned long n >
struct compile_time {};

int main() {
    compile_time< Binary(11) > x;
}

would not compile. The snipped solution, however, deals with that.


Best

Kai-Uwe Bux

blargg

Dec 1, 2008, 10:42:55 PM

Kai-Uwe Bux wrote:
> LR wrote:
> > blargg wrote:
> >> Excuse my preference for the preprocessor
> >
> > I think this might be a good place for the preprocessor, although the
> > usual caveats probably apply. I approached this from a slightly
> > different perspective.
> >
> > I wonder if something like this might satisfy the OP:
[...]

> > // this uses an explicit ctor
> > #define Binary(Z) BinaryType(std::string(#Z)).to_ulong()
[...]

> There also might be the issue that with this solution, the binary constants
> cannot be used at compile-time, i.e.,
>
> template < unsigned long n >
> struct compile_time {};
>
> int main() {
> compile_time< Binary(11) > x;
> }
>
> would not compile. The snipped solution, however, deals with that.

And optimizes out entirely. That is,
BIN32(10101010,11001001,00110101,01011001) used in an expression should
generate the EXACT same code as 0xAAC93559 used in its place. A template
meta-programmed approach could do the same, but I cringe at its
complexity.

Kai-Uwe Bux

Dec 1, 2008, 11:09:11 PM

blargg wrote:

Well, the proposed solution has some remarkable trickyness, too. The
following is not as cute, but in my eyes easier to understand:

#define BIN_(n) ( ( n       & 1   ) |\
                  ( n >> 2  & 2   ) |\
                  ( n >> 4  & 4   ) |\
                  ( n >> 6  & 8   ) |\
                  ( n >> 8  & 16  ) |\
                  ( n >> 10 & 32  ) |\
                  ( n >> 12 & 64  ) |\
                  ( n >> 14 & 128 ) |\
                  ( n >> 16 & 256 ) )
#define BIN8(a) ( BIN_(0##a) * BIN8_VALID_(a) )

(the rest is as upthread.)


Best

Kai-Uwe Bux

Hendrik Schober

Dec 2, 2008, 3:14:21 AM

jl_...@hotmail.com wrote:
> [...]

> if (byteValue & 0x40) // check if the second bit is set
> {
> ...
> }
>
> (Notice that I still keep the comment. Otherwise, the maintainer
> who comes after me will have no easy way of knowing if 0x40 has the
> bits I intended to check, or if I accidentally created a bug in the
> code.)

The maintainer after you will still have to find out whether
checking the 2nd bit actually makes sense at this point, and
will have little chance to change the bit order without
introducing subtle bugs in the code base because he missed a
few occurrences of '0x40'. That's why I'd strongly prefer

  if( byteValue & kBitPowerLED )

In fact, I might even hide the test behind a function

  if( isBitSet(byteValue,kBitPowerLED) )

so that any future maintainers don't have to worry whether I
intentionally used '&' and whether I actually might have meant
to use '|' and just gotten the logic wrong...
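
(A minimal sketch of such a helper, for illustration; isBitSet is the name
used above, the body is an assumption:)

inline bool isBitSet( unsigned byteValue, unsigned mask )
{
    return ( byteValue & mask ) != 0;
}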

> [...]
> -- Jean-Luc

Schobi

Tarmo Kuuse

Dec 2, 2008, 5:57:46 AM

blargg wrote:
> So you want it to support grouped binary values? What would be the format,
> something like this?
>
> 0b00000011_11000000_00000000_00000000

That's a good solution. The underscore is widely used for grouping in
specs and documents. I think most programmers would recognize this
presentation of binary immediately.

Dreaming is nice :)

>> Yes, slowly the brain rewires itself to natively process the hexadecimal
>> system. Until it does, however, bugs run rampant.
>
> Whenever I deal with hardware or other systems using bitmasks, I define
> the mask constants once, then use bitwise operators to combine them. And
> when defining them, I'd use 1 shifted left by the bit number, not a hex
> constant.

Not everybody does this. Each time I modify headers that define bits in
hex, it feels like a grain of sand is in my shoe.

> Excuse my preference for the preprocessor (partly so it works in C and
> C++). I convert the literals to octal in the macros, and also verify that
> they contain 0 to 8 bits each, and nothing besides 0 or 1. You don't have
> to pad constants to 8 bits, so for example BIN16(10,1) == 0x201.
>
> #define BIN8_VALID_(a) (sizeof (char [(01##a & 0111111111) == 01##a]))
> #define BIN8_(n) ((n >> 14 & 0xE0) | (n >> 8 & 0x18) | (n >> 4 & 0x07))
> #define BIN8(a) (BIN8_( ((16 + 4 + 1) * (0##a)) ) * BIN8_VALID_( a ))
> #define BIN16(a,b) (BIN8(a)*0x100u + BIN8(b))
> #define BIN32(a,b,c,d) (BIN16(a,b)*0x10000u + BIN16(c,d))
>

> [snip]

Very nifty. The preprocessor is fine - C is unavoidable in this area.

So, is gcc able to evaluate this at compile time?

--
Kind regards,
Tarmo

Kai-Uwe Bux

Dec 2, 2008, 10:07:08 AM

Tarmo Kuuse wrote:

> blargg wrote:
[snip]


>> #define BIN8_VALID_(a) (sizeof (char [(01##a & 0111111111) == 01##a]))
>> #define BIN8_(n) ((n >> 14 & 0xE0) | (n >> 8 & 0x18) | (n >> 4 & 0x07))
>> #define BIN8(a) (BIN8_( ((16 + 4 + 1) * (0##a)) ) * BIN8_VALID_( a ))
>> #define BIN16(a,b) (BIN8(a)*0x100u + BIN8(b))
>> #define BIN32(a,b,c,d) (BIN16(a,b)*0x10000u + BIN16(c,d))
>>
>> [snip]
>
> Very nifty. Preprocessor is fine - C is unavoidable in this area.
>
> So, gcc is able to evaluate this compile time?

Any compliant compiler should. The tricky bits are

(a) that the result is correct.
(b) that no intermediate value overflows the bounds for unsigned constant
expressions (guaranteed to be >= 2^32).


Best

Kai-Uwe Bux

Gennaro Prota

Dec 2, 2008, 11:25:01 AM

Tarmo Kuuse wrote:
> blargg wrote:
>> So you want it to support grouped binary values? What would be the
>> format,
>> something like this?
>>
>> 0b00000011_11000000_00000000_00000000
>
> That's a good solution. The underscore is widely used for grouping in
> specs and documents. I think most programmers would recognize this
> presentation of binary immediately.
>
> Dreaming is nice :)

Excuse me guys, do my posts appear on your news server? I'm
asking because I mentioned n0259 and its fate, about three days
ago. Am I properly plugged into the Usenet thing? :-)

Tarmo Kuuse

Dec 2, 2008, 11:59:23 AM

Gennaro Prota wrote:
> Excuse me guys, do my posts appear on your news server? I'm
> asking because I mentioned n0259 and its fate, about three days
> ago. Am I properly plugged into the Usenet thing? :-)

Yes, your post is visible. There are some follow-up messages from James
Kanze and others.

--
Kind regards,
Tarmo

blargg

Dec 2, 2008, 1:01:26 PM

Kai-Uwe Bux wrote:
> blargg wrote:
> > Kai-Uwe Bux wrote:
> >> LR wrote:
> >> > blargg wrote:
[...]

> Well, the proposed solution has some remarkable trickyness, too. The
> following is not as cute, but in my eyes easier to understand:
>
> #define BIN_(n) ( ( n       & 1   ) |\
>                   ( n >> 2  & 2   ) |\
>                   ( n >> 4  & 4   ) |\
>                   ( n >> 6  & 8   ) |\
>                   ( n >> 8  & 16  ) |\
>                   ( n >> 10 & 32  ) |\
>                   ( n >> 12 & 64  ) |\
>                   ( n >> 14 & 128 ) |\
>                   ( n >> 16 & 256 ) )
> #define BIN8(a) ( BIN_(0##a) * BIN8_VALID_(a) )
>
> (the rest is as upthread.)

Thanks for the refinement. I just whipped up the macro for the posting,
and figured doing each bit would be too verbose (and had recently read
Hacker's Delight, a book full of bitwise hacks).

Hendrik Schober

Dec 2, 2008, 5:25:23 PM

blargg wrote:
> [...]

> And optimizes out entirely. That is,
> BIN32(10101010,11001001,00110101,01011001) used in an expression should
> generate the EXACT same code as 0xAAC93559 used in its place. A template
> meta-programmed approach could do the same, but I cringe at its
> complexity.

Mhmm. I just toyed with the idea a bit and it doesn't seem that
hard. Here's something to start with:

template< unsigned long BinNum >
struct const_bin {
    static const unsigned long result =
        const_bin< BinNum / 10 >::result * 2 + BinNum % 10;
};

template<>
struct const_bin<0> {
    static const unsigned long result = 0;
};

This lacks a check for nonsensical input ('const_bin<3>::result'
compiles just fine) and support for multiple template arguments.
But unless I'm missing something (I usually do) both should be
rather easy to add.
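
(A usage sketch, for illustration:)

unsigned long mask = const_bin<11000111>::result;  // == 0xC7

(Note the argument is a decimal literal that happens to contain only 0s and
1s; give it a leading zero and the compiler reads it as octal, as the
follow-up below points out.)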

Schobi

Kai-Uwe Bux

Dec 2, 2008, 5:38:44 PM

Hendrik Schober wrote:

Did you try

const_bin< 00010001 >::result


Best

Kai-Uwe Bux

Hendrik Schober

Dec 2, 2008, 5:53:16 PM

No, I didn't. That's because I know it doesn't support octal numbers. :)

> Kai-Uwe Bux

Schobi
