raw bit types without arithmetic

jimmy frasche

May 13, 2024, 4:51:19 PM
to golang-nuts
I'm not 100% sure if this is a good idea but it's been knocking around
in my head all week so I thought I'd share in case it has any merit:

Introduce bitsN types for N=8, 16, 32, 64, 128, 256, and 512.

These are similar to uintN but they are unordered and have no
arithmetic operations defined.

They only have literals, comparison, and bitwise operations.
(fmt.Print and friends should render them in hex by default.)

Conversions between the numeric types and the bitsN types are allowed, which,
for example, lets us rewrite math.Float64bits as simply

func Float64bits(f float64) uint64 {
	return uint64(bits64(f))
}
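
For contrast, the standard library implements this today by reinterpreting the value through unsafe, roughly:

// import "unsafe"
func Float64bits(f float64) uint64 {
	return *(*uint64)(unsafe.Pointer(&f))
}

A bits64 conversion would express the same reinterpretation without unsafe.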

Since there are no arithmetic operations, the 128+ sizes should be
fairly efficient to fake on architectures without special
instructions/registers.

Potential uses:

UUIDs could be stored as a bits128 instead of a [2]uint64 or [16]byte.

SIMD vectors could be created and stored easily, even if they need
assembly to operate on them efficiently.

Same for int128/uint128 values or even for more exotic numeric types
like the various float16 definitions or "floating slash" rationals.

It would also be handy to have larger bitsets that are easy to work with.
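
As a rough sketch of that last point, assuming a 512-bit set is stored as an [8]uint64 today (the has helper is just illustrative):

// has reports whether bit i is set in a 512-bit set held as [8]uint64.
// With a bits512 value this would presumably reduce to set&(1<<i) != 0.
func has(set [8]uint64, i uint) bool {
	return set[i/64]&(1<<(i%64)) != 0
}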

Kevin Chowski

May 13, 2024, 11:38:36 PM
to golang-nuts
How about just allowing bitwise operations on byte arrays (of the same length)?

Kevin Chowski

May 13, 2024, 11:41:38 PM
to golang-nuts
Sorry, sent too early.

Obviously that doesn't support the bitwise type conversion you mentioned; I don't really have an opinion on that one, since I rarely convert from float to bits.

It seems like the compiler optimizations you mention could happen with or without these extra types, if such optimizations just worked on byte arrays in general.
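
To make the idea concrete, here is roughly what such an operation has to be spelled as today (the helper name and the [16]byte size are just illustrative); the suggestion would collapse the loop to a | b:

// orArray ORs two 16-byte arrays element-wise.
func orArray(a, b [16]byte) [16]byte {
	for i := range a {
		a[i] |= b[i]
	}
	return a
}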

jimmy frasche

May 14, 2024, 2:27:01 PM
to Kevin Chowski, golang-nuts
Arrays are arrays regardless of what they're arrays of. So it would be
strange for arrays of certain element types to have operations that other
arrays don't, and bitwise ops don't make sense for arrays of, say,
strings.

Also, some processors support these sizes natively but wouldn't support
a [4096]byte natively. On processors that don't support some or all of
these sizes, the compiler would need to fake it by doing m operations†,
but m is bounded, and if, for example, the target processor supports
256-bit but not 512-bit values, it can use two 256-bit ORs instead of
eight 64-bit ORs. Maybe that could be made to work in general, and if so
that would be great, but it's not the only benefit of these types.

† except for shifts, which would have to deal with carries. That may
be a problem, but I think even then it should be fast enough not to be
an issue the way faking div or something similarly expensive would be.
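
A minimal sketch of that carry handling, assuming a 128-bit value is held as two uint64 words (names are illustrative):

// shl128 shifts the 128-bit value hi:lo left by n bits, for 0 <= n < 64.
// The lo>>(64-n) term carries lo's top bits into hi; that carry is the
// extra work shifts need on targets without native wide shifts. (In Go,
// a shift by 64 on a uint64 is defined to be 0, so n == 0 also works.)
func shl128(hi, lo uint64, n uint) (hiOut, loOut uint64) {
	return hi<<n | lo>>(64-n), lo << n
}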

Kevin Chowski

May 15, 2024, 11:55:58 AM
to golang-nuts
By the way - I definitely don't feel strongly about this, and I am just guessing that it's more likely that Go would add new operators for existing types than create new built-in types. I think bitsN types are fine, although again I don't know if I'd personally ever use them. But I am enjoying the conversation so far, so here are some more thoughts I had :)

I think the fact that you can compare huge byte arrays with == means that there is some desire inside the Go compiler team to support variable-length operations based on the type, regardless of whether they can be "efficiently" compiled on all systems, so I don't see that aspect as being inconsistent with the existing language. As another example that is more dynamic (and therefore even more "surprising"), `string1 == string2` might take a very long time if string1 and string2 have the same contents and a huge length. To me that seems the same as (or worse than) accepting that `byteArray1 | byteArray2` might take a long time when they are statically known to be long byte arrays; it seems pretty straightforward that an operation on `[N]byte` will be faster than one on `[N*1000]byte`, and I think people will intuit that.

As for whether it would be weird to support some operations on some arrays but not others, technically this is already true: you can't use == on an array when the elements are not comparable. And people seem OK with that :) There are also some other special cases for the byte type, e.g. you can convert a byte slice to a string (and vice versa) but not an int slice. I personally don't see this inconsistency as an issue or a learning impediment, but I can understand that someone may disagree with my perspective here.
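
(For reference, those conversions in current Go look like:

b := []byte("hello") // string -> []byte
s := string(b)       // []byte -> string; no equivalent exists for []int

both of which copy in general, so they can also be slow for huge values.)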

(Another idea: how about both? e.g. if we had both `bitsN` types AND the ability to use bitwise operations on arrays of (only) bitsN types? Or something along those lines. That would open up the ability to express some SIMD operations in otherwise extremely idiomatic-looking code, while also supporting the smaller-scale operations you are suggesting.)