Unsized integer types

Jamie Gennis

Dec 9, 2010, 4:19:59 AM
to golan...@googlegroups.com
This proposed change to remove the float built-in type got me thinking about the role of the int and uint types in Go.  I currently see very little value in using unsized integer types to store general integer or bit flag values, and lately I have been avoiding int/uint for everything except len results and array/slice indices.  I do this because on multiple occasions now I've created int variables, fields, or parameters that I've later had to change to an int32 when the need arose for them to interact with an explicitly sized value.  I dislike doing system-dependent conversions, e.g. from an int to an int32, and I've found that it's rarely needed so long as I follow this rule.

Have others adopted similar attitudes toward using unsized int types?  Are there other compelling reasons to use them?

My current use of the int type seems more similar to that of the size_t or ptrdiff_t types in C++ than that of the C++ int type.  If this is the proper usage of the unsized types, then I personally think that would be much clearer if the only unsized types were intptr and uintptr, and if the default integer constant type and string code point iteration type were int32.  If I'm off base on this then I'd appreciate some guidance from the Go gurus on when I should be using unsized integer types.
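
To illustrate the kind of thing I keep running into, here's a minimal sketch (the names are made up, but the shape is real):

    package main

    import "fmt"

    // sendInt32 stands in for any API that wants an explicitly sized value
    // (a protocol buffer field, an OpenGL call, a wire format, ...).
    func sendInt32(v int32) { fmt.Println(v) }

    func main() {
        xs := []string{"a", "b", "c"}
        n := len(xs)        // n has the unsized type int
        sendInt32(int32(n)) // the system-dependent conversion I'd rather avoid
    }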

Thanks,
Jamie

Matt Joiner

Dec 9, 2010, 7:50:36 AM
to golan...@googlegroups.com
Seems logical to me. I use int/uint in both C and Go as the fallback
integer type when size doesn't matter. I assumed when I started with Go
that int would have the width of ssize_t/intptr_t, but that's not the
case, which kind of makes it useless.

peterGo

Dec 9, 2010, 10:00:22 AM
to golang-nuts
Jamie,

I use unsized types unless I specifically need a sized type. If you
look at the source code for the core Go packages, that's the usual
rule.

Peter

xophos

Dec 9, 2010, 11:00:43 AM
to golang-nuts
Wouldn't it be much better for portability to remove the non-pointer,
non-sized types altogether?
What's the purpose anyway?
If someone wants their int32 or int64 renamed to int they could simply
define the type.
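
For example (just a sketch of the idea; "word" is a made-up name):

    // pick the width once, in one place, and use it everywhere
    type word int32 // or int64, depending on the target

    func sum(xs []word) (s word) {
        for _, x := range xs {
            s += x
        }
        return
    }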

Russell Newquist

Dec 9, 2010, 2:04:04 PM
to xophos, golang-nuts
Depending upon the CPU in question, you might get substantially better performance out of one size vs another. For example, on an old school (pre-X64) Pentium chip, your best performance for standard (non-SSE) operations would be with 32-bit sizes, hands down, despite the fact that the chip provides native instructions for 8 and 16 bit arithmetic. On older 386 chips, however, the 16-bit operations were faster despite the chip providing hardware op codes for 32-bit instructions. I'm not actually sure off hand what the best performance is on an X64 chip, but it's entirely possible (especially on the earlier X64 chips) that the 32-bit operations are faster than 64-bit. The point is, by having the unsized types in the language, a particular implementation can choose which size to default to for best performance. I think this is entirely reasonable, given that 90% of the time, even in today's world, somebody declaring an "int" just plain doesn't need the full -9223372036854775808 to 9223372036854775807 range of a 64-bit int (in fact, most of the time such a type is only going to have a very limited range indeed). So why not allow the compiler the chance to choose the best default size for the particular platform? Especially with Go's approach, where if you really need a sized type you can explicitly declare one.

The key point is that in the vast majority of cases the programmer won't actually care whether it's a 32-bit or 64-bit integer under the hood, because either way they're not going to be using that much precision. Consider, for example, the typical loop counter. Most of us aren't out there every day writing dozens and dozens of loops that execute 9223372036854775807 or even 2147483647 times. Most of our loops are executing, at most, a few thousand times (many even only a few hundred or even a few dozen; consider iterating a string, for example).

I'm a little dismayed even to see the suggestion of getting rid of the unsized float and complex types. People haven't had to really deal with this problem for a generation (a human generation, not a computer generation; the early 90s was the last time this was really an issue), but this is exactly the moment in time when I think it's becoming relevant again. Between the transition to 64-bit chips and the transition to non-Intel based platforms (mobile chips, GPUs, etc), I think it's a huge mistake to take out these types.

I've spent most of my career writing highly portable software, so I'm sympathetic to the argument. But Go has already helped the issue tremendously just by including the explicitly sized types. In my C and C++ development, one of the first things we'd always do for projects that required portability was create explicitly sized types via typedef. It's much better to just include them in the language, as Go did. But since the tools are already there to aid developers who care about portability, I don't really see the point in taking away tools that could help the compiler generate better code.

Russell Newquist
rus...@newquistsolutions.com
www.newquistsolutions.com
(706) 352-9101

Jamie Gennis

Dec 9, 2010, 3:06:20 PM
to golan...@googlegroups.com
That's the approach that I used to take, but I found myself often having to go back and change code later to explicitly specify the int size.  Having to do that is not the end of the world, but I don't see the advantage of not committing to a size up front.

Jamie

Russ Cox

Dec 9, 2010, 3:24:09 PM
to golan...@googlegroups.com
In contrast, I find myself basically never using int32.
Where did your int32s come from?

Russ

Jamie Gennis

Dec 9, 2010, 3:42:04 PM
to golan...@googlegroups.com
On Thursday, December 9, 2010 11:04:04 AM UTC-8, Russell Newquist wrote:
> Depending upon the CPU in question, you might get substantially better performance out of one size vs another. For example, on an old school (pre-X64) Pentium chip, your best performance for standard (non-SSE) operations would be with 32-bit sizes, hands down, despite the fact that the chip provides native instructions for 8 and 16 bit arithmetic. On older 386 chips, however, the 16-bit operations were faster despite the chip providing hardware op codes for 32-bit instructions. I'm not actually sure off hand what the best performance is on an X64 chip, but it's entirely possible (especially on the earlier X64 chips) that the 32-bit operations are faster than 64-bit. The point is, by having the unsized types in the language, a particular implementation can choose which size to default to for best performance. I think this is entirely reasonable, given that 90% of the time, even in today's world, somebody declaring an "int" just plain doesn't need the full -9223372036854775808 to 9223372036854775807 range of a 64-bit int (in fact, most of the time such a type is only going to have a very limited range indeed). So why not allow the compiler the chance to choose the best default size for the particular platform? Especially with Go's approach, where if you really need a sized type you can explicitly declare one.

It seems like you're arguing for a use similar to that of C's int_fast32_t.  I would say two things to that:
  1. I personally don't find the performance argument to be too compelling.  I believe all of the mainstream processors these days perform pretty well with 32 bit ints, and I doubt that will change going forward.  I would not want to trade simplicity in the language for performance on some esoteric processors.
  2. The fact that 'int' is the return type of len means that in order to support large arrays/slices (for whatever "large" means on a given architecture) the size of int must be tied to the size of pointers anyway.  I don't think it makes sense to force the compiler to trade off maximum array size for integer performance (or vice versa).
> The key point is that in the vast majority of cases the programmer won't actually care whether it's a 32-bit or 64-bit integer under the hood, because either way they're not going to be using that much precision. Consider, for example, the typical loop counter. Most of us aren't out there every day writing dozens and dozens of loops that execute 9223372036854775807 or even 2147483647 times. Most of our loops are executing, at most, a few thousand times (many even only a few hundred or even a few dozen; consider iterating a string, for example).

I would say that iterating over integer values should just use int32 if 32 bits is large enough.  Iterating over array/slice elements should use a type based on the pointer size.
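
Concretely, something like this (a trivial sketch; xs is some slice):

    // pure integer iteration: commit to 32 bits up front
    var sum int32
    for i := int32(0); i < 1000; i++ {
        sum += i
    }

    // slice iteration: the index follows len, which (I'm arguing)
    // should be tied to the pointer size
    for i := 0; i < len(xs); i++ {
        _ = xs[i]
    }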
 
> I'm a little dismayed even to see the suggestion of getting rid of the unsized float and complex types. People haven't had to really deal with this problem for a generation (a human generation, not a computer generation; the early 90s was the last time this was really an issue), but this is exactly the moment in time when I think it's becoming relevant again. Between the transition to 64-bit chips and the transition to non-Intel based platforms (mobile chips, GPUs, etc), I think it's a huge mistake to take out these types.

I don't see how the compiler would be able to make intelligent decisions between the performance of smaller floats and the precision of larger ones.  Are there any mainstream architectures for which a float64 will be faster than a float32?  You could argue that the switch should be a compiler flag, but at that point why not just define a custom type (let's call it "float") that will allow you to switch between the different sized floats just as easily?

Jamie

Russell Newquist

Dec 9, 2010, 4:15:33 PM
to golan...@googlegroups.com
To the points in order:

1) All mainstream processors today perform well with 32-bit ints. Not all of them perform well with 64-bit ints. My example was from the 16-bit era because it was a little more clear cut back then than it is now, but the difference still exists.
2) This is already violated in the mainstream Go release. The 64-bit Go compilers use 32-bit values for the unsized "int", and hence for the builtin "len" function. If anything, I think your point is an argument for changing this function to return a "uintptr" value instead of an "int".

There are plenty of mainstream architectures in use today in which a 32-bit float is faster than a 64-bit float. The best example offhand would be mobile chips that don't have FPUs at all. If you have to do the floating point math in software emulation, 32-bit would blow the pants off of 64-bit for performance. Also, for many GPUs 32-bit (or even 16-bit) floats are much faster than 64-bit floats. This may seem like an odd point, but all it takes is a glance at the AMD or Nvidia road maps to see that the day is coming (and soon) when these processors can handle general purpose code.

Simplicity is a nice goal, and I'm a fan of it. I'm not such a fan of it that I believe that it trumps all other considerations. It's not at all obvious to me how the unsized types that Go offers add such a huge burden of complexity that they should be removed. Personally, I think they strike a very nice balance as they are. They give the programmer the tools he needs to tell the compiler "Within the constraints of the language (i.e., 32- or 64-bit) I don't really care what size you use, I just need an integer value" or to tell the compiler "I really need an integer of x bits."

In short, there are times when you care and times when you don't. Go's type system as it is allows you to express that cleanly. Any advantage I can see from taking out the unsized types is small enough that, at best, I think it's a wash with the advantages of leaving them in. At best. I believe that as the development tools mature and other compilers become available for future platforms the advantages to leaving the types in outweigh the advantages of getting rid of them.

Russell Newquist
rus...@newquistsolutions.com
www.newquistsolutions.com

Jamie Gennis

Dec 9, 2010, 4:17:29 PM
to golan...@googlegroups.com
Interesting.  My uses of the sized ints come mostly from protocol buffers and interacting with OpenGL.

Jamie

Jamie Gennis

Dec 9, 2010, 4:45:02 PM
to golan...@googlegroups.com
> 1) All mainstream processors today perform well with 32-bit ints. Not all of them perform well with 64-bit ints. My example was from the 16-bit era because it was a little more clear cut back then than it is now, but the difference still exists.
I was suggesting using int32 rather than int.  In Go, the int type only guarantees at least 32 bits, so anything that requires more than that must already use a sized type.  I think that in the 16-bit era there was not an obvious right answer for choosing a size.  16 bits was faster, but often couldn't represent a large enough numeric range.  Today 32 bits is both fast enough and has a large enough numeric range for most uses.
> 2) This is already violated in the mainstream Go release. The 64-bit Go compilers use 32-bit values for the unsized "int", and hence for the builtin "len" function. If anything, I think your point is an argument for changing this function to return a "uintptr" value instead of an "int".
I'm aware that 6g currently uses 32 bit ints, and it's causing problems for some folks.  Changing len to return an intptr (I think there's some value in having it be signed) would address this issue.  If that were done, I personally wouldn't see much use for the int or uint types.
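
(For reference, the current sizes are easy to check; on amd64 with 6g this should print 4 and 8, per the above:)

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        var i int
        var p uintptr
        fmt.Println(unsafe.Sizeof(i), unsafe.Sizeof(p))
    }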

> There are plenty of mainstream architectures in use today in which a 32-bit float is faster than a 64-bit float.
This was my point.  On pretty much all architectures the choice is between fast execution and better numeric precision.  That's not a decision that the compiler can reasonably make.  Only a developer with knowledge of the performance and precision requirements can make that call.

> Simplicity is a nice goal, and I'm a fan of it. I'm not such a fan of it that I believe that it trumps all other considerations.
Just to be clear, I'm not fanatical about getting rid of unsized pointer types.  I understand that some folks like them.  I think there's a small problem with the fact that int is used both as a general integer type and as the return value of len.  I also think that there's a small problem with overuse of unsized types that will cause some people to go back and add sizes to their code.  I don't see much value in having the unsized int types, and my goal here was to better understand why other people value them.

Thanks,
Jamie

Jamie Gennis

Dec 9, 2010, 4:46:11 PM
to golan...@googlegroups.com
> Just to be clear, I'm not fanatical about getting rid of unsized pointer types.
Just to be MORE clear, I'm not fanatical about getting rid of unsized *integer* types either :)

Gerald

Dec 12, 2010, 4:27:54 AM
to golang-nuts
If int guarantees 32 bits, then I guess porting Go to a 16-bit or
8-bit system will require all unsized ints to be converted to int32?

David Roundy

Dec 12, 2010, 9:28:23 AM
to Russell Newquist, xophos, golang-nuts
On Thu, Dec 9, 2010 at 2:04 PM, Russell Newquist
<rus...@newquistsolutions.com> wrote:
> I'm a little dismayed even to see the suggestion of getting rid of the
> unsized float and complex types. People haven't had to really deal with this
> problem for a generation (a human generation, not a computer generation; the
> early 90s was the last time this was really an issue), but this is exactly
> the moment in time when I think it's becoming relevant again. Between the
> transition to 64-bit chips and the transition to non-Intel based platforms
> (mobile chips, GPUs, etc), I think it's a huge mistake to take out these
> types.

The problem with the analogy between integer types and float types is
that with integer types you don't care about the size unless it
overflows. With float types, you always need to care about the size,
because it always affects your answer (unless you're only doing
arithmetic involving small integers * 2^n, in which case it's exact
and you'd be better off with a fixed-point representation). So there
isn't the same possibility of "I just want a good representation".

There has never been a speed advantage to 32-bit floats except in
terms of memory use (and cache), so the existing 32-bit float type
isn't defined as a "fast" float. It's just there (I presume) because
that's what it's called in C. I wouldn't object to that if the
float64 were called "double", which it is in most languages I know.
But I really think the language would be nicer without the "float"
type. Size really does matter for any floating-point use, either
because of memory consumption or because of required precision.

David

Ian Lance Taylor

Dec 12, 2010, 10:07:28 PM
to Gerald, golang-nuts
Gerald <gpl...@gmail.com> writes:

> If int guarantees 32 bits, then I guess porting Go to a 16-bit or 8-
> bit system will require all unsized ints to be converted to int32?

No, on such a system the type "int" would be 32 bits. On such a system
code which used "int" would most likely run slower than code which used
"int16" (or "int8") where possible.

Ian

Gabor

Dec 13, 2010, 3:11:02 AM
to golang-nuts
On Dec 13, 4:07 am, Ian Lance Taylor <i...@google.com> wrote:
Sounds logical.
Something similar would be useful for floats.
The default float should be 64-bit and
be accepted as an argument by the math functions.

Jamie Gennis

Dec 13, 2010, 4:43:36 AM
to golan...@googlegroups.com, Russell Newquist, xophos
> There has never been a speed advantage to 32-bit floats except in terms of memory use (and cache)...
While I agree with most of the rest of what you said, I don't think this is true.  The ARM Cortex A8, for example, can perform scalar floating point operations on its NEON unit much faster than it can on its VFP unit, but NEON only supports single precision operations.

Jamie

Russ Cox

Dec 13, 2010, 8:45:41 AM
to golan...@googlegroups.com, Russell Newquist, xophos
> While I agree with most of the rest of what you said, I don't think this is
> true.  The ARM Cortex A8, for example, can perform scalar floating point
> operations on its NEON unit much faster than it can on its VFP unit, but
> NEON only supports single precision operations.

I've found it very hard to find authoritative information
about all the different ARM FP options, but my understanding
was that the really fast NEON operations came from either
(a) being able to do more SIMD operations in parallel, which doesn't
really help Go proper, or (b) not being completely accurate in
the result, which also doesn't really help. I'd be very happy
to see a pointer to authoritative information in this area.

Russ

David Roundy

Dec 13, 2010, 9:58:21 AM
to golan...@googlegroups.com, Russell Newquist, xophos

Yeah, I actually rewrote that sentence a few times. Interestingly,
(so far as I know) it's only with recent processors, with the
introduction of SIMD units, that there has been a tendency toward
single-precision floating point operations being faster than
operations with doubles. I think that relates to floats being used in
multimedia applications where 32 bits is overkill, because they're
dealing with data that's going to end up either as 16-bit audio or
8-bit color channels anyway.

David

Ian Lance Taylor

Dec 13, 2010, 1:47:08 PM
to Gabor, golang-nuts
Gabor <g...@szfki.hu> writes:

> On Dec 13, 4:07 am, Ian Lance Taylor <i...@google.com> wrote:
>> Gerald <gpla...@gmail.com> writes:
>> > If int guarantees 32 bits, then I guess porting Go to a 16-bit or 8-
>> > bit system will require all unsized ints to be converted to int32?
>>
>> No, on such a system the type "int" would be 32 bits.  On such a system
>> code which used "int" would most likely run slower than code which used
>> "int16" (or "int8") where possible.
>

> Sounds logical.
> Something similar would be useful for floats.
> The default float should be 64-bit and
> be accepted as arguments of math functions.

If I understand your comment correctly, you are saying that the math
functions should change from taking float64 parameters to taking float
parameters. Is that correct? Still, what do we gain by keeping the
type "float"? It's not really the same as the type "int", which is
basically a combination of the maximum size of an object and an
efficient integer type.

Ian

Gabor

Dec 13, 2010, 2:17:54 PM
to golang-nuts
Thank you for your time and consideration.

I am not a computer scientist,
just a plain user with a scientific background.

Whenever I touch a float, float32 or float64
I wish to use mathematical functions with it.
Right now this is only possible with float64,
so float and float32 are pretty useless for math.
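
For example, right now with a float32 value
I have to write something like this (a small sketch):

    var x float32 = 2
    y := float32(math.Sin(float64(x))) // convert in and out, every time
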
I do not know whether this situation
will ever change but I feel it should.
To make the default float 64-bit
and math functions accepting this float
might not be the ideal solution, I admit.
The ideal solution would be a generic math package.

Russ Cox

Dec 13, 2010, 3:50:08 PM
to Gabor, golang-nuts
> Whenever I touch a float, float32 or float64
> I wish to use mathematical functions with it.
> Right now this is only possible with float64,
> so float and float32 are pretty useless for math.
> I do not know whether this situation
> will ever change but I feel it should.
> To make the default float 64-bit
> and math functions accepting this float
> might not be the ideal solution, I admit.

There is a tentative plan to delete float from
the language and make constants like 1.5
have default type float64, precisely so that
they will be usable with the mathematical functions.
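
For example, something like this would then work directly
(a sketch, assuming the change goes in):

    x := 1.5          // x would default to float64
    y := math.Sqrt(x) // no conversion needed for the math package
    _ = y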

Russ


Gabor

Dec 13, 2010, 4:21:15 PM
to golang-nuts
Thank you very much for this inside info!
This decision would really make life easier.

Jamie Gennis

Dec 14, 2010, 4:54:03 AM
to golan...@googlegroups.com, Russell Newquist, xophos
I'm by no means an authority on ARM FP, but my understanding is as follows (limited to the Cortex-A8 and -A9).  These APs will either support the VFP instructions or VFP + NEON instructions.  VFP is fully IEEE 754 compliant, while NEON doesn't do double precision or exception traps (maybe not even flags?), and it treats subnormals as 0.  On A8 the VFP unit is fairly wimpy (called "VFPLite"), which is the main reason the NEON unit is so much faster.  The VFPLite unit can be put in 'RunFast' mode by disabling traps and enabling subnormal zeroing, which allows the single precision VFP instructions to execute on the NEON unit.  A9 has faster (still compliant) VFP support with NEON instruction support being optional.

On both A8 and A9 the NEON float units are pipelined units capable of issuing many instructions at a rate of 1 instr every 1 or 2 cycles.  VFPLite is not pipelined AFAIK.  The A8 NEON unit is 2-wide FP32 internally, so a 4-wide instruction will occupy multiple pipeline stages.  The A9 and Qualcomm Scorpion's NEON implementations are both 4-wide FP32 internally.

The ARM docs are a decent source of info on instruction issue rates and latencies:


Also, be aware that there are multiple versions of the VFP and NEON (a.k.a. "Advanced SIMD") instruction sets and the AP cores, so some of what I said may apply only to certain versions.

This is all pretty confusing, so hopefully I didn't say too much that's just flat out wrong.  If I did, I hope someone will please correct me.

Jamie