Why was int chosen to be 32 bits on x86-64?

Michael Shields

May 25, 2011, 2:54:07 PM
to golan...@googlegroups.com
This question is asked in the FAQ (http://golang.org/doc/go_faq.html#q_int_sizes), but it is not answered.  The FAQ explains why Go has an 'int' type of implementation-defined size, but it does not explain why the two existing implementations chose to make this 32 bits on x86-64, and if there was a discussion already on golang-nuts I can't find it.

Russ Cox

May 25, 2011, 3:05:53 PM
to Michael Shields, golang-nuts
it seemed like a good idea at the time.

in retrospect, we'll need to change it to
be 64 bits in both 6g and gccgo in order
to have >2G-element slices. we know that,
it just hasn't happened. there are other
things that need to happen first.
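to make the limit concrete, a hypothetical sketch (not a claim about
either compiler today): with a 32-bit int the make panics; with a
64-bit int it can succeed, memory permitting.

package main

import "fmt"

func main() {
    // hypothetical: count the elements in an int64 and hand it to make.
    var n int64 = 3 << 30 // more elements than a 32-bit int can represent
    s := make([]byte, n)  // "len out of range" panic when int is 32 bits
    fmt.Println(len(s))
}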

russ

Ian Lance Taylor

May 25, 2011, 3:06:11 PM
to Michael Shields, golan...@googlegroups.com
Michael Shields <mshi...@google.com> writes:

It matches what C++ does.

It may change.

Ian

Russ Cox

May 25, 2011, 3:07:20 PM
to Ian Lance Taylor, Michael Shields, golang-nuts
> It matches what C++ does.

I'm pretty sure that wasn't Ken's rationale for the choice in 6g. :-)

roger peppe

May 25, 2011, 3:19:33 PM
to Michael Shields, golan...@googlegroups.com
i imagine that the main rationalisation might have been that most ints
hold nothing like the full range of 32 bits, let alone 64, so moving to
64 bits will mean that loops will use more memory and blow the cache
earlier, and things might get slower as a result.

not to mention overall memory usage.
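a rough way to see the memory cost (hypothetical sketch, not a
measurement):

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    var a int32
    var b int64
    fmt.Println(unsafe.Sizeof(a), unsafe.Sizeof(b)) // 4 8
    // a million-element slice doubles in size when its elements do,
    // so scanning it touches twice as many cache lines.
    fmt.Println(1000000*unsafe.Sizeof(a), 1000000*unsafe.Sizeof(b))
}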

Gustavo Niemeyer

May 25, 2011, 4:07:38 PM
to roger peppe, Michael Shields, golan...@googlegroups.com
> i imagine that the main rationalisation might have been that most ints
> hold nothing like the full range of 32 bits, let alone 64,

I'd go even beyond that and say that most ints hold a single element
of the range!

--
Gustavo Niemeyer
http://niemeyer.net
http://niemeyer.net/blog
http://niemeyer.net/twitter

André Moraes

May 25, 2011, 5:03:30 PM
to golan...@googlegroups.com
Also,

Making int represent only one size, either 32 or 64 bits, reduces
the headaches that come from having the same type represent
more than one thing depending on what platform you are
compiling/running on.

--
André Moraes
http://andredevchannel.blogspot.com/

Scott Pakin

May 25, 2011, 5:30:22 PM
to golang-nuts
On May 25, 3:03 pm, André Moraes <andr...@gmail.com> wrote:
> Making int represent only one size, either 32 or 64 bits, reduces
> the headaches that come from having the same type represent
> more than one thing depending on what platform you are
> compiling/running on.

I seem to recall some discussion on this list of getting rid of int
and uint entirely and forcing programmers to choose a specific-sized
integer type. (Go already lacks an unspecified-size float type and
requires programmers to choose either float32 or float64.) I think
that'd be a smart route to take for the very reason you indicated
above: reducing the headaches that come from having variables change
size when moving from one architecture to another. It's just that int
and uint don't tell you what size your value is on *any* architecture,
so dropping those types entirely would force the programmer to be
explicit about what he means. As for the default type for declaration
statements like "i := 0", I'd say it'd be most convenient to make that
int64 to facilitate >2G-element slices, as Russ mentioned.
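For illustration (a hypothetical snippet), being explicit is already
cheap; it's only the inferred case that picks the unspecified size:

package main

import "fmt"

func main() {
    i := 0        // i is int: its size depends on the implementation
    j := int64(0) // j is 64 bits on every architecture
    var k int32   // k is 32 bits on every architecture
    fmt.Println(i, j, k)
}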

Scott

John Asmuth

May 25, 2011, 5:35:26 PM
to golan...@googlegroups.com
+1 for int going the way of float.

Arlen Cuss

May 25, 2011, 6:14:30 PM
to Michael Shields, golan...@googlegroups.com

It doesn't answer the "why", but one possible reply is that if you
really care about how many bits your int has, state it (and use
int64/32/etc.)!

--
Arlen Cuss
Software Engineer

Phone: +61 3 9877 9921
Email: ar...@noblesamurai.com

Noble Samurai Pty Ltd
Level 1, 234 Whitehorse Rd
Nunawading, Victoria, 3131, Australia

noblesamurai.com | arlen.co

Andrew Gerrand

May 25, 2011, 6:29:06 PM
to André Moraes, golang-nuts
On 26 May 2011 07:03, André Moraes <and...@gmail.com> wrote:
> Making int represent only one size, either 32 or 64 bits, reduces
> the headaches that come from having the same type represent
> more than one thing depending on what platform you are
> compiling/running on.

It's more a case of: if a program behaves incorrectly when the size of
an int changes, then the program is wrong and should be fixed.
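For example (a hypothetical snippet), a checksum that silently relies
on int wrapping at 32 bits is a wrong program; pinning the size fixes
it:

package main

import "fmt"

func main() {
    data := []byte("some longer input would expose the difference")

    // Wrong: once the arithmetic overflows 32 bits, the result
    // differs between 32-bit and 64-bit int implementations.
    h := 0
    for _, b := range data {
        h = h*31 + int(b)
    }

    // Portable: the wrap-around point is now the same everywhere.
    var h32 int32
    for _, b := range data {
        h32 = h32*31 + int32(b)
    }

    fmt.Println(h, h32)
}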

Andrew

zhai

May 25, 2011, 8:59:16 PM
to Andrew Gerrand, André Moraes, golang-nuts
To match the processor, make slice indexes and pointers 64 bits, and
keep int at 32 bits.

From the Intel® 64 and IA-32 Architectures Software Developer's Manual,
Volume 1: Basic Architecture:

"64-bit mode is enabled by the operating system on a code-segment basis. Its 
default address size is 64 bits and its default operand size is 32 bits. The default 
operand size can be overridden on an instruction-by-instruction basis using a REX 
opcode prefix in conjunction with an operand size override prefix.

REX prefixes allow a 64-bit operand to be specified when operating in 64-bit 
mode. By using this mechanism, many existing instructions have been promoted 
to allow the use of 64-bit registers and 64-bit addresses."


David Symonds

May 25, 2011, 9:00:50 PM
to zhai, Andrew Gerrand, André Moraes, golang-nuts
On Thu, May 26, 2011 at 10:59 AM, zhai <qyz...@gmail.com> wrote:

> To match the processor, make slice indexes and pointers 64 bits, and
> keep int at 32 bits.

That's not going to work. len(s) is defined to return 'int'.
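A hypothetical snippet shows the problem with mixing the sizes:

func scan(s []byte) {
    var i int64
    for ; i < len(s); i++ { // does not compile: mismatched types int64 and int
        _ = s[i]
    }
}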

David Roundy

May 25, 2011, 9:36:08 PM
to r...@golang.org, Michael Shields, golang-nuts
I think it'd be lovely to introduce a separate rune type around the
same time, or maybe switch to int32 for runes. Using 64-bit ints to
represent runes seems like a huge waste. I like the idea of ints
being 64-bit, so you avoid large-slice issues, but it seems a waste to
make possibly large slices of runes take twice the memory, when runes
are known to always fit in 32 bits, even on large machines.

I think that using int for runes is a bit of an ugly hack anyhow.
Does int have some advantage over int32 in this context?
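A sketch of what I mean (hypothetical helper): code points always fit
in an int32, so there's no need to pay for a 64-bit int per element.

func codepoints(s string) []int32 {
    rs := make([]int32, 0, len(s))
    for _, r := range s {
        rs = append(rs, int32(r)) // a Unicode code point fits in 32 bits
    }
    return rs
}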

David

--
David Roundy

Rob 'Commander' Pike

May 25, 2011, 9:45:29 PM
to David Roundy, r...@golang.org, Michael Shields, golang-nuts
Although a rune type makes sense, it's easy to overthink the distinctions between different uses of integers. The basic int type is good most of the time.

-rob

bflm

May 26, 2011, 3:00:31 AM
to golang-nuts
On May 25, 11:35 pm, John Asmuth <jasm...@gmail.com> wrote:
> +1 for int going the way of float.

-1 for the same. 'int' is probably the type name I write most often in
Go programs, and I don't want the name to get longer (I miss 'float'
now that I must write the longer 'floatXY'). I don't care whether int
is 32 or 64 bits (that's what the spec says and how I use it), so I
wouldn't mind if it became just an alias for int32, the way byte is
for uint8.

Namegduf

May 26, 2011, 3:18:44 AM
to golan...@googlegroups.com
On Thu, 26 May 2011 00:00:31 -0700 (PDT)
bflm <befeleme...@gmail.com> wrote:

> On May 25, 11:35 pm, John Asmuth <jasm...@gmail.com> wrote:
> > +1 for int going the way of float.

Getting rid of int can't be done without using one of the fixed int
sizes for slice indexes, lengths, and capacities, always a consistent
one so Go programs can be portable.

This would either permanently tie us to int32, which is not acceptable,
or require using int64 for all indexing, lengths, and capacities, and
doing all arithmetic involving them in int64. On x86 and ARM I'd expect
that to be a significant slowdown, as (I believe) operations on 64-bit
numbers are slow on those architectures.

Since manipulating memory is common in the performance-critical parts
of a program, and slice-index arithmetic is used throughout Go programs
(including everywhere the stdlib computes a slice index), I think it
could significantly impede the performance of all Go programs. There'd
be little you could do to optimise short of switching to assembly or
unsafe.

So -1 from me unless someone shows that the performance effect of using
int64 for all indexing and sizes is trivial.
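A micro-benchmark along these lines (a sketch, not measurements; names
hypothetical) is what I'd want to see:

package bench

import "testing"

var sink int64 // keep results live so the loops aren't optimised away

func BenchmarkSum32(b *testing.B) {
    xs := make([]int32, 1<<16)
    for i := 0; i < b.N; i++ {
        var s int32
        for _, x := range xs {
            s += x
        }
        sink = int64(s)
    }
}

func BenchmarkSum64(b *testing.B) {
    xs := make([]int64, 1<<16)
    for i := 0; i < b.N; i++ {
        var s int64
        for _, x := range xs {
            s += x
        }
        sink = s
    }
}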

roger peppe

May 26, 2011, 4:32:47 AM
to Namegduf, golan...@googlegroups.com
as a more radical proposal, you could potentially add slices
with different index types to the language by allowing an
integer type name instead of the empty [].

e.g.
var x [int32]string

then the index type is explicit; []T would be equivalent to [int]T.

when slicing such a type, the index would be allowed to be
of any integer type, as currently, and the resulting slice would
have the index type that was used.

e.g.

func foo(x [int64]byte) {
    for len(x) > 0 {
        buf := x[0:8192] // type of buf is [int]byte because the default constant type is int
        read(buf)
        x = x[8192:len(x)] // len(x) is int64, so this assignment is fine
    }
}

i don't know how well this would work in practice, but
perhaps it's worth considering.

Ibrahim M. Ghazal

May 26, 2011, 11:26:39 AM
to Russ Cox, golan...@googlegroups.com

This is probably far-fetched, but what about using uintptr for
indexes? It makes some sense when you think about it: the largest
slice/array can only be as big as the memory.

Russ Cox

May 26, 2011, 11:31:02 AM
to Ibrahim M. Ghazal, golang-nuts
for i := 0; i < len(x); i++ {

implies that len has to be an int.
we've been through all this.
we know int has to grow.
it's not going to happen today.

roger peppe

May 26, 2011, 11:43:33 AM
to r...@golang.org, Ibrahim M. Ghazal, golang-nuts

if x was of type [int64]T then it would have
to be

for i := int64(0); i < len(x); i++ {
}

same as for map[int64]T, or passing
an offset to os.File.ReadAt. i don't think
that's a particularly strong argument against.

having the freedom to have a slice
take only two words rather than three might
be useful, especially when dealing with slices
of slices.
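the layouts i have in mind look roughly like this (hypothetical
structs mirroring the slice header; sizes are for a 64-bit machine):

package main

import (
    "fmt"
    "unsafe"
)

type sliceOfInt struct { // [int]T: pointer, len, cap = three words
    data     uintptr
    len, cap int
}

type sliceOfInt32 struct { // [int32]T: len and cap share one word
    data     uintptr
    len, cap int32
}

func main() {
    fmt.Println(unsafe.Sizeof(sliceOfInt{}), unsafe.Sizeof(sliceOfInt32{})) // 24 16
}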

John Asmuth

May 26, 2011, 11:54:55 AM
to golan...@googlegroups.com


On Thursday, May 26, 2011 11:31:02 AM UTC-4, Russ Cox wrote:

we know int has to grow.
it's not going to happen today.

Famous last words. 

Steve McCoy

May 26, 2011, 1:05:00 PM
to golan...@googlegroups.com, Namegduf
Seems like int64-length slices would be a pain to use and implement on 32-bit.


On Thursday, May 26, 2011 4:32:47 AM UTC-4, rog wrote:
as a more radical proposal, you could potentially add slices
with different index types to the language by allowing an
integer type name instead of the empty [].

i don't know how well this would work in practice, but

roger peppe

May 26, 2011, 1:32:17 PM
to golan...@googlegroups.com, Namegduf
On 26 May 2011 18:05, Steve McCoy <mcc...@gmail.com> wrote:
> Seems like int64-length slices would be a pain to use and implement on
> 32-bit.

i don't think there's a problem - you'd just run out of memory if you
tried to allocate one that was too big, same as if you tried to
allocate three 2GB arrays under 32 bit.

kortschak

May 26, 2011, 6:14:02 PM
to golang-nuts
I like this idea, though it doesn't seem necessary to have an
[int64]array declaration, since int in 6g would be equivalent to int64
and int64 would not be functionally usable on a 32-bit architecture.

> as a more radical proposal, you could potentially add slices
> with different index types to the language by allowing an
> integer type name instead of the empty [].

Extending this idea to allow using uint types as indices as well gives
additional space in arrays without significant extra computational
cost - I know a single extra bit would certainly help out with the
project I'm working on.

Definitely looking forward to int growing in 6g, but for the sake of
portability I think int should remain as the default,
implementation-dependent integer type.

Steve McCoy

May 26, 2011, 8:42:48 PM
to golan...@googlegroups.com, Namegduf
Good point.

Florian Weimer

May 28, 2011, 3:40:27 PM
to André Moraes, golan...@googlegroups.com
* André Moraes:

> Making int represent only one size, either 32 or 64 bits, reduces
> the headaches that come from having the same type represent
> more than one thing depending on what platform you are
> compiling/running on.

There's also the option to make int arbitrary-precision, using fixnums
for numbers near zero.

Constructs like

> for i := 0; i < len(x); i++ {

could be specialized to word-sized integers by the compiler. Perhaps
there's even a bit-fiddling check which folds the fixnum check into
the array bounds check.
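A sketch of the fixnum representation (hypothetical type; it spills
into math/big only when the word-sized fast path overflows):

package fixnum

import "math/big"

// Int keeps small values in a machine word and allocates a *big.Int
// only when arithmetic overflows the word.
type Int struct {
    small int64    // valid while big == nil
    big   *big.Int // set once small can no longer hold the value
}

func (x Int) toBig() *big.Int {
    if x.big != nil {
        return x.big
    }
    return big.NewInt(x.small)
}

func (x Int) Add(y Int) Int {
    if x.big == nil && y.big == nil {
        s := x.small + y.small
        // signed-overflow check: absent overflow, the sum moves in
        // the direction of y (Go defines signed overflow to wrap)
        if (s > x.small) == (y.small > 0) {
            return Int{small: s}
        }
    }
    return Int{big: new(big.Int).Add(x.toBig(), y.toBig())}
}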
