in retrospect, we'll need to change it to
be 64 bits in both 6g and gccgo in order
to have >2G-element slices. we know that,
it just hasn't happened. there are other
things that need to happen first.
russ
I'm pretty sure that wasn't Ken's rationale for the choice in 6g. :-)
not to mention overall memory usage.
I'd go even beyond that and say that most ints hold a single element
of the range!
--
Gustavo Niemeyer
http://niemeyer.net
http://niemeyer.net/blog
http://niemeyer.net/twitter
Making int represent only one size, either 32 or 64 bits, reduces
the headaches that come from having the same type represent
different things depending on what platform you are
compiling/running on.
--
André Moraes
http://andredevchannel.blogspot.com/
It doesn't answer the "why", but one possible reply is that if you
really care about how many bits your int has, you should state it (and
use int64/int32/etc.)!
--
Arlen Cuss
Software Engineer
Phone: +61 3 9877 9921
Email: ar...@noblesamurai.com
Noble Samurai Pty Ltd
Level 1, 234 Whitehorse Rd
Nunawading, Victoria, 3131, Australia
It's more a case of: if a program behaves incorrectly when the size of
an int changes, then the program is wrong and should be fixed.
Andrew
> To respect the processor, change the slice index and pointer to 64 bits,
> and keep int at 32 bits.
That's not going to work. len(s) is defined to return 'int'.
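[Editorial sketch, not from the thread: the spec point can be checked directly. The result of len has type int, so it cannot be assigned to a differently sized integer type without an explicit conversion.]

```go
package main

import "fmt"

func main() {
	s := make([]byte, 10)
	n := len(s) // n has type int

	// var m int32 = len(s) // would not compile: cannot use type int as int32

	m := int32(len(s)) // an explicit conversion is required
	fmt.Println(n, m)  // 10 10
}
```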
I think that using int for runes is a bit of an ugly hack anyhow.
Does int have some advantage over int32 in this context?
David
--
David Roundy
-rob
> On May 25, 11:35 pm, John Asmuth <jasm...@gmail.com> wrote:
> > +1 for int going the way of float.
Getting rid of int can't be done without using one of the fixed int
sizes for slice indexes, lengths, and capacities, always a consistent
one so Go programs can be portable.
This would either permanently tie us to int32, which is not acceptable,
or require using int64 for all indexing, lengths, and capacities, and
doing all arithmetic involving them in int64. On x86 and ARM this would
be a significant slowdown, I'd expect, as (I believe) operations on
64-bit numbers are slow on those architectures.
Since I suspect manipulating memory is common in the performance-critical
parts of a program, and it is an operation used throughout Go
programs (anywhere the stdlib uses arithmetic to compute a slice index), I
think it could significantly impede the performance of all Go programs.
There'd be little you could do to optimise without switching into
assembly or unsafe.
So -1 from me unless someone shows that the performance effect of using
int64 for all indexing and sizes is negligible.
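[Editorial sketch of the pattern being discussed, not a benchmark and not from the thread: if lengths were int64, every index loop would look like this, and on 32-bit targets each comparison and increment of i would become a multi-instruction 64-bit operation.]

```go
package main

import "fmt"

// sum64 indexes a slice with int64 throughout, as all code would
// have to if indexes, lengths, and capacities were defined as int64.
func sum64(s []byte) int64 {
	var total int64
	for i := int64(0); i < int64(len(s)); i++ {
		total += int64(s[i])
	}
	return total
}

func main() {
	s := make([]byte, 1000)
	for i := range s {
		s[i] = 1
	}
	fmt.Println(sum64(s)) // 1000
}
```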
e.g.

	var x [int32]string

then the index type is explicit; []T would be equivalent to [int]T.
when slicing such a type, the index would be allowed to be
of any integer type, as currently, and the resulting slice would
have the index type that was used.
e.g.

	func foo(x [int64]byte) {
		for len(x) > 0 {
			buf := x[0:8192] // type of buf is [int]byte because default constant type is int.
			read(buf)
			x = x[8192:len(x)] // len(x) is int64, so this assignment is fine.
		}
	}
i don't know how well this would work in practice, but
perhaps it's worth considering.
This is probably far-fetched, but what about using uintptr for
indexes? It makes some sense when you think about it: the largest
slice/array can only be as big as the memory.
implies that len has to be an int.
we've been through all this.
we know int has to grow.
it's not going to happen today.
if x was of type [int64]T then it would have
to be

	for i := int64(0); i < len(x); i++ {
	}
same as for map[int64]T, or passing
an offset to os.File.ReadAt. i don't think
that's a particularly strong argument against.
having the freedom to have a slice
take only two words rather than three might
be useful, especially when dealing with slices
of slices.
we know int has to grow.
it's not going to happen today.
> as a more radical proposal, you could potentially add slices
> with different index types to the language by allowing an
> integer type name instead of the empty [].
> i don't know how well this would work in practice, but
i don't think there's a problem - you'd just run out of memory
if you tried to allocate one that was too big, same as if you
tried to allocate three 2GB arrays under 32 bits.
> Making int represent only one size, either 32 or 64 bits, reduces
> the headaches that come from having the same type represent
> different things depending on what platform you are
> compiling/running on.
There's also the option to make int arbitrary-precision, using fixnums
for numbers near zero.
Constructs like
> for i := 0; i < len(x); i++ {
could be specialized to word-sized integers by the compiler. Perhaps
there's even a bit-fiddling check which folds the fixnum check into
the array bounds check.