No laziness: 1<<(2*16) overflows the uint32 and becomes zero.
You just made a slice of zero length.
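A minimal demonstration of the wrap-around (using a non-constant shift count, since an overflowing constant expression would be rejected at compile time):

```go
package main

import "fmt"

func main() {
	// With a non-constant shift count the overflow is silent: shifting
	// a uint32 left by 32 bits wraps the value to zero at runtime, so
	// the make call below allocates a zero-length slice.
	var shift uint = 2 * 16
	n := uint32(1) << shift
	a := make([]int32, n)
	fmt.Println(n, len(a)) // 0 0
}
```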
>
> I'm running the program on a Core i5 under Ubuntu amd64, so I would
> expect (possibly unreasonably) that Go would implement int as 64 bits
> wide, allowing me to use more than int32-equivalent indexes for an
> array, but this is obviously not the case.
>
> So my questions (other than the implicit one above about laziness -
> and how to catch errors like that so the cause is clear - try to
> assign/refer immediately after allocation?) are:
>
> What was the outcome in terms of int implementation dependency on
> underlying architecture outside the discussion referred to above?
>
> Is there some way to force 6g to make ints 64 bits wide?
>
> thanks
--
=====================
http://jessta.id.au
Negative indexes can't be used; they would cause a panic, just like
indexing beyond the length of the slice (which is what would happen if
you underflowed an unsigned index anyway).
Algorithms can benefit from the ability to express negative offsets
and such. If indexes were unsigned you'd always need a conversion in
these cases.
Andrew
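As an illustration of why signed indexes help, here is an insertion sort (a generic example, not code from this thread) whose inner scan naturally runs the index down to -1:

```go
package main

import "fmt"

func main() {
	a := []int{3, 1, 2}
	for i := 1; i < len(a); i++ {
		v := a[i]
		// j legitimately reaches -1 to terminate the scan; with an
		// unsigned index type, j-- at zero would wrap to a huge value
		// and the bounds check would panic instead.
		j := i - 1
		for j >= 0 && a[j] > v {
			a[j+1] = a[j]
			j--
		}
		a[j+1] = v
	}
	fmt.Println(a) // [1 2 3]
}
```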
At some point we will make int 64 bits wide on amd64. That's why the
language spec permits it.
> I've had a look at compartmentalising arrays in a BigArray object
> (naively an array of arrays), but all the solutions I see are going to
> give a significant time cost and either loss of slice functionality or
> really dreadful hits to efficiency when pulling slices across sub-
> arrays.
Really? I'm surprised to hear that. I'd imagine you could index with
an int64 and use the high bits to select sub-arrays with a very small
cost.
What kinds of algorithms need random access to such a large piece of memory?
Andrew
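For what it's worth, a minimal sketch of that scheme, with hypothetical names: the high bits of a 64-bit index pick a chunk, the low bits index within it. loBits is tiny here so the example runs cheaply; a real big array might use 24-30 low bits. The chunks are slices rather than arrays, so growing the outer slice copies only slice headers.

```go
package main

import "fmt"

const loBits = 4              // chunk size of 16 elements, for demonstration only
const chunkSize = 1 << loBits
const loMask = chunkSize - 1

type BigI32Array struct {
	chunks [][]int32
}

// NewBigI32Array allocates enough chunks to hold n elements.
func NewBigI32Array(n int64) *BigI32Array {
	nChunks := (n + chunkSize - 1) / chunkSize
	c := make([][]int32, nChunks)
	for i := range c {
		c[i] = make([]int32, chunkSize)
	}
	return &BigI32Array{chunks: c}
}

// At selects the chunk with the high bits and the element with the low bits.
func (b *BigI32Array) At(i int64) int32 {
	return b.chunks[i>>loBits][i&loMask]
}

func (b *BigI32Array) Set(i int64, v int32) {
	b.chunks[i>>loBits][i&loMask] = v
}

func main() {
	a := NewBigI32Array(100)
	a.Set(75, 42)
	fmt.Println(a.At(75)) // 42
}
```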
> type bodySlice [1 << loBits]int32
>
> type BigI32Array struct {
>         body []bodySlice
>         tail []int32
> }
>
> v = self.body[hi][lo]
> bigarray.go:35: internal compiler error: index is zero width
(when loBits == 30).
I think this is a compiler bug in 6g. It's happening because the total
size of the object is (1 << 30) * 4 == 0 (when computed using a 32-bit
type). But I think 6g ought to permit this, since the index will always
fit in an int. At the very least it should give a different error
message. Please open an issue for this. Thanks.
I reduced the test case down to this:
package p

type bodySlice [1 << 30]int32

type BigI32Array struct {
	body []bodySlice
}

func (self *BigI32Array) At(hi, lo int) int32 {
	return self.body[hi][lo]
}
By the way, this fails with 8g:
foo.go:2: type [1073741824]int32 larger than address space
That won't be fixed.
Ian
a variation of issue 713, perhaps?
I believe that is a different issue, happening at link time rather than
at compile time. Fixing this problem may run into that one, though, I
don't know.
Ian
> Compilation fails with:
>
> $ gb
> (in pkg/bio/bigarray) building pkg "bio/bigarray"
> bigarray.go:82: internal compiler error: bad argwid func(*uint8, int,
> [][268435456]int32, [268435456]int32) [][268435456]int32
> 1 broken target
>
> (line 82 is bracketed with comments; it reads: 'self.body =
> append(self.body, t)')
>
> A simpler version of this:
>
> package main
>
> func main() {
>         var t [1 << 28]byte
>         a := make([][1<<28]byte, 5)
>         a = append(a, t)
> }
>
> fails with a similar but not identical error:
>
> $ 6g t.go
> t.go:6: internal compiler error: bad width
>
> Using the value 1<<20 compiles without complaint, so the syntax is
> clearly fine.
>
> Suggestions?
Any time you see the words "internal compiler error" you've found a
compiler bug. Often the code is invalid, though this code looks OK to
me.
However, it's worth pointing out that your test cases are going to copy
huge amounts of data. While the compiler really ought to accept them,
or at least give a sensible error message, in practice this code is not
going to be feasible to run. Calling append for a gargantuan array is
going to copy the entire array. I think you need to be working with
slices of slices here, and just arrange to allocate your slices at the
desired size. Using arrays tends to lead to copying, which you can't
afford.
Ian
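A sketch of the contrast Ian describes, with the element size scaled down from 1<<28 to 1<<8 so it runs cheaply (the contrast, not the sizes, is the point):

```go
package main

import "fmt"

func main() {
	// With array elements, append copies the entire array value:
	// each element of a [][1 << 8]byte is 256 bytes, and every
	// append (and every reallocation on growth) copies all of it.
	var big [1 << 8]byte
	arrs := make([][1 << 8]byte, 0)
	arrs = append(arrs, big)

	// With slices of slices, append copies only the slice header
	// (pointer, length, capacity); the backing storage is shared
	// rather than duplicated.
	buf := make([]byte, 1<<8)
	sls := make([][]byte, 0)
	sls = append(sls, buf)

	fmt.Println(len(arrs), len(sls)) // 1 1
}
```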
> As an aside, can anyone tell me why runtime.Free() is noted as only
> for debugging?
Calling runtime.Free() is unsafe, in that it can cause your program to
fail if you free something and retain a pointer to it.
Ian
One more array proposal:
I'd keep int 32 bits wide.
The size of int does not have to hinder bigger arrays.
Currently, make() and [] accept ANY integer type without explicit
conversion to int:
a := make([]int32, byte(2))
println(a[byte(1)])
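That current behavior is easy to check; a quick sketch:

```go
package main

import "fmt"

func main() {
	// Both the make size and the index expression accept any integer
	// type; no explicit conversion to int is needed.
	a := make([]int32, byte(2))
	a[byte(1)] = 7
	fmt.Println(len(a), a[byte(1)]) // 2 7
}
```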
Why don't they deal appropriately with integer types wider than int
(uint32, int64, uint64)?
a := make([]int8, uint64(1<<32+2))
println(a[uint64(1<<32)])
Well, one problem still remains: the return type of len() and cap().
The simplest (but ugly) solution: add len64() and cap64().
A bit nicer:
give them a "fuzzy" integer type, similar to the size parameter of make().
In "len(a) + 1" it would be int.
In "len(a) < byte(1)" it would be byte.
In "println(len(a))" or "x := len(a)" it would be uintptr.