It depends on the types involved in the conversion. Converting uint64
to int64 costs nothing: the bits don't change. Only the interpretation
of them does.
Just be careful about how large your uint64 is: anything at or above 2^63
will come out negative once it's converted to int64.
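To make that cutoff concrete, a minimal sketch (nothing here beyond the standard math package):

package main

import (
	"fmt"
	"math"
)

func main() {
	// Anything at or above 2^63 comes out negative when reinterpreted
	// as int64; the bit pattern is untouched, only its meaning changes.
	big := uint64(math.MaxInt64) + 1 // 2^63
	fmt.Println(int64(big))          // -9223372036854775808
	fmt.Println(int64(uint64(42)))   // 42: values below 2^63 are unaffected
}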
- Evan
From memory, the only parts of the standard library that use uint64 in
a file system context are data structures that mirror those specified
by the operating system (in syscall, etc).
If you don't expect to work with files larger than 9e18 bytes, just
use int64 and avoid the conversion entirely. Using a signed type has
the added benefit of making it easier to work with relative offsets.
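As a sketch of the relative-offset point (the file name here is made up, and the file is assumed to be at least 16 bytes long):

package main

import (
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Open("example.dat") // hypothetical file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Seek offsets are int64 throughout os and io, so stepping backwards
	// is just a negative offset; no unsigned/signed conversion is needed.
	size, err := f.Seek(0, io.SeekEnd)
	if err != nil {
		log.Fatal(err)
	}
	pos, err := f.Seek(-16, io.SeekCurrent)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("size %d bytes, now positioned at %d\n", size, pos)
}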
Andrew
They're not? If you point out places where we use a uint64
for file size we'll probably change it to int64.
Russ
Usually the easiest way to answer these sorts of questions is to knock up some quick benchmarks using gotest, but unfortunately the Go compiler won't allow a cast on its own, so it has to be measured in the context of assigning to a variable:
package baseline
import "testing"
var x64 int64
var u64 uint64
func BenchmarkBaselineCastInt64ToUint64(b *testing.B) {
for i := 0; i < b.N; i++ { u64 = uint64(x64) }
}
func BenchmarkBaselineCastUint64ToInt64(b *testing.B) {
for i := 0; i < b.N; i++ { x64 = int64(u64) }
}
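(These days the gotest wrapper has been folded into go test; assuming the functions above sit in a file named baseline_test.go, go test -bench=Baseline runs them.)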
On an Atom 270 (1.6 GHz) under OS X 10.6.4 this code gives:
baseline.BenchmarkBaselineCastInt64ToUint64 500000000 4 ns/op
baseline.BenchmarkBaselineCastUint64ToInt64 500000000 4 ns/op
This is sufficiently close to the minimum resolution of the benchmarking library that, taking into account the likely overhead of accessing the global variables, you're looking at a negligible cost for the cast.
Ellie
Eleanor McHugh
Games With Brains
http://feyeleanor.tel
----
raise ArgumentError unless @reality.responds_to? :reason
You could just compare against a case sans cast*... There shouldn't be
a difference (not even a negligible one), since the conversion
shouldn't issue any instructions; it only serves as an annotation
for the compiler and the maintainer in this case (if I understand
correctly).
*ie:
var x64_1 int64
func BenchmarkBaselineAssignInt64SansCast(b *testing.B) {
for i := 0; i < b.N; i++ { x64 = x64_1 }
}
According to the spec:
• When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v := uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.
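That spec example is easy to confirm directly; a tiny program using only the spec's own values:

package main

import "fmt"

func main() {
	v := uint16(0x10F0)
	// int8(v) truncates to 0xF0, which as a signed byte is -16; widening
	// -16 back out to uint32 sign-extends it to 0xfffffff0, as the spec says.
	fmt.Printf("%#x\n", uint32(int8(v)))
}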
I haven't checked in the compiler, so I may well be wrong, but this appears to be a runtime operation.
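One way to check without a stopwatch is to dump the generated code for a throwaway package and see what the conversion compiles to; a sketch (the exact invocation varies by toolchain version, and a toolchain of this thread's vintage would have spelled it 6g -S):

// conv.go: a tiny throwaway package; dump its assembly with something
// like `go build -gcflags=-S conv.go` and look at what Convert compiles
// to. For same-width integer types the conversion itself should add no
// instruction beyond the move, which matches the benchmark result above.
package conv

var u uint64
var x int64

func Convert() { u = uint64(x) }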
> *ie:
> var x64_1 int64
> func BenchmarkBaselineAssignInt64SansCast(b *testing.B) {
> for i := 0; i < b.N; i++ { x64 = x64_1 }
> }
For the hardware I'm using, this benchmarks slower than the cast :)