Russ
Depending upon the CPU in question, you might get substantially better performance out of one size vs. another. For example, on an old-school (pre-X64) Pentium chip, your best performance for standard (non-SSE) operations would be with 32-bit sizes, hands down, despite the fact that the chip provides native instructions for 8- and 16-bit arithmetic. On older 386 chips, however, the 16-bit operations were faster despite the chip providing hardware opcodes for 32-bit instructions. I'm not actually sure offhand what gives the best performance on an X64 chip, but it's entirely possible (especially on the earlier X64 chips) that the 32-bit operations are faster than 64-bit ones.

The point is, by having the unsized types in the language, a particular implementation can choose which size to default to for best performance. I think this is entirely reasonable, given that 90% of the time, even in today's world, somebody declaring an "int" just plain doesn't need the full -9223372036854775808 to 9223372036854775807 range of a 64-bit int (in fact, most of the time such a type is only going to have a very limited range indeed). So why not allow the compiler the chance to choose the best default size for the particular platform? Especially with Go's approach, where if you really need a sized type you can explicitly declare one.
The key point is that in the vast majority of cases the programmer won't actually care whether it's a 32-bit or 64-bit integer under the hood, because either way they're not going to be using that much precision. Consider, for example, the typical loop counter. Most of us aren't out there every day writing dozens and dozens of loops that execute 9223372036854775807 or even 2147483647 times. Most of our loops execute, at most, a few thousand times (many only a few hundred, or even a few dozen; consider iterating over a string, for example).
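To make that concrete, here's a small, illustrative Go sketch (the slice and the numbers are made up, not taken from anyone's real code): the ordinary loop counter stays an unsized "int", and a sized type is spelled out only where the range genuinely demands it.

    package main

    import "fmt"

    func main() {
        names := []string{"alpha", "beta", "gamma"}

        // Typical loop counter: the bound is tiny, so the unsized "int" is
        // fine, and the implementation is free to pick whatever width is
        // fastest on the target.
        total := 0
        for i := 0; i < len(names); i++ {
            total += len(names[i])
        }
        fmt.Println(total)

        // When the full range genuinely matters, a sized type can be
        // declared explicitly; this value would overflow a 32-bit int.
        var big int64 = 9000000000
        fmt.Println(big)
    }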
I'm a little dismayed even to see the suggestion of getting rid of the unsized float and complex types. People haven't had to really deal with this problem for a generation (a human generation, not a computer generation; the early 90s was the last time this was really an issue), but this is exactly the moment when I think it's becoming relevant again. Between the transition to 64-bit chips and the transition to non-Intel-based platforms (mobile chips, GPUs, etc.), I think it's a huge mistake to take out these types.
1) All mainstream processors today perform well with 32-bit ints. Not all of them perform well with 64-bit ints. My example was from the 16-bit era because it was a little more clear cut back then than it is now, but the difference still exists.
2) This is already violated in the mainstream Go release. The 64-bit Go compilers use 32-bit values for the unsized "int", and hence for the builtin "len" function. If anything, I think your point is an argument for changing that function to return a "uintptr" value instead of an "int".
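For what it's worth, a minimal hypothetical sketch of what that suggestion would mean at a call site ("len" actually returns "int"; the widening conversion below only illustrates the uintptr idea, it isn't current behavior):

    package main

    import "fmt"

    func main() {
        // Hypothetical: if "int" stayed at 32 bits on 64-bit targets, a
        // caller wanting the full address-space range could widen the
        // result of len itself, along the lines of the uintptr suggestion.
        buf := make([]byte, 1<<20)
        n := uintptr(len(buf))
        fmt.Println(n)
    }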
There are plenty of mainstream architectures in use today in which a 32-bit float is faster than a 64-bit float.
Simplicity is a nice goal, and I'm a fan of it. I'm not such a fan of it that I believe that it trumps all other considerations.
Just to be clear, I'm not fanatical about getting rid of unsized pointer types.
The problem with the analogy between integer types and float types is that in the case of integer types you don't care about the size unless it overflows. In the case of float types, you always need to care about the size, because it always affects your answer (unless you're only doing arithmetic on small integers times 2^n, which is exact, but then you'd be better off with a fixed-point representation). So there isn't the same possibility of "I just want a good representation".
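A tiny sketch of what I mean (illustrative only; the count is arbitrary):

    package main

    import "fmt"

    func main() {
        // Accumulate 0.1 ten million times in both widths. The exact
        // answer would be 1e6; the 32-bit sum drifts visibly while the
        // 64-bit sum stays much closer, so the chosen size shows up
        // directly in the result.
        var s32 float32
        var s64 float64
        for i := 0; i < 10000000; i++ {
            s32 += 0.1
            s64 += 0.1
        }
        fmt.Println(s32)
        fmt.Println(s64)
    }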
There has never been a speed advantage to 32-bit floats except in
terms of memory use (and cache), so the existing 32-bit float type
isn't defined as a "fast" float. It's just there (I presume) because
that's what it's called in C. I wouldn't object to that if the
float64 were called "double", which it is in most languages I know.
But I really think the language would be nicer without the "float"
type. Size really does matter for any floating-point use, either
because of memory consumption or because of required precision.
David
> If int guarantees 32 bits, then I guess porting Go to a 16-bit or 8-
> bit system will require all unsized ints to be converted to int32?
No, on such a system the type "int" would be 32 bits. On such a system
code which used "int" would most likely run slower than code which used
"int16" (or "int8") where possible.
Ian
> There has never been a speed advantage to 32-bit floats except in terms of memory use (and cache)...
I've found it very hard to find authoritative information
about all the different ARM FP options, but my understanding
was that the really fast NEON operations came from either
(a) being able to do more SIMD operations in parallel, which doesn't
really help Go proper, or (b) not being completely accurate in
the result, which also doesn't really help. I'd be very happy
to see a pointer to authoritative information in this area.
Russ
Yeah, I actually rewrote that sentence a few times. Interestingly
(so far as I know), it's only with recent processors, with the
introduction of SIMD units, that there has been a tendency toward
single-precision floating-point operations being faster than
operations with doubles. I think that relates to floats being used
in multimedia applications, where 32 bits is overkill because the
data is going to end up as either 16-bit audio or 8-bit color
channels anyway.
David
> On Dec 13, 4:07 am, Ian Lance Taylor <i...@google.com> wrote:
>> Gerald <gpla...@gmail.com> writes:
>> > If int guarantees 32 bits, then I guess porting Go to a 16-bit or 8-
>> > bit system will require all unsized ints to be converted to int32?
>>
>> No, on such a system the type "int" would be 32 bits. On such a system
>> code which used "int" would most likely run slower than code which used
>> "int16" (or "int8") where possible.
>
> Sounds logical.
> Something similar would be useful for floats.
> The default float should be 64-bit and
> be accepted as arguments of math functions.
If I understand your comment correctly, you are saying that the math
functions should change from taking float64 parameters to taking float
parameters. Is that correct? Still, what do we gain by keeping the
type "float"? It's not really the same as the type "int", which is
basically a combination of the maximum size of an object and an
efficient integer type.
Ian
There is a tentative plan to delete float from
the language and make constants like 1.5
have default type float64, precisely so that
they will be usable with the mathematical functions.
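A rough sketch of how that would read, assuming the plan goes
through (this is the proposal, not a description of today's release):

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // With float64 as the default type for constants like 1.5, a
        // literal flows straight into the math package (which takes
        // float64) without any explicit conversion.
        x := 1.5
        fmt.Println(math.Sqrt(x))
        fmt.Println(math.Sin(2.0))
    }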
Russ