Thinking about Go


Michael Jones

Jul 13, 2011, 3:25:59 AM
to golang-nuts

I've been using Go actively for months in various ways: extending big/nat.go to do speedier output conversions, finessing fmt, looking at compiler code optimizations, and implementing float128 and float256 algebra and intrinsic function packages. This has given me the chance to use things deeply enough to have a feel for the simple/basic parts of Go. (Nothing significant yet on goroutines or interfaces.) In turn this unleashed my free thinking to develop several proposals. I've collected the first dozen here. They are the most basic of the ideas and I offer them for your consideration.

You may cringe at yet another wish list. My optimism is that at least the ideas or approach to implementing them are different, logical, and interesting. They do not seem redundant to what I see in the mailing list and archives. Let's hope!

Respectfully and in awe that so much good has come from the Go project,
Michael (in a hotel room in NYC)

PROPOSAL 1: Implicit concatenation
Combining static strings: merge adjacent strings without "compile time '+' operator"
v := "aaa" "bbb" means v := "aaabbb", where space between strings may include newline

This can be implemented in the lexer (just saw quoted string, peek ahead to quoted string … wait a minute, that's one big string!)

PROPOSAL 2: Implicit widening within types for assignments
Experiencing frustrations with strict type equivalence. Appreciate clarity of meaning, simplicity of coercion-free language design, and avoidance of "invisible" type/coercion/operator bugs. Despite these, want at least a single liberty here, but prefer a series of slightly expanded versions as well:

a. Want the language to support assignment a = b where Type(a) is an unsigned integer type and a proper superset of unsigned integer Type(b) (i.e., uintA and uintB where A >= B). This is an elaboration of the assignability section of the specification.
var a uint32
var b uint8
b = 3
a = b // this needs nothing more than zero extension and can have no unexpected consequence.

b. Would like this same liberty in more indirect assignments: argument to function call, assigning return values from function call, slice index (promote small uint to a larger uint type required), sending on a channel, and so on. The argument is that nothing is lost/risked/confused here. This adds a free and unambiguous degree of polymorphism in the simple places and simplifies code by removing "meaningless" casts. (as in a = b vs a = uint32(b) above) 

c. Would like this same liberty for other types having perfect supersets. Complete list:
small unsigned int assigned to larger unsigned int (case a above)
small float assigned to larger float; every possible float32 can be expressed identically as a float64
small signed int assigned to a larger signed int; CPUs have an instruction for this
small complex float assigned to larger complex float

Expands and regularizes the request of b. Please understand that I am not asking for T3 = T2 op T1 or anything where two choices are plausible, or where evaluation makes sizes/signs/whatever confusing. This is more akin to lexical semicolon insertion. If the types don't match then we widen within the same type family if they share one, otherwise we bail like now. NB. Must not assign int to same-size float. That is not what I mean. Only proper FP=INT supersets are 
float64 = [u]int{32,16,8}, complex128 = [u]int{32,16,8}
float32 = [u]int{16,8}, complex64 = [u]int{16,8}

These would be OK if you're really gung-ho, but not what I'm asking for.

The basic idea here is that widened same-meaning value assignment (for subset/superset types) is not confusing, while code littered with casts to allow assigning a smaller uint to a bigger one is tedious and "noisy." Also, I am not asking for plausible but semantically richer assignments like float to complex, where the implicit zero for the imaginary component is invisibly presumed. Maybe we should make the programmer communicate what is happening in that instance. (I would not, but don't want to ask too much.)

I ask that the team consider this one carefully, and Proposal #5 at the same time.

PROPOSAL 3: Bit slices
Want to extend/repurpose slice syntax for numeric types and maybe boolean types. Yes, this sounds odd and yes it is a bit of mixed metaphor, but the inner idea here is to view these types as a fixed-length array of bits that can be referenced by slice syntax as lvalues and rvalues. This is not about C-language fields. I don't want to name them. I don't want them to be signed or whatever. Just a stream of innocent, unsigned bits. If I can have this, and am allowed point a above, then I can cleanly code:

var m uint16
var s int32
:
m[0:8], m[8:16] = m[8:16], m[0:8] // swap bytes
if s[31:32] == 1 {
	// handle negative values
}

Why do I like this? Because the swap bytes example more cleanly says what I mean than:

m = m << 8 | m >> 8

to a compiler that has instructions for moving the AH and AL registers around or a SWAB instruction to swap bytes. (Imagine byte shuffling or bit shuffling in an S-Box.) Or for something like:

var fpuCSW uint16
//fpuCSW[8:10] = 3 // set 80-bit extended precision mode
fpuCSW[8:10] = 2 // set 64-bit double precision mode

which seems clearer to me than the masking I would need to do to clear the old value (say 3) and insert the 2. (Though admittedly C-style named bit fields would be clearer here since their names would be self documenting.)

fpuCSW = fpuCSW &^ 0x300 | 0x200

I understand the second way fully and type it almost subconsciously (well, the C version), but that does not make the code itself clear.

I'll admit here that there is a problem with this. If the sliced bit block is not 8, 16, 32, or 64 bits in size, then there is an issue of type mismatch in the assignment. That is, you may insist that I write:

fpuCSW[8:10] = 2[0:2]

to prove my good intentions, and this seems ugly. For a constant of course you could count bits, but in general this must be addressed for variables or else we need to allow a very open minded "chopping" in the assignment which seems brutish for Go, though clear in practice. Likewise we could/should allow simple unsigned widening here as well to let a byte go into a 13-bit "bit slice." (Apologies to AMD and the 2901)

PROPOSAL 4: First-class radix 2 constants and formatted output
The fmt package is fantastic. The "%T" format is clever. Also, the "%#o" and "%#x" print numbers in a designated base that can then be read back in as programs or data ("032" and "0x3e" respectively.) We can also print numbers in binary, but there is no "%#b" to print them with a base specifier ("0b1101") and no support in the Go spec for binary literals. There can be times when this is the clearest way to specify a number and given the role of binary in computing, seems not too much to ask of compiler and library.

PROPOSAL 5: Operand-derived "integer size" for result.
One pain I experienced that motivated these notes was in src/pkg/big/nat.go. Imagine code like this:

var a uint64
var digits string = "0123456789abcdef…"
:
q := a / 10
r := a % 10
s[i] = digits[r]

What type is q? uint64. And r? uint64. This all makes sense. However, r can never be larger than 9 and would certainly fit in a uint4, so to speak. The same with "uint64(whatever) & 0x3f", which is a uint6. ;-) Now this funky size is not so important in general, but the following came up in various ways:

a. I had to cast the "10" (when it was b, for base, in the general code) to the same uint64 type as a so that the operation was possible, since mixed types are forbidden. My base ranged from 2 to 16, yet I had to make it a 64-bit integer and ask the compiler to do a 64-bit mod and div (divrem is a later point, so hold the thought), when knowing that I only wanted to divide by a much shorter number would have helped in code generation; in the case of a boolean AND operation, it would make the size of the result the minimum of the two operands. My first attempt to put a % 10 into a byte was impossible. I had to cast it big, then cast it all small, and the whole exercise was noise because the operator and one constant operand defined the size of the result. Convenience and clarity are the argument here, but there is another: efficiency. A smart compiler doing a mod or & with a small integer value like 10 could easily understand (interval analysis) that the result of that operation could never exceed the bounds of my 36-byte string of digits and skip bounds checking in this time-critical inner loop.

What frustrated me was that I could not say what was actually happening algorithmically. I had to make the small divisor (when a variable) big, do the big divide, then cast the small result to small, etc. Seems less perfect than the rest of Go.

b. In the context of the bit slice proposal, a z-bit slice of an int64, as in i[n:n+z], is not an int64; it is an intZ and should be assignable to integers of size >= Z. There is a confusion, though, about what the natural size should be for newthing := oldthing[n:n+z]. T(oldthing) is safe, but if n and z are constants, should the natural size be the just-big-enough uint? uint8 for a "uint7" bit slice, as above?

c. Presumably people could not use bit slices on ambiguously sized values like int and float; since they could be "any" size they could be smaller than the supplied upper or lower indices.

PROPOSAL 6: multiple valued built-in operations
The multiple return value logic is sublime. I have used it to greatly improve code clarity vs. passing secondary return placeholders by pointer in C. But, there is no flexibility about ignoring secondary result values returned from function calls ("tuple assignment" in the sense of silently ignoring rather than using "_") Setting this aside for a moment, it would be an interesting subtlety to have tuple valued versions of basic arithmetic that work this way:

var a, b int
sum, carry := a+b
difference, borrow := a-b
product, highProduct := a*b
quotient, remainder := a/b  // aka, divMod

The core idea here is that these hardware operations inherently compute more than is requested of them, yet they are designed in all languages to yield only the standard result (the first in each case above). However, the other result is easily come by and efficient to generate in assembler, but often no support exists for getting to it, and a function call is an expensive alternative. How much nicer then if I could just ask for it using the simple and existing tuple construct! However, for this to work a change of rules would be needed in tuple assignment: either make ignoring the secondary value OK, make the rules in assignment special, or make the exact code above OK. (Otherwise, x := y+z would be illegal!) I really like the look of this and with the right rethinking of rules it would be super. An ugly approach would be a compiler built-in like "q, r := TwoValues(a/b)" [yucky though; I would really like to write what is shown in the example above].

PROPOSAL 7: integer exponentiation operator in Go
Despite C's success without a power operator ("**" in FORTRAN and "^" in BASIC), I have always missed built-in exponentiation, for two reasons. In human mathematical parlance, powers are part of the simple repertoire along with +, -, *, and /, so I mentally expect to express them without recourse to packages and function calls; and, for integer powers, expansion in binary terms or otherwise is easily done perfectly by a compiler.

b := a**15

compiler: 5 mul, 2 registers

t1 := a*a
t2 := t1*a // a**3
t3 := t2*t2 // a**6
t4 := t3*t3 // a**12
b := t4*t2 // a**15 

Yes, everyone who read Knuth's Vol. 2 knows to do this, but that is not everyone who will touch Go. One day of tweaking the compiler can give this to everyone all the time. Of course a**b, with b not a positive integer constant, is just a call to a function: either the binary power function for integer exponents or the expensive fp log/mul/exp for real arguments, at least for their last little bit. Admission: there is not a lot of a**15 in any code, but there is a lot of a*x**3 + b*x**2 that a compiler could factor and use Horner's evaluation for if it so desires. Still, the code above is what compilation is all about: make the expression clear in source and do the good translation automatically.

PROPOSAL 8: Optional UTF-8 source symbols
Negative literal numbers are not "0-3" or "-3" ("additive inverse of 3") but actually "negative 3", with the subtle point being that unary minus is not the same as "this number is negative." Pedants (like me) may care about the distinction and, as it happens, Unicode has several small raised-bar symbols that would make a nice-looking proper "negative literal" indicator. If the language allowed one of these to be equivalent to '-' in "-123", that would be swell.

PROPOSAL 9: Swap (actually, an observation more than a suggestion)
Since you have ":=" I wanted to remind/observe that Dijkstra had used ":=:" in writings to indicate an interchange. Your "a,b = b,a" is his "a :=: b". Just saying, in case it appeals to you. It would work with tuples naturally, "a,b,c :=: x,y,z" and bit slices:

    m[0:8] :=: m[8:16] // swap bytes: short, terse, clear.

PROPOSAL 10: Operator overloading
Wait, don't say no yet, this is slightly different. I don't mean arbitrary overloading in the sense of the non-sequitur C++ "<<" I/O operations, which have always irritated me. I mean mathematical operator overloading, in the context of mathematical types. While a language feature may be more general than this, I'm eager to engage in the discussion about how to make the following work well.

I often craft programs that do extensive computations in mathematical structures of my liking, including extended precision integers (mpint-like compile-time sizes), unlimited precision integers, interval numbers and associated arithmetic, and complex numbers composed of any of these, such as complex integers (aka Gaussian integers), quaternions, and octonions. I do all of this in C++ with code that ends up looking like math so that I can leverage my understanding of "a = 3*x + 2*y + 14" even if a, x, and y are "complex interval extended-precision" floats. Go is so close here. All I need is a way for "a+b" to match against the various "Add(a,b)" functions that I am writing. If this worked well, then complex could come out of the language and become a package, GRI's Int and Rat could be used in existing equations, and so on. This could be as simple (i.e., restrictive) as admitting that "+" is "BinaryPlus" and letting me create a BinaryPlus for my operand types ("func BinaryPlus(a, b ComplexIntervalFloat512) ComplexIntervalFloat512" or "func (a *ComplexIntervalFloat512) Plus(b ComplexIntervalFloat512) ComplexIntervalFloat512", and so on).

This could also mean the same function name disambiguated by argument structure, which hints at mangling and may disgust you, but I have an idea about that. Why not be explicit with some type of:

func arbitraryUniqueName(a, b ComplexIntervalFloat512) ComplexIntervalFloat512 implements "+" {

or some such. That means that the function names would all be different and an explicit statement of "I am '+' for this pair of types at least one of which is defined here in my package" so no name mangling, no duplicated function names, easy use in a debugger, and an absolutely clear definition of operator overloading intention. Would need some standard scheme for method interface in terms of pointers and non-pointers, lhs/rhs for "=", etc.

I would do any amount of work to help this happen. Not sure if two packages could overlap functions. Maybe the arguments that are not types defined in your package would have to come from the standard collection of int, float, bool, etc. so that user packages would never both match. Downside would be adding two different things, like a Point and a DirectedRay, but that might make one bundle them in a package.

PROPOSAL 11: Computable booleans
The other day I wanted to add one to something if a condition was true. The something was complicated. What would have been great would have been:

x := 3*y + 14*z + (j == 2)

where I knew that the last summand was either 0 or 1 depending on false or true, respectively. (Actually it was "table = table[0:index + (lo == 0)]", where I wanted to slice off the final element unless lo == 0.) This is also nice in products where a correction is not needed for an index of zero. Knuth used this concept in his Concrete Mathematics to great avail. If we accepted proposal 2, then this would be a simple "widening" of an implicit uint1. ;-) We only have to convert t/f to 1/0 when it is used this way. At all other times it remains opaque.

PROPOSAL 12: Rename complex number sizes
I mentioned in proposal 10 that I've implemented several mathematical structures that are tuples of numbers of various types. Examples are complex (2 floats), quaternion (4 floats), and octonion (8 floats) with associated arithmetic. There are two important aspects to these tuple types. One is the cardinality (1, 2, 4, 8) and one is the element type, as in Go's built-in float32 and float64 and now my Float128 and Float256. The element type tells the user about precision, dynamic range, resolution, and the like. From this experience, I object to Go's complex64 and complex128 types because they conflate the two important independent variables. I guess knowing the overall number of bits is handy, but not nearly as handy as knowing "float64 means 53-bit mantissa." Look at my table from the end of Proposal 2:

float64 = [u]int{32,16,8}, complex128 = [u]int{32,16,8}
float32 = [u]int{16,8}, complex64 = [u]int{16,8} 

We care about element assignability here, which is based on element size. The 64/128 and 32/64 are jarring. It only gets worse as the cardinality and precision increase:

element type: structure names
float32: complex64, Quaternion128, Octonion256
float64: complex128, Quaternion256, Octonion512
Float128: complex256, Quaternion512, Octonion1024
Float256: complex512, Quaternion1024, Octonion2048

Worse, imagine a 4x4 transformation/projection/windowing matrix such as in OpenGL:

float32: Transform512
float64: Transform1024
float128: Transform2048
float256: Transform4096

Ugh! And I have to do this too because I need to follow the plan complex establishes. So to be clear, my request is that a complex number (2-tuple) made of 32-bit floats should become complex32, and for 64-bit floats, complex64. The user interested in aggregate data type size in bits can simply multiply "complex means 2" by "64-bit element size" for "128 bits in total", and the same for octonion, transform, or whatever. Imagine a 512-point complex FFT with a name like FFT65536!

Rob 'Commander' Pike

Jul 13, 2011, 9:32:09 AM
to Michael Jones, golang-nuts

On 13/07/2011, at 5:25 PM, Michael Jones wrote:
>
> PROPOSAL 1: Implicit concatenation
> Combining static strings: merge adjacent strings without "compile time '+' operator"
> v := "aaa" "bbb" means v := "aaabbb", where space between strings may include newline
>
> This can be implemented in the lexer (just saw quoted string, peek ahead to quoted string … wait a minute, that's one big string!)

That used to be in the language but was dropped when the semicolon insertion rules went in. You need the + to be able to span lines, and if you can't span lines operatorless concatenation is close to pointless.

> PROPOSAL 2: Implicit widening within types for assignments
> Experiencing frustrations with strict type equivalence. Appreciate clarity of meaning, simplicity of coercion-free language design, and avoidance of "invisible" type/coercion/operator bugs. Despite these, want at least a single liberty here, but prefer a series of slightly expanded versions as well:

...

The proposed rules are too specialized. If such rules don't work for int, they're not worth having. In any case the clarity of Go's strictness is worth the occasional conversion. A huge class of bugs is simply gone, and a huge piece of tricky language in the specification never needed to be written.

> PROPOSAL 3: Bit slices

I bet very few people would use them. They wouldn't carry their weight.

> PROPOSAL 4: First-class radix 2 constants and formatted output
> The fmt package is fantastic. The "%T" format is clever. Also, the "%#o" and "%#x" print numbers in a designated base that can then be read back in as programs or data ("032" and "0x3e" respectively.) We can also print numbers in binary, but there is no "%#b" to print them with a base specifier ("0b1101") and no support in the Go spec for binary literals. There can be times when this is the clearest way to specify a number and given the role of binary in computing, seems not too much to ask of compiler and library.

Really? Base 2 constants just don't come up much. If anything, a case could be made for ditching octal and going to a more general form like 8r15 (r for radix), but it's not worth the turmoil to switch. We went over this some time ago and decided to leave it as is, familiar and sufficient for almost everything.

> PROPOSAL 5: Operand-derived "integer size" for result.

This is another one where you'd lose too much clarity in expression types. I think Nigel's right that you're using the language for a specialized purpose and want more help, but anyone using any language for a specialized purpose wants their own version of specialization.

> PROPOSAL 6: multiple valued built-in operations
> The multiple return value logic is sublime. I have used it to greatly improve code clarity vs. passing secondary return placeholders by pointer in C. But, there is no flexibility about ignoring secondary result values returned from function calls ("tuple assignment" in the sense of silently ignoring rather than using "_") Setting this aside for a moment, it would be an interesting subtlety to have tuple valued versions of basic arithmetic that work this way:
>
> var a, b int
> sum, carry := a+b
> difference, borrow := a-b
> product, highProduct := a*b
> quotient, remainder := a/b // aka, divMod

This has come up before too, and again although it's cute it doesn't seem worthwhile. Another instance of domain-specific specialization. Although arguably closer to the spirit of the language, I suspect you'd see these uses very rarely.

> PROPOSAL 7: integer exponentiation operator in Go

Ditto. Other than base 2, which we have, I can't think of a single time I've wanted integer exponentiation.

> PROPOSAL 8: Optional UTF-8 source symbols

This has come up several times and has been rejected as leading to too much cuteness, visual ambiguity, and worries about presentation. I still see mangled UTF-8 daily.

> PROPOSAL 9: Swap (actually, an observation more than a suggestion)
> Since you have ":=" I wanted to remind/observe that Dijkstra had used ":=:" in writings to indicate an interchange. Your "a,b = b,a" is his "a :=: b". Just saying, in case it appeals to you. It would work with tuples naturally, "a,b,c :=: x,y,z" and bit slices:
>
> m[0:8] :=: m[8:16] // swap bytes: short, terse, clear.

Cute but unnecessary and also a little misleading, since it has nothing to do with :=.

> PROPOSAL 10: Operator overloading.

A long design discussion about this general topic is on hold until and if the generic types question is resolved.

> PROPOSAL 11: Computable booleans

Back to C's idea of a boolean. Easy to work around with a function or map in the rare cases an if feels wrong.

> PROPOSAL 12: Rename complex number sizes

I fought hard for this at the time but was voted down because these names are well known from existing languages. I'm still not sure it was the right decision but I don't want to revisit it. A decision was made.

Sorry to be so negative. It's interesting how many of your suggestions have been thought through before.

-rob


David Roundy

Jul 13, 2011, 10:00:08 AM
to Rob 'Commander' Pike, Michael Jones, golang-nuts
On Wed, Jul 13, 2011 at 2:32 PM, Rob 'Commander' Pike <r...@google.com> wrote:
>> PROPOSAL 7: integer exponentiation operator in Go
>
> Ditto. Other than base 2, which we have, I can't think of a single time I've wanted integer exponentiation.

I understand that scientific programming is a niche market, and not one that is really targeted by the Go language, but it's a pretty large niche, and I think it'd be worth considering the addition of an exponentiation operator. You may not need it very often, but when you need it, it *really* helps to have it. It certainly seems as generally useful as the bitwise operators that Go already has. I'd hate to see them removed from the language, but I also haven't yet needed them in Go.
--
David Roundy

Ian Lance Taylor

Jul 13, 2011, 10:35:33 AM
to David Roundy, Rob 'Commander' Pike, Michael Jones, golang-nuts
David Roundy <rou...@physics.oregonstate.edu> writes:

It's easy enough to write a function to do integer exponentiation.

The only advantage I see to adding it to the language is so that the
compiler can optimize exponentiation when the power (right) operand is a
constant. And that does not seem to me to be a compelling advantage.
It's easy enough to optimize those cases by hand in the very few cases
where they matter. Or just wait until the compiler can inline simple
functions, and then you'll get the same optimization anyhow.

Further, it would be unreasonable to add the exponentiation operator
only for integers. So then we need to ask what about the relationship
of the type of the two operands. Exponentiation is like the shift
operator in that there is no necessary relationship between the two
operands. It is unlike shift in that a floating point operand on the
right makes sense. Adding the operator but only permitting integer
powers would lead directly to a request for supporting floating point
powers. Supporting floating point powers has no compilation advantage
at all--it would be compiled directly into a call to math.Pow, only we
would have to have a different version of math.Pow for each integer,
floating, and complex type.

So I don't see this as being all that simple. We would be introducing
considerable complexity in order to save a function call and some
compiler optimization.

Ian

Michael Jones

Jul 13, 2011, 10:43:39 AM
to Rob 'Commander' Pike, golang-nuts
Seems I made a mistake by grounding my examples in a common framework as both responses are influenced by that. I was trying to be specific rather than vague. As it is, the important central code in big/nat is already in assembler. Not looking for Go to replace that. Just wondering which of the expensive habits could be trimmed.

On Wed, Jul 13, 2011 at 6:32 AM, Rob 'Commander' Pike <r...@google.com> wrote:

On 13/07/2011, at 5:25 PM, Michael Jones wrote:
>
> PROPOSAL 1: Implicit concatenation
> Combining static strings: merge adjacent strings without "compile time '+' operator"
>       v := "aaa" "bbb" means v := "aaabbb", where space between strings may include newline
>
> This can be implemented in the lexer (just saw quoted string, peek ahead to quoted string … wait a minute, that's one big string!)

> That used to be in the language but was dropped when the semicolon insertion rules went in. You need the + to be able to span lines, and if you can't span lines operatorless concatenation is close to pointless.

> PROPOSAL 2: Implicit widening within types for assignments
> Experiencing frustrations with strict type equivalence. Appreciate clarity of meaning, simplicity of coercion-free language design, and avoidance of "invisible" type/coercion/operator bugs. Despite these, want at least a single liberty here, but prefer a series of slightly expanded versions as well:
...

> The proposed rules are too specialized. If such rules don't work for int, they're not worth having. In any case the clarity of Go's strictness is worth the occasional conversion. A huge class of bugs is simply gone, and a huge piece of tricky language in the specification never needed to be written.

Hmm... What is tricky about assignments? The huge morass you fear concerns coercion, as in a signed short char times some other value of a larger size, and deciding how to do the sign extensions and the widths in which to operate. Everything ugly is in that, and I celebrate Go's avoidance of it. You may be throwing the baby out with the bathwater, though. I describe widening in pure assignment. The spec for that is two sentences:

Assignment between different sizes of the same conceptual type (uint, int, float, complex) is allowed only when universally value preserving, that is, when widening the source to match the equal or greater width of the destination. In the case of the ambiguously-sized int and float, the universally value preserving representation is the largest such size.

the complete list would be:

    int8 = int8
    int16 = int8 or int16
    int32 = any of int8, int16, or int32
    int64 = any of int8, int16, int32, int, or int64

    uint8 = uint8 or byte
    uint16 = uint8 or uint16
    uint32 = any of uint8, uint16, or uint32
    uint64 = any of uint8, uint16, uint32, uint, or uint64

    float32 = float32
    float64 = float32 or float64

    complex64 = complex64
    complex128 = complex64 or complex128

and the CPU opcode is simply zero extend or sign extend for integer moves, and FpConvertToDouble in the floating point case.
 
> PROPOSAL 3: Bit slices

> I bet very few people would use them. They wouldn't carry their weight.

> PROPOSAL 4: First-class radix 2 constants and formatted output
> The fmt package is fantastic. The "%T" format is clever. Also, the "%#o" and "%#x" print numbers in a designated base that can then be read back in as programs or data ("032" and "0x3e" respectively.) We can also print numbers in binary, but there is no "%#b" to print them with a base specifier ("0b1101") and no support in the Go spec for binary literals. There can be times when this is the clearest way to specify a number and given the role of binary in computing, seems not too much to ask of compiler and library.

> Really? Base 2 constants just don't come up much. If anything, a case could be made for ditching octal and going to a more general form like 8r15 (r for radix), but it's not worth the turmoil to switch. We went over this some time ago and decided to leave it as is, familiar and sufficient for almost everything.

BTW, the <radix in base 10> 'r' <digits in stated radix> is precisely how my C++ math software handles non-decimal values. It is simple, parses easily, and allows any base to be clearly communicated without special codes and escape sequences.

> PROPOSAL 5: Operand-derived "integer size" for result.

> This is another one where you'd lose too much clarity in expression types. I think Nigel's right that you're using the language for a specialized purpose and want more help, but anyone using any language for a specialized purpose wants their own version of specialization.

Maybe so. What actually motivated me was the assembly output. Not that it was lightly optimized (compilers improve), but that the code generator believed I wanted a more expensive division than I needed, because the strict type equality rules forced me to make everything bigger than necessary.
 
> PROPOSAL 6: multiple valued built-in operations
> The multiple return value logic is sublime. I have used it to greatly improve code clarity vs. passing secondary return placeholders by pointer in C. But, there is no flexibility about ignoring secondary result values returned from function calls ("tuple assignment" in the sense of silently ignoring rather than using "_") Setting this aside for a moment, it would be an interesting subtlety to have tuple valued versions of basic arithmetic that work this way:
>
>       var a, b int
>       sum, carry := a+b
>       difference, borrow := a-b
>       product, highProduct := a*b
>       quotient, remainder := a/b  // aka, divMod

This has come up before too, and again although it's cute it doesn't seem worthwhile. Another instance of domain-specific specialization.  Although arguably closer to the spirit of the language, I suspect you'd see these uses very rarely.

Sorry to hear this. It is how computers work. Seems wonderful to imagine a language that expresses such facts clearly to the programmer. For example, would be a great way to showcase algorithms in textbooks. So far only assembler does this but the idea is basic and pervasive.
 
> PROPOSAL 7: integer exponentiation operator in Go

Ditto. Other than base 2, which we have, I can't think of a single time I've wanted integer exponentiation.

I want it all the time in Go's constant construction because 2**n is mathematics and 1<<n is C. Even more when I want 3**n or 10**n.
 
> PROPOSAL 8: Optional UTF-8 source symbols

This has come up several times and has been rejected as leading to too much cuteness, visual ambiguity, and worries about presentation. I still see mangled UTF-8 daily.

> PROPOSAL 9: Swap (actually, an observation more than a suggestion)
> Since you have ":=" I wanted to remind/observe that Dijkstra had used ":=:" in writings to indicate an interchange. Your "a,b = b,a" is his "a :=: b". Just saying, in case it appeals to you. It would work with tuples naturally, "a,b,c :=: x,y,z" and bit slices:
>
>     m[0:8] :=: m[8:16]        // swap bytes: short, terse, clear.

Cute but unnecessary and also a little misleading, since it has nothing to do with :=.

True, but then Wirth and Dijkstra beat you to ":=" by 30 years so it was not misleading when ":=:" was a symmetric version of ":="
 
> PROPOSAL 10: Operator overloading.

A long design discussion about this general topic is on hold until and if the generic types question is resolved.

Well, let's solve the generic types question then, so that we can have the long discussion about this general topic. Making package-extended math look like math in a programming language feels important.

> PROPOSAL 11: Computable booleans

Back to C's idea of a boolean. Easy to work around with a function or map in the rare cases an if feels wrong.

> PROPOSAL 12: Rename complex number sizes

I fought hard for this at the time but was voted down because these names are well known from existing languages. I'm still not sure it was the right decision but I don't want to revisit it. A decision was made.

Sorry that this argument was lost. Natural intuitions about element capabilities and equivalences will never be clear to the programmer the way it is now. Sigh. Octonion2048 here I come.
 
Sorry to be so negative. It's interesting how many of your suggestions have been thought through before.

-rob

0 out of 12 isn't so negative. Just think, it could have been 0 out of 100 good ideas. ;-)

Of all of these, what I like best (will miss the most) are the tuple result versions of arithmetic, tuple types with element size names, and same conceptual type widening in assignment. Operator overloading seems to be not dead yet, and feels important too. Please keep the idea of using an explicit (implements "+") notation rather than mangling and signatures.

Off to an event,
Michael

--

Michael T. Jones

   Chief Technology Advocate, Google Inc.

   1600 Amphitheatre Parkway, Mountain View, California 94043

   Email: m...@google.com  Mobile: 650-335-5765  Fax: 650-649-1938

   Organizing the world's information to make it universally accessible and useful


Michael Jones

Jul 13, 2011, 10:48:42 AM
to Ian Lance Taylor, David Roundy, Rob 'Commander' Pike, golang-nuts
absolutely agree that the best motivation is when the power is a constant integer as in my example. am not sure that 100% of programmers could do x**15 in five multiplies. am sure that they could type "x**15"

David Roundy

Jul 13, 2011, 10:59:13 AM
to Michael Jones, Ian Lance Taylor, Rob 'Commander' Pike, golang-nuts
I guess a reasonable solution (since I was also thinking in terms of
performance) would be to add an integer-power function to the standard
libraries, and then once inlining is implemented we'll have our
optimized integer power. Any suggestions as to what it should be
called? I presume it would go in the math package.

David

--
David Roundy

Michael Jones

Jul 13, 2011, 11:29:39 AM
to David Roundy, Ian Lance Taylor, Rob 'Commander' Pike, golang-nuts
Is there an integer function version of "math" with things like max, min, bit length, population count, power, gcd, egcd, multiplicative inverse, etc.? I have these in my own C code of course, and some are in src/pkg/big/nat.go as internal functions on big integers, but we could make a polished package.

Note that the typical "integer power by squaring and bit testing" function is not optimal in cases like x**15. Here is my C++ code:

inline int iPow(int a, int b)
{
    int p = 1;
    while (b)
    {
        if (b & 1) p *= a;
        b >>= 1;
        a *= a;
    }
    return p;
}

inline int iPowMod(int a, int b, int m)
{
    a = a % m;
    int p = 1 % m;
    while (b)
    {
        if (b & 1) p = (p*a) % m;
        b >>= 1;
        a = (a*a) % m;
    }
    return p;
}

Don't have an optimal binary power snippet to send you now. In a meeting in NYC.
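For completeness, here is a direct Go translation of the iPow routine above (the same square-and-multiply algorithm, so it shares the same non-optimality for exponents like 15):

```go
package main

import "fmt"

// iPow computes a**b for b >= 0 by binary exponentiation
// (square-and-multiply), mirroring the C++ snippet above.
func iPow(a, b int) int {
	p := 1
	for b > 0 {
		if b&1 == 1 {
			p *= a
		}
		b >>= 1
		a *= a
	}
	return p
}

func main() {
	fmt.Println(iPow(3, 7))  // 2187
	fmt.Println(iPow(2, 10)) // 1024
}
```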

jimmy frasche

Jul 13, 2011, 12:09:08 PM
to Michael Jones, David Roundy, Ian Lance Taylor, Rob 'Commander' Pike, golang-nuts
On proposal 11 (computable bools), what's wrong with allowing an
explicit conversion, like int(a==b)?

Russ Cox

Jul 13, 2011, 1:25:38 PM
to Michael Jones, golang-nuts
Hi Michael,

Thanks for your thoughts. Almost all of them have come up before,
but we don't yet have an established place to look for design rationales
and removed or declined features. We have been meaning to make one.

In general, we are trying to keep Go small, easy to know in full.
Many of your proposals are things that have already been considered
or done but were deemed not to carry their weight. It is all too easy
to build a language of many individually useful features that ends up
being an unreasonable amount to learn, with the effect that the features
go unused. (Witness C++, where everyone knows and programs in
a different subset.)

Another important point is that what makes sense in a small program
may not make sense in a large one. All of your proposals make
sense in a 1000-line or even 10,000-line program written by a single person
or small group, where everyone involved can keep the whole thing in
his or her head. Haskell and ML seem to me examples of great languages
that target that spot, whether intentionally or not. Python is another
language that targets it.

Go, in contrast, targets much much larger programs, written by many
people, where no one involved has the whole thing in his or her head.
In that situation, it becomes more important to keep the language small,
so that everyone will know all the syntax they see, and to keep the rules
simple, so that everyone can predict what code does and the compiler
can catch mistakes.

Russ

Russ Cox

Jul 13, 2011, 1:50:17 PM
to Michael Jones, Rob 'Commander' Pike, golang-nuts
> PROPOSAL 2

> Hmm... What is tricky about assignments?

After type T int32, you can't even pass a T where an int32
is expected. Our experience has been that the clarity here is
worthwhile and catches bugs. Being able to pass it where an
int64 is expected would be truly surprising.

> PROPOSAL 5


> Maybe so. What actually motivated me was the assembly output. Not that it
> was lightly optimized, because compilers improve, but because the code
> generator understood that I wanted to do a more expensive division than I
> needed and that was because I was forced to make everything bigger than
> necessary because of the strict type equality rules.

I think the answer here is "compilers improve" or "Go lets you
be clear about which division you want" or both. Note that you
can even ask for a 8-bit division if you really want one, something
that is impossible in C because of the cleverness of the usual
arithmetic conversions.

> PROPOSAL 7


> I want it all the time in Go's constant construction because 2**n is
> mathematics and 1<<n is C. Even more when I want 3**n or 10**n.

You justified PROPOSAL 6 by saying it is how computers work.
But this is not how computers work. x<<y is a hardware instruction.
3**n is not. x**y even less so. It seems strange to single out one
such algorithm for implementation by the compiler.

> PROPOSAL 9


> True, but then Wirth and Dijkstra beat you to ":=" by 30 years so it was not
> misleading when ":=:" was a symmetric version of ":="

Saying that Dijkstra "beat" Go to ":=" makes it sound like
the notations mean the same thing, but they don't.
Dijkstra's ":=" is more like Go's "=".

> PROPOSAL 12


>
> Sorry that this argument was lost. Natural intuitions about element
> capabilities and equivalences will never be clear to the programmer the way
> it is now. Sigh. Octonion2048 here I come.

It will be clear to Fortran, Mathematica, and numpy programmers.

The intuitions don't go as far as you claim.
A float32 can't accurately hold an int32.

Russ

Robert Griesemer

Jul 13, 2011, 2:07:51 PM
to Michael Jones, golang-nuts
A few additional comments, most has been said already.

Proposal 1: Even if string concatenation would work (and it doesn't
with the semicolon insertion rules in place), it's an extra rule in
the spec for something that can be done with + (and w/o breaking a leg
in the process). It's in the spirit of Go to leave away redundant
features.

Proposal 3: Translating individual bit slice operations into machine
code generates quite a bit of code _in general_, and even a very good
compiler may not be able to optimize it as well as one can do with
explicit shifts and masks, especially if it's more than one bit slice
operation in an expression. If it's not performance critical but bit
slices are frequent, it's easy to write some functions (and eventually
they may even get inlined). As Russ pointed out, this may be useful
for certain kinds of code, but it's not carrying its weight in large
programs. Keeping the language small is more important.

Proposal 4: 0b1010 constants seem fine, but it's only a question of
time until somebody wants radix 3 or 4 or what have you. Hex values
are the mechanism to get to the bit representation; octal literals are
only there for historic reasons (os.Open). If anything, one would want
a general radix notation. I have proposed in the past:
<radix>x<mantissa>, where radix 0 has the special meaning 16 to be
compatible with the time-honed C notation. Then one could write:
2x1010 (for 0b1010), but also 3x101 (base 3), 10 (or 10x10, base 10),
0xa (or 16xa, base 16); and one could get rid of the octal notation in
turn. One argument against this (and other general radix notations) is
that the octal prefix 0 would not be special anymore (so 010 means 10
not 8), which might become a source of subtle bugs when quickly
porting existing C code.

Proposal 6: Multi-valued operators have been proposed in the past, and
I think it would be very nice to have (and even natural), but
unfortunately it breaks down for division: For a/b one actually wants
an a that is twice the size of b. One could say that in the form q, r
:= a/b, a is of a type with twice the size as b, but it gets
complicated (q may be twice the size, too).

- gri

Michael Jones

Jul 13, 2011, 2:09:45 PM
to r...@golang.org, Rob 'Commander' Pike, golang-nuts
Thanks!

On Wed, Jul 13, 2011 at 10:50 AM, Russ Cox <r...@golang.org> wrote:
:

> PROPOSAL 5
> Maybe so. What actually motivated me was the assembly output. Not that it
> was lightly optimized, because compilers improve, but because the code
> generator understood that I wanted to do a more expensive division than I
> needed and that was because I was forced to make everything bigger than
> necessary because of the strict type equality rules.

I think the answer here is "compilers improve" or "Go lets you
be clear about which division you want" or both.  Note that you
can even ask for a 8-bit division if you really want one, something
that is impossible in C because of the cleverness of the usual
arithmetic conversions.

Actually this relates to my motivation for making the list.

    :
    var a uint32 = 99999
    var b uint8 = 3
    c := a/b
    :

6g test.go
test.go:24: invalid operation: a / b (mismatched types uint32 and uint8)

The language prevents me from saying "I want to divide a 32-bit uint by an 8-bit uint" because I must code it as a/uint32(b) so it always sees 32-bit/32-bit or in my case 64-bit/64-bit when the divisor was 10.


> PROPOSAL 7
> I want it all the time in Go's constant construction because 2**n is
> mathematics and 1<<n is C. Even more when I want 3**n or 10**n.

You justified PROPOSAL 6 by saying it is how computers work.
But this is not how computers work.  x<<y is a hardware instruction.
3**n is not.  x**y even less so.  It seems strange to single out one
such algorithm for implementation by the compiler.

Indeed. 3**7 or 3**n is how arithmetical notation works rather than how computers work. The first (#6) was to access the hard work that the CPU had done and that I needed. The second is to make life convenient for the programmer, who can always turn to their calculator or Mathematica for 3**7 or the like.
 
> PROPOSAL 9
> True, but then Wirth and Dijkstra beat you to ":=" by 30 years so it was not
> misleading when ":=:" was a symmetric version of ":="

Saying that Dijkstra "beat" Go to ":=" makes it sound like
the notations mean the same thing, but they don't.
Dijkstra's ":=" is more like Go's "=".

> PROPOSAL 12
>
> Sorry that this argument was lost. Natural intuitions about element
> capabilities and equivalences will never be clear to the programmer the way
> it is now. Sigh. Octonion2048 here I come.

It will be clear to Fortran, Mathematica, and numpy programmers.

The intuitions don't go as far as you claim.
A float32 can't accurately hold an int32.

Certainly
 

Russ

Ian Lance Taylor

Jul 13, 2011, 2:24:46 PM
to Michael Jones, r...@golang.org, Rob 'Commander' Pike, golang-nuts
Michael Jones <m...@google.com> writes:

> Actually this relates to my motivation for making the list.
>
> :
> var a uint32 = 99999
> var b uint8 = 3
> c := a/b
> :
>
> 6g test.go
> test.go:24: invalid operation: a / b (mismatched types uint32 and uint8)
>
> The language prevents me from saying "I want to divide a 32-bit uint by an
> 8-bit uint" because I must code it as a/uint32(b) so it always sees
> 32-bit/32-bit or in my case 64-bit/64-bit when the divisor was 10.

The language prevents you from saying it explicitly, but nothing
prevents the compiler from optimizing a/uint32(b) into a 32/8 division
instruction, if such an instruction exists.

Ian

Michael Jones

Jul 13, 2011, 2:46:30 PM
to Ian Lance Taylor, r...@golang.org, Rob 'Commander' Pike, golang-nuts
True. Had not thought of the compiler having that freedom. But it is a fine answer. (And, for those who use x86 CPUs, the instructions are in fact just like this: 64/32, 32/16, and 16/8)

Bobby Powers

Jul 13, 2011, 6:29:57 PM
to Ian Lance Taylor, David Roundy, Rob 'Commander' Pike, Michael Jones, golang-nuts
On Wed, Jul 13, 2011 at 7:35 AM, Ian Lance Taylor <ia...@google.com> wrote:
> David Roundy <rou...@physics.oregonstate.edu> writes:
>
>> On Wed, Jul 13, 2011 at 2:32 PM, Rob 'Commander' Pike <r...@google.com> wrote:
>>>> PROPOSAL 7: integer exponentiation operator in Go
>>>
>>> Ditto. Other than base 2, which we have, I can't think of a single time I've wanted integer exponentiation.
>>
>> I understand that scientific programming is a niche market, and not
>> one that is really targeted by the go language, but it's a pretty
>> large niche, and I think it'd be worth considering the addition of an
>> exponentiation operator.  You may not need it very often, but when you
>> need it, it *really* helps to have it.  It certainly seems as
>> generally useful as the bitwise operators that go already has.  I'd
>> hate to see them removed from the language, but I also haven't yet
>> needed them in go.
>
> It's easy enough to write a function to do integer exponentiation.
>
> The only advantage I see to adding it to the language is so that the
> compiler can optimize exponentiation when the power (right) operand is a
> constant.  And that does not seem to me to be a compelling advantage.
> It's easy enough to optimize those cases by hand in the very few cases
> where they matter.  Or just wait until the compiler can inline simple
> functions, and then you'll get the same optimization anyhow.

The advantage to adding it to the language that I see is that
a*x**3 + b*x**2
is clearer than
a*math.Pow(x, 3) + b*math.Pow(x, 2)

As Michael initially said, "the code above is what compilation is all
about: make the expression clear in source and do the good translation
automatically."

yours,
Bobby

Robert Johnstone

Jul 13, 2011, 9:42:25 PM
to golang-nuts
Hello Everyone,

I'd like to throw some support behind proposal 2, the automatic
widening of types. I understand that there is a resistance to
automatic type conversion (which I generally support), but there is
something kind of ridiculous in the following code:

a := int32(1)
b := int8(1)
c := a + int32(b)

In 99.99% of cases, the 'b' is going to be expanded, and the explicit
cast does not help with the clarity at all. It is simply mundane book-
keeping in a case where the compiler should be able to easily see what
is the correct action. The insistence on the cast appears to me to be
more pedantic than helpful. The explicit cast is just noise.

To those who argue that the above is not a good idea because of
possible type aliases (i.e. type MyInt int), this is irrelevant. (1)
It is an oddity to argue against a set of automatic casts while
simultaneously arguing that int and MyInt should be interchangeable.
(2) The whole purpose of creating the type like MyInt is to create a
type that does *not* behave like an int. Otherwise, why create a new
type? (3) Go already has a clear distinction between built-in and
user defined types, so I'm not sure why others are arguing that the
compiler treat them exactly the same.

This change would likely simplify significant amounts of code, and would
not sacrifice correctness. The only downside I see is some additional
complexity in the compiler.

Robert

Rob 'Commander' Pike

Jul 13, 2011, 9:47:28 PM
to Robert Johnstone, golang-nuts

On 14/07/2011, at 11:42 AM, Robert Johnstone wrote:

> Hello Everyone,
>
> I'd like to throw some support behind proposal 2, the automatic
> widening of types. I understand that there is a resistance to
> automatic type conversion (which I generally support), but there is
> something kind of ridiculous in the following code:
>
> a := int32(1)
> b := int8(1)
> c := a + int32(b)

It may look ridiculous in this case (I would say it looks clear, but let's take your adjective) but it's not nearly so risible when those assignments are hidden inside function invocations, behind interfaces, inside assignments of composite types, and so on. The rule is the rule because it makes the tricky stuff not tricky any more; the cost is that some simple examples get a little noisier. That's a price worth paying.

-rob

Robert W. Johnstone

Jul 13, 2011, 10:14:09 PM
to Rob 'Commander' Pike, golang-nuts
Hello,

I understand your concern, but I'm having trouble imagining a case
where the resulting code would be incorrect. I would think that a
situation where Go can make programmers' jobs easier without
sacrificing correctness would be a win. I suppose this is what I
meant by pedantic - emphasizing a detail that (in most cases) doesn't
help the programmer evaluate the correctness of the code.

Also, I'm not certain that I follow your concern about the assignments
being hidden. You either have access to the function body (or
interface), and so the behaviour is visible. Otherwise, that piece of
code is a black box, so the details of the cast are hidden whether
explicit or not.

Robert

P.S. In case my original email came across as overly critical, I like
Go very much. I've switched to using it for most of my personal
projects. Thank-you.

Russ Cox

Jul 13, 2011, 11:41:55 PM
to Robert Johnstone, golang-nuts
On Wed, Jul 13, 2011 at 18:42, Robert Johnstone <r.w.jo...@gmail.com> wrote:
> I'd like to throw some support behind proposal 2, the automatic
> widening of types.

Please see my response to proposal 2 from earlier in this thread,
reproduced below. Inserting automatic widening means giving
up the "named types are not the same" rule, which makes it
pretty much a non-starter.

Russ


> PROPOSAL 2
> Hmm... What is tricky about assignments?

After type T int32, you can't even pass a T where an int32
is expected. Our experience has been that the clarity here is
worthwhile and catches bugs. Being able to pass it where an
int64 is expected would be truly surprising.


Michael Jones

Jul 14, 2011, 1:23:54 AM
to r...@golang.org, Robert Johnstone, golang-nuts
Disclaimer: I am not arguing about this, just curious to reconcile what I understand with what Russ is saying.

Numeric types are either the type families (uintN, intN, floatN, or complexN),
or they are a renamed version of these (type myIntegerType int16).

The compiler, at some point, surely has two facts for each variable:
a. The logical type name or identifier ("float32", "int16", or "myIntegerType")
b. The physical numeric type or identifier ("float32", "int16", and "int16", respectively)

When confronted with a "validate or emit an assignment" task, presumably the compiler does something like:

// validate "a = b"
if a.logicalTypeName == b.logicalTypeName {
    emit(...) // move b to a, both of type a.physicalTypeName
    return OK
}
bail("type mismatch, you lose")

That is now. What we've been contemplating is:

// validate a = b
if a.logicalTypeName == b.logicalTypeName {
    // good citizens -- they cast everything nicely
    emit(...) // move b to a, both of type a.physicalTypeName
    return OK
}
if (a is built-in type && b is built-in type) && // i.e. not MyIntegerType
    (a.familyName == b.familyName) && // i.e., both int or both float
    (a.N > b.N) { // a is a wider version of b
    rewrite b as cast "a.logicalTypeName(b)" // i.e., "ui32 := ui16" => "ui32 := uint32(ui16)"
    // or just do it: emit(sign- or zero-extend b); emit(mov b to a) or fp/complex equivalent
    return OK // enjoy satisfaction of another trivial cast saved
}
bail("type mismatch, you lose")

Am struggling to imagine the harm that you and Rob and others see and feel so vividly. Not disputing, just eager to share your insights.

Curious in NYC,
Michael

Rob 'Commander' Pike

Jul 14, 2011, 2:17:26 AM
to Michael Jones, r...@golang.org, Robert Johnstone, golang-nuts
Every feature request is reasonable in isolation, when reduced to the simplest case. In reality, though, it's never that simple. In language design, perhaps even more so than in other areas, what matters is interoperation in all the other cases.

What you write as
a = b
characterizes this as one case in a compiler: the assignment statement. But there are many more:
a += b
f(b)
T{b}
interfaceVal = b
and so on. And they're not all in code generation either, particularly in Go, which does lots of interesting things in libraries such as JSON, XML, gobs, .... Each one must be thought about, reasoned about, maybe argued about, and then implemented. And that leaves aside the profound concern about how the change affects other parts of the language, parts we often don't realize are affected until implementation of some feature is begun.

Is this feature (any feature) worth the cost of fixing all those places, testing them, documenting them, making them all consistent? On multiple platforms with multiple compilers and interpreters? Worth the cost of maintenance as things develop?

Is it worth changing the simple arithmetic compatibility rule today (exact match or error) into something more subtle? Is it worth taking a step away from exactitude towards more general assignment compatibility rules? What comes next? If we added this feature, it would be only one more step from there to some new proposal that again seems reasonable in isolation, when reduced to the simplest case.

In short, your characterization of what's required misses much of the deep stuff that factors in. Without paying any attention to the specific proposal, the question is always, "does the benefit of the feature outweigh the cost of implementation, reliability, documentation, and maintenance?" And what precedent does it set? When the topic is arithmetic on basic types, the bar must be set very high.

Now, apply this general discussion to all the different feature requests we get and the complexities multiply. A feature can't just be reasonable, it needs to do more than carry its weight. It needs to make things much better than the cost of implementing and maintaining it, factoring in how frequently it will be used and how it interacts with the rest of the language.

In a language in which mixed-radix arithmetic was the norm, the rule rather than the exception, things might be different. For you it may well be the norm, and for that I apologize for the extra typing you must do. But for most Go programmers it's probably a minor annoyance at most, and always clear in meaning and result.

So the "harm" (your word, not mine) is primarily philosophical. Even if this feature had perfect technical merits on its own, I'd be very reluctant to let it in because of the knock-on effects. One of the many things in Go that I think we got absolutely right was strictness on arithmetic type compatibility and I see no reason to change it. Perhaps I write very different code from you, but the freedom of eliding the occasional cast simply isn't worth the loss of clarity or the precedent that modifying these rules would set.

In short, each feature is reasonable on its own, but that is not sufficient to make it a good idea.

-rob

Nigel Tao

Jul 14, 2011, 2:19:46 AM
to Michael Jones, golang-nuts
On 14 July 2011 15:23, Michael Jones <m...@google.com> wrote:
> Am struggling to imagine the harm that you and Rob and others see and feel
> so vividly. Not disputing, just eager to share your insights.

I imagine a scenario like this... Suppose you have

var (
a uint32
x, y uint8
)

then these two things mean different things:

a = uint32(x) + uint32(y)
a = uint32(x + y)

Which of those should "a = x + y" mean? Presumably the latter.
However, suppose that these were fields in distant types, rather than
local variables declared right next to each other:

foo.U = bar0.V + bar1.V

and originally, both the U and V fields are uint32. Now, a year after
that code was written, a different programmer notices that V values
are always < 256, and changes V's type from uint32 to uint8. In Go,
the assignment to foo.U then becomes a compile-time error. Under your
proposal, it is silently promoted, and means uint32(x + y) instead of
the correct refactoring uint32(x) + uint32(y). This could lead to
subtle bugs.

Sure, comprehensive tests could pick this up, but it's better to catch
this at compilation time. Static typing is there for a reason.
Explicit casts are a feature, not a bug.
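Nigel's scenario can be made concrete in a few lines; with uint8 operands the two readings really do produce different values:

```go
package main

import "fmt"

func main() {
	var x, y uint8 = 200, 100

	widenFirst := uint32(x) + uint32(y) // 300: widen each operand, then add in 32 bits
	addFirst := uint32(x + y)           // 44: the 8-bit add wraps (300 mod 256), then widens

	fmt.Println(widenFirst, addFirst)
}
```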

Ian Lance Taylor

Jul 14, 2011, 2:21:00 AM
to Michael Jones, r...@golang.org, Robert Johnstone, golang-nuts
Michael Jones <m...@google.com> writes:

> if (a is built in type and b is built in type) && // i.e. not MyIntegerType

Today there is nothing special about the builtin types (except that
untyped constants are converted to int/float64/complex128 if there is no
other type to convert them to). The builtin types are named types like
user defined named types, they just happen to be predefined.
(Admittedly they are predefined in a magic way.)

All language decisions are a tradeoff. Making the language more complex
is a drawback. Adding hidden type conversions, even ones that are safe,
is a drawback. These need a compensating benefit. Most Go code seems
to look OK even though type conversions are required to mix different
types. What is the benefit that significantly outweighs the drawbacks?
Why would much code ever use this feature?


> (if a.familyName == b.familyName) && // i.e., both int or both float

By the way, I assume that int and uint count as different families in
your proposal.

Ian

Michael Jones

Jul 14, 2011, 2:56:26 AM
to Ian Lance Taylor, r...@golang.org, Robert Johnstone, golang-nuts
Rob, Nigel, Ian:

Thank you! This is great. It's three personal views on the sense that guides you, just what I was looking for. My takeaway is that any change needs to fix something significant or add something necessary because even a (hypothetical) harmless refinement has n^2 mental complexity to verify as both harmless and a refinement, and the same-type widening proposal is neither immediately harmless to you nor a certain refinement. I can accept that and appreciate the education.

Michael

P.S. Nigel, the example of "a = x + y" is outside my imagined use case as I only postulated assignment. I did advocate the cases Rob lists, but in my mind your example is quite different. Three address CPUs are less prevalent these decades, so "a = x + y" really means:

compute x+y in a register
store result in variable 'a' (or send it on a channel, or call a function, or ...)

thus I see the "compute x+y" part as independent and unaddressed by the proposal. In this view, my issue is only about the "a = sum" part. If the sum happens to have type int8 or int16 and 'a' has type int32, I would advocate the sign-extended move to 'a' rather than have the compiler bail out. How the language views "compute x+y" is about Go-as-it-is. As far as I know, mixed arithmetic is forbidden by the specification, so the only valid Go meaning would be assign(widenIfAllowable(AddSameTypes(a,b))). That is, the proposal is only about value-preserving widening in assignment (by zero extension or sign extension, or SP=>DP FP).

I lack the context to know if the involutions Rob postulates are successfully addressed by this proposal, so I must yield to their spectre. The P.S. is just for clarity as I sit back down. ;-)

Jul 14, 2011, 5:42:01 AM
to golang-nuts
On Jul 14, 8:19 am, Nigel Tao <nigel...@golang.org> wrote:
> On 14 July 2011 15:23, Michael Jones <m...@google.com> wrote:
>
> > Am struggling to imagine the harm that you and Rob and others see and feel
> > so vividly. Not disputing, just eager to share your insights.
>
> I imagine a scenario like this... Suppose you have
>
> var (
>     a uint32
>     x, y uint8
> )
>
> then these two things mean different things:
>
> a = uint32(x) + uint32(y)
> a = uint32(x + y)
>
> Which of those should "a = x + y" mean? Presumably the latter.
> However, suppose that these were fields in distant types, rather than
> local variables declared right next to each other:
>
> foo.U = bar0.V + bar1.V
>
> and originally, both the U and V fields are uint32. Now, a year after
> that code was written, a different programmer notices that V values
> are always < 256, and changes V's type from uint32 to uint8. In Go,
> the assignment to foo.U then becomes a compile-time error. Under your
> proposal, it is silently promoted, and means uint32(x + y) instead of
> the correct refactoring uint32(x) + uint32(y). This could lead to
> subtle bugs.

Ambiguity is just what it is: an ambiguity. There is no such thing as
a "correct refactoring" that you could proclaim to be the general and
only one. You always need some other bits of information to determine
what the correct refactoring is in such a case.

Anyway, in my opinion, the only way out is to have an arbitrary-
precision "+" operator in the language from the very beginning, and to
rename the N-bit addition operators to something like "+[32-bit]". In
that case, if the code in your example does "bar0.V + bar1.V" it is
clear what it means after someone changes V's type from uint32 to
uint8: it means arbitrary-precision addition. The *meaning* is
unchanged.

If "+" in a programming language does "+[N-bit]" - as it does in Go -
then changing V's type from uint32 to uint8 changes the meaning of the
code. You always need to explicitly check whether the new meaning
corresponds to the intended meaning. Anybody claiming to have a
general rule for refactoring the code under these circumstances is
clearly mistaken. There is no such general rule.