Add comparisons to all types


will....@gmail.com

May 2, 2022, 10:43:53 PM
to golang-nuts
All types should have unrestricted comparisons (`==`, `!=`), but a few pre-declared types don't. Adding them would bridge a semantic gap between pre-declared and user-declared types, enable all types to be used as map keys, and make reasoning about types more consistent and intuitive.

For the types that don't yet have unrestricted comparisons:
  • Functions: Compare the corresponding memory addresses. The time complexity is constant.
  • Maps: Compare the corresponding `*runtime.hmap` (pointer) values. The time complexity is constant.
  • Slices: Compare the corresponding `runtime.slice` (non-pointer struct) values. The time complexity is constant.
Examples:

```
// Functions

func F1() {}
func F2() {}

var _ = F1 == F1 // True
var _ = F1 != F2 // True

// Maps

var M1 = map[int]int{}
var M2 = map[int]int{}

var _ = M1 == M1 // True
var _ = M1 != M2 // True

// Slices

var S1 = make([]int, 2)
var S2 = make([]int, 2)

var _ = S1 == S1 // True
var _ = S1 != S2 // True

var _ = S1 == S1[:] // True because the lengths, capacities, and pointers are equal
var _ = S1 != S1[:1] // True because the lengths aren't equal
var _ = S1[:1] != S1[:1:1] // True because the capacities aren't equal
var _ = S1 != append(S1, 0)[:2:2] // True because the pointers aren't equal
```

Function and map equality are consistent with channel equality, where non-nil channels are equal if they were created by the same call to `make`. Function values are equal if they were created by the same function literal or declaration. Map values are equal if they were created by the same map literal or the same call to `make`. Functions that are equal will always produce the same outputs and side effects given the same inputs and conditions; however, the reverse is not necessarily true. Maps that are equal will always contain the same keys and values; however, the reverse is not necessarily true.

Slice equality is consistent with map equality. Slice values are equal if they have the same array pointer, length, and capacity. Slices that are equal will always have equal corresponding elements. However, like maps, slices that have equal corresponding elements are not necessarily equal.

This approach to comparisons for functions, maps, and slices makes all values of those types immutable, and therefore usable as map keys.

This would obviate the `comparable` constraint, since all type arguments would now satisfy it. In my opinion, this would make the language simpler and more consistent. Type variables could be used with comparison operations without needing to be constrained by `comparable`.
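
For example, a generic helper like the following must be constrained by `comparable` today; under this proposal the constraint could simply be `any`. (The `contains` function here is just an illustrative sketch, not from the spec:)

```
package main

import "fmt"

// contains must constrain K by comparable today, because == is only
// defined for comparable type arguments. Under this proposal, `any`
// would suffice.
func contains[K comparable](xs []K, x K) bool {
	for _, v := range xs {
		if v == x {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(contains([]string{"a", "b"}, "b")) // true
}
```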

If you think slice equality should incorporate element equality, here's an example for you:

```
type Slice1000[T any] struct {
    xs *[1000]T
    len, cap int
}

func (s Slice1000[T]) Get(i int) T {
    // ...
    return s.xs[i]
}

func (s Slice1000[T]) Set(i int, x T) {
    // ...
    s.xs[i] = x
}

var xs1, xs2 [1000]int

var a = Slice1000[int]{&xs1, 1000, 1000}
var b = Slice1000[int]{&xs2, 1000, 1000}
var c = a == b
```

Do you expect `c` to be true? If not (it's false, by the way), then why would you expect `make([]int, 2) == make([]int, 2)` to be true?

Any thoughts?

Kurtis Rader

May 2, 2022, 11:22:34 PM
to will....@gmail.com, golang-nuts
On Mon, May 2, 2022 at 7:44 PM will....@gmail.com <will....@gmail.com> wrote:
```
type Slice1000[T any] struct {
    xs *[1000]T
    len, cap int
}

func (s Slice1000[T]) Get(i int) T {
    // ...
    return s.xs[i]
}

func (s Slice1000[T]) Set(i int, x T) {
    // ...
    s.xs[i] = x
}

var xs1, xs2 [1000]int

var a = Slice1000[int]{&xs1, 1000, 1000}
var b = Slice1000[int]{&xs2, 1000, 1000}
var c = a == b
```

Do you expect `c` to be true? If not (it's false, by the way), then why would you expect `make([]int, 2) == make([]int, 2)` to be true?

No. Did you actually try your hypothetical `make([]int, 2) == make([]int, 2)`? When I do so using the source below this reply the Go compiler emits the error "slice can only be compared to nil". Which is what I expect given the specification for the Go language. This seems like an example of the XY Problem. What caused you to open this thread?

```
package main

import (
	"fmt"
)

func main() {
	fmt.Printf("%v\n", make([]int, 2) == make([]int, 2))
}
```


--
Kurtis Rader
Caretaker of the exceptional canines Junior and Hank

Will Faught

May 3, 2022, 12:41:55 AM
to Kurtis Rader, golang-nuts
You seem to have misunderstood the point. It's an idea for changing the language. You're just demonstrating the current behavior, which is what would be changed. The argument is to make `make([]int, 2) == make([]int, 2)` legal, and evaluate to false.

Rob Pike

May 3, 2022, 12:58:46 AM
to Will Faught, Kurtis Rader, golang-nuts
> * Functions: Compare the corresponding memory addresses. The time complexity is constant.

There are cases involving closures, generated trampolines, late binding and other details that mean that doing this will either eliminate many optimization possibilities or restrict the compiler too much or cause surprising results. We disabled function comparison for just these reasons. It used to work this way, but made closures surprising, so we backed out and allow comparison only to nil.

> * Maps: Compare the corresponding `*runtime.hmap` (pointer) values. The time complexity is constant.
> * Slices: Compare the corresponding `runtime.slice` (non-pointer struct) values. The time complexity is constant.

In LISP terms, these implementations do something more like `eq`, not `equal`. I want to know if the slices or maps are _equivalent_, not if they point to identical memory. No one wants this semantics for slice equality. Checking if they are equivalent raises difficult issues around recursion, slices that point to themselves, and other problems that prevent a clean, efficient solution.

Believe me, if equality for these types was efficient _and_ useful, it would already be done.

-rob

Will Faught

May 3, 2022, 2:32:18 AM
to Rob Pike, Kurtis Rader, golang-nuts
> There are cases involving closures, generated trampolines, late binding and other details that mean that doing this will either eliminate many optimization possibilities or restrict the compiler too much or cause surprising results. We disabled function comparison for just these reasons. It used to work this way, but made closures surprising, so we backed out and allow comparison only to nil.

That's interesting. I didn't know that. :)

When I run:

```
func f() {
    x := func() {}
    y := func() {}
    fmt.Printf("%#v %#v %#v %#v\n", x, y, func() {}, func() {})
}

func g() {}

func main() {
    fmt.Printf("%#v %#v %#v %#v\n", f, g, func() {}, func() {})
    f()
}
```

I get:

```
(func())(0x108ac80) (func())(0x108ad40) (func())(0x108ad60) (func())(0x108ad80)
(func())(0x108ac00) (func())(0x108ac20) (func())(0x108ac40) (func())(0x108ac60)
```

I don't know where those integer values are coming from, but those are what I meant by memory addresses. They seem to be unique per function value. Can't the runtime calculate those same values for comparisons?

> In LISP terms, these implementations do something more like `eq`, not `equal`. I want to know if the slices or maps are _equivalent_, not if they point to identical memory. No one wants this semantics for slice equality. Checking if they are equivalent raises difficult issues around recursion, slices that point to themselves, and other problems that prevent a clean, efficient solution.

Can't the same argument be made for pointer comparisons? Why should we compare pointer values when doing so won't compare the referenced, logical values? Because comparing just the pointer values is fast, pointers are small, and you can conclude logical equivalence if they're equal. Sometimes shallow/literal comparisons are useful, and sometimes deep/logical comparisons are useful.

Just as pointer comparisons are shallow, so too are comparisons for types that contain pointers. I included the Slice1000 example above specifically to address your point. Based on your argument here, I assume your answer to the question about `c` in that example would be "yes"; however, the answer is no, according to current Go behavior. Comparison of structs containing pointers does a shallow comparison of the pointer value, not the value it references. My argument is that under the hood, Go slices work the same way.

I'm proposing a shallow comparison, not a deep comparison, and that's arguably a feature here. I highlighted the time complexity for a reason, because I recall someone in the Go Team at one point arguing somewhere that doing a logical comparison would be too slow, since one of the benefits of Go comparisons is that their time complexity is small and well-understood. Checking for equality for your struct-based type won't ever cause your program to slow or hang; it's "safe."

The point isn't to provide equivalence operations; it's to provide useful comparison operations that are consistent with the other types' comparison operations, to make all types consistent and simplify the language. We could provide a separate equivalence operation, perhaps something like `===` that behaves like `reflect.DeepEqual`, but that's a separate issue. Shallow slice comparisons do allow you to conclude that elements are equal if slices compare equal, and we can still iterate slices manually to compare elements.

Axel Wagner

May 3, 2022, 2:45:28 AM
to Will Faught, Rob Pike, Kurtis Rader, golang-nuts
On Tue, May 3, 2022 at 8:32 AM Will Faught <will....@gmail.com> wrote:
>> There are cases involving closures, generated trampolines, late binding and other details that mean that doing this will either eliminate many optimization possibilities or restrict the compiler too much or cause surprising results. We disabled function comparison for just these reasons. It used to work this way, but made closures surprising, so we backed out and allow comparison only to nil.
>
> That's interesting. I didn't know that. :)
>
> When I run:
>
> ```
> func f() {
>     x := func() {}
>     y := func() {}
>     fmt.Printf("%#v %#v %#v %#v\n", x, y, func() {}, func() {})
> }
>
> func g() {}
>
> func main() {
>     fmt.Printf("%#v %#v %#v %#v\n", f, g, func() {}, func() {})
>     f()
> }
> ```
>
> I get:
>
> ```
> (func())(0x108ac80) (func())(0x108ad40) (func())(0x108ad60) (func())(0x108ad80)
> (func())(0x108ac00) (func())(0x108ac20) (func())(0x108ac40) (func())(0x108ac60)
> ```
>
> I don't know where those integer values are coming from, but those are what I meant by memory addresses. They seem to be unique per function value. Can't the runtime calculate those same values for comparisons?

If function comparison is possible, the spec has to define what the result is. Otherwise, the validity of a Go program would depend on the implementation. We want to avoid that as much as possible.

You are right that a func value is always represented as a pointer, so those pointers can always be compared. The difficulty is putting restrictions on which func values are represented by the same pointer without restricting the implementation too much, while still keeping the comparison useful.

FWIW, the only really useful way to make func comparable, IMO, would be for any comparison to be false (unless both funcs are nil). This would make == irreflexive for func values, but it already is for NaN, so that alone wouldn't be an exclusion criterion. Of course, the reason we are fine with NaNs is that they are only a small subset of floats and we want floats to be comparable in general - an irreflexive == for *all* values of a given type would be pretty useless.
 

Jan Mercl

May 3, 2022, 2:59:19 AM
to Will Faught, Rob Pike, Kurtis Rader, golang-nuts
On Tue, May 3, 2022 at 8:32 AM Will Faught <will....@gmail.com> wrote:

> Just as pointer comparisons are shallow, so too are comparisons for types that contain pointers.

Pointer comparisons are not shallow. Comparing two pointers compares the entire values. a == b and *a == *b compare different values, but in both cases, and always, the entire values. (As far as the semantics of the Go '==' operator are concerned; equality per se can be defined in many other ways.)

Axel Wagner

May 3, 2022, 3:30:47 AM
to golang-nuts
On Tue, May 3, 2022 at 8:32 AM Will Faught <will....@gmail.com> wrote:
> Can't the same argument be made for pointer comparisons?

I think what it comes down to is: yes, this argument can be made for pointers as well, but it would be more controversial. There is no semantic that is absolutely more or less confusing. But, at least that's the argument, it's less controversial for pointers to be compared as they are than it would be for slices.

For pointers there are essentially two ways to define comparisons: 1. You compare the pointees, which can lead to bad results, because you can easily build circular pointer structures, or 2. you compare the pointers, which is easy to specify and well-defined. Note that even that leads to ambiguities which sometimes (but relatively rarely) come up - for example, the spec doesn't say if new(struct{}) == new(struct{}). But apart from this rarely important edge-case, it is easy to specify an unsurprising pointer comparison.
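
Both behaviors are easy to observe for non-zero-size types (the zero-size case is exactly the one the spec leaves open):

```
package main

import "fmt"

func main() {
	a, b := new(int), new(int)
	*a, *b = 7, 7
	fmt.Println(a == b)   // false: distinct non-zero-size allocations
	fmt.Println(*a == *b) // true: the pointees are equal
	// For zero-size types the spec leaves the result unspecified:
	// new(struct{}) == new(struct{}) may evaluate either way.
}
```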

For slices, even with your definition, there are questions. For example, should s[0:0] == s[0:0:0], for non-empty slices? That is, should capacity matter? Should make([]T, 0) == make([]T, 0)? That is, what if the "pointer" in the slice header doesn't actually mean anything, as the slice has capacity 0?
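
Those header differences are observable today via len and cap; the == result is hypothetical, since the comparison itself is what's being proposed:

```
package main

import "fmt"

func main() {
	s := make([]int, 4)
	a := s[0:0]   // empty, but still points into s's array
	b := s[0:0:0] // empty with zero capacity
	fmt.Println(len(a), cap(a)) // 0 4
	fmt.Println(len(b), cap(b)) // 0 0
	// Under the proposal, a == b would be false because the
	// capacities (and hence the slice headers) differ.
}
```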

The thing is, for pointers there pretty much is exactly one *useful* way to define comparison. For slices, there are multiple useful ways to define it, none of them *clearly* superior to any of the others.

Ian Lance Taylor

May 3, 2022, 2:14:10 PM
to Will Faught, Rob Pike, Kurtis Rader, golang-nuts
On Mon, May 2, 2022 at 11:32 PM Will Faught <will....@gmail.com> wrote:
>>
>> There are cases involving closures, generated trampolines, late
>> binding and other details that mean that doing this will either
>> eliminate many optimization possibilities or restrict the compiler too
>> much or cause surprising results. We disabled function comparison for
>> just these reasons. It used to work this way, but made closures
>> surprising, so we backed out and allow comparison only to nil.
>
>
> That's interesting. I didn't know that. :)
>
> When I run:
>
> ```
> func f() {
> x := func() {}
> y := func() {}
> fmt.Printf("%#v %#v %#v %#v\n", x, y, func() {}, func() {})
> }
>
> func g() {}
>
> func main() {
> fmt.Printf("%#v %#v %#v %#v\n", f, g, func() {}, func() {})
> f()
> }
> ```
>
> I get:
>
> ```
> (func())(0x108ac80) (func())(0x108ad40) (func())(0x108ad60) (func())(0x108ad80)
> (func())(0x108ac00) (func())(0x108ac20) (func())(0x108ac40) (func())(0x108ac60)
> ```
>
> I don't know where those integer values are coming from, but those are what I meant by memory addresses. They seem to be unique per function value. Can't the runtime calculate those same values for comparisons?

Yes, it can. That's not the issue. The issue is that whether those pointer values are equal for two func values is not intuitive at the language level. When using shared libraries (-buildmode=shared) you can get two different pointer values for references to the same function. When using method values you can get the same pointer value for references to method values of different expressions of the same type. When using closures you will sometimes get the same value, sometimes different values, depending on the implementation and the escape analysis done by the compiler.



> The point isn't to provide equivalence operations; it's to provide useful comparison operations that are consistent with the other types' comparison operations, to make all types consistent and simplify the language. We could provide a separate equivalence operation, perhaps something like `===` that behaves like `reflect.DeepEquals`, but that's a separate issue. Shallow slice comparisons do allow you to conclude that elements are equal if slices compare equal, and we can still iterate slices manually to compare elements.

It's important that Go operators be intuitive for programmers. For example, many new Java programmers find that the == operator for strings is not intuitive in Java. What is people's intuition for slice equality? I think that different people make different assumptions. Not supporting the == operators ensures that nobody gets confused.

Ian

Will Faught

May 3, 2022, 6:15:53 PM
to Jan Mercl, Rob Pike, Kurtis Rader, golang-nuts
I'm not sure we're on the same page in terminology. I meant shallow as opposed to deep. E.g. pointer equality in terms of `==` vs. `reflect.DeepEqual`. Unequal pointers can reference values that are equivalent.

will....@gmail.com

May 3, 2022, 7:40:20 PM
to golang-nuts
On Tuesday, May 3, 2022 at 12:30:47 AM UTC-7 axel.wa...@googlemail.com wrote:
> On Tue, May 3, 2022 at 8:32 AM Will Faught <will....@gmail.com> wrote:
>> Can't the same argument be made for pointer comparisons?
>
> I think what it comes down to is: Yes, this argument can be made for pointers as well. But it would be more controversial. There is no absolutely more/less confusing semantic. But, at least that's the argument, it's less controversial for pointers to be compared as they are, than it would be for slices.


I don't think controversy is a good counterargument. It's vague, unquantifiable, and subjective. I could easily claim the opposite, while also giving no proof. I would find the underlying reasons for the supposed controversy far more useful and productive. I understand that some people might expect a deep comparison, which is why I gave the Slice1000 example to explain the reasoning behind choosing a shallow comparison.

Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible. For example, the range operation can produce per iteration an element index and an element value for slices, but a byte index and a rune value for strings. Personally, I found the byte index counterintuitive, as I expected the value to count runes like slice elements, but upon reflection, it makes sense, because you can easily count iterations yourself to have both byte indexes and rune counts, but you can't so trivially do the opposite. Should we omit ranging over strings entirely just because someone, somewhere, somehow might have a minority intuition, or if something is generally counterintuitive, but still the best approach?

The best path is to pick the best way for the common case, and make the other way possible. If slices are compared shallowly, we can still compare them deeply ourselves, or with `reflect.DeepEqual`; but if slices are compared deeply, then we lose the ability to compare them shallowly when that's appropriate.
 
> For pointers there are essentially two ways to define comparisons: 1. You compare the pointees, which can lead to bad results, because you can easily build circular pointer structures, or 2. you compare the pointers, which is easy to specify and well-defined. Note that even that leads to ambiguities which sometimes (but relatively rarely) come up - for example, the spec doesn't say if new(struct{}) == new(struct{}). But apart from this rarely important edge-case, it is easy to specify an unsurprising pointer comparison.


As a tangent, I don't understand why this wasn't made unambiguous in the language spec. Why not have `new(struct{})` always allocate a new pointer? Who's allocating all these empty structs on the heap where this is something that needs to be optimized for? Is that really worth complicating the language? 🤔

I would argue this isn't really a deficiency with pointer comparisons, but rather with `new`. If `new(struct{}) == new(struct{})` is true, then they point to the same value in memory; that's all it means. Pointer comparisons are still valid in that case, it's just that the behavior of `new` can vary.
 
> For slices, even with your definition, there are questions. For example, should s[0:0] == s[0:0:0], for non-empty slices? That is, should capacity matter? Should make([]T, 0) == make([]T, 0)? That is, what if the "pointer" in the slice header doesn't actually mean anything, as the slice has capacity 0?


I specified slice comparisons like this:

> Slices: Compare the corresponding `runtime.slice` (non-pointer struct) values. The time complexity is constant.

For those unfamiliar, `runtime.slice` is implemented like this:

```
type slice struct {
    array unsafe.Pointer
    len   int
    cap   int
}
```

It contains the array pointer, the length, and the capacity. Doing a shallow comparison of `runtime.slice` will do a comparison of only `unsafe.Pointer`, `int`, and `int`. If any of those three fields are different, the slices won't compare as equal. This is explicitly demonstrated in the examples:

```
var S1 = make([]int, 2)
var S2 = make([]int, 2)

var _ = S1 == S1 // True
var _ = S1 != S2 // True

var _ = S1 == S1[:] // True because the lengths, capacities, and pointers are equal
var _ = S1 != S1[:1] // True because the lengths aren't equal
var _ = S1[:1] != S1[:1:1] // True because the capacities aren't equal
var _ = S1 != append(S1, 0)[:2:2] // True because the pointers aren't equal
```

The last four lines relate to the fields of `runtime.slice`.

The reason to include capacity in comparisons, aside from it being convenient when doing comparisons, is that the capacity is an observable attribute of slices in regular code. Programmers are encouraged to reason about slice capacity, so it should be included in comparisons. `cap(S1[:1]) != cap(S1[:1:1])` is true, therefore `S1[:1] != S1[:1:1]` should be true, even though `len(S1[:1]) == len(S1[:1:1])` is true.
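
These capacity effects are easy to verify; a quick check of the claim above:

```
package main

import "fmt"

func main() {
	s1 := make([]int, 2)
	fmt.Println(cap(s1[:1]))   // 2: a two-index slice keeps the original capacity
	fmt.Println(cap(s1[:1:1])) // 1: a three-index slice sets the capacity explicitly
	fmt.Println(len(s1[:1]) == len(s1[:1:1])) // true: the lengths match
}
```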

I assume `make([]T, 0)` sets the array pointer to nil, because `reflect.DeepEqual` says two of those expressions are equal.

> The thing is, for pointers there pretty much is exactly one *useful* way to define comparison. For slices, there are multiple useful ways to define it, none of them *clearly* superior to any of the others.


We could do deep comparisons for pointers, but we don't, because shallow comparisons are useful, and we can dereference pointers when we need to do deep comparisons. As I argued above, this is exactly the same situation for slices regarding shallow and deep comparisons.

will....@gmail.com

May 3, 2022, 8:49:14 PM
to golang-nuts
Interesting. This certainly differs from how I pictured functions working (more like C function pointers with extra steps). I'd be curious to know more about the details. Do you know if that's documented somewhere?

Between Rob's answer and yours, it makes sense why function comparisons were removed. It's too bad they didn't work out.

Just curious, what would be the cost if things were rejiggered under the hood to make function comparisons work? Would any language features be impossible, or would it be worse compiler/runtime complexity/performance, or both?
 

> The point isn't to provide equivalence operations; it's to provide useful comparison operations that are consistent with the other types' comparison operations, to make all types consistent and simplify the language. We could provide a separate equivalence operation, perhaps something like `===` that behaves like `reflect.DeepEqual`, but that's a separate issue. Shallow slice comparisons do allow you to conclude that elements are equal if slices compare equal, and we can still iterate slices manually to compare elements.

> It's important that Go operators be intuitive for programmers. For example, many new Java programmers find that the == operator for strings is not intuitive in Java. What is people's intuition for slice equality? I think that different people make different assumptions. Not supporting the == operators ensures that nobody gets confused.


Regarding expectations, many new Java programmers came from JavaScript (like myself), so the confusion is understandable, but it's not something that necessarily needs to be considered. Arguably, old Java programmers would find `==` confusing for structs, since it doesn't compare references. Bad assumptions are best prevented by proper education and training, not by omitting language features. Wrong expectations aren't the same as foot-guns.

Many new Go programmers expect assertions, and yet Go doesn't have them.

Regarding intuitions, I'll copy part of my response to Axel here:

>Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible. For example, the range operation can produce per iteration an element index and an element value for slices, but a byte index and a rune value for strings. Personally, I found the byte index counterintuitive, as I expected the value to count runes like slice elements, but upon reflection, it makes sense, because you can easily count iterations yourself to have both byte indexes and rune counts, but you can't so trivially do the opposite. Should we omit ranging over strings entirely just because someone, somewhere, somehow might have a minority intuition, or if something is generally counterintuitive, but still the best approach?
>
>The best path is to pick the best way for the common case, and make the other way possible. If slices are compared shallowly, we can still compare them deeply ourselves, or with `reflect.DeepEqual`; but if slices are compared deeply, then we lose the ability to compare them shallowly when that's appropriate.

If `len(slice1) == len(slice2)` is true, and if `cap(slice1) == cap(slice2)` is true, and if `slice1[0] = newValue; slice2[0] == newValue` is true, then it's intuitive to conclude that `slice1 == slice2` is true, and they contain equal elements. This is no different than doing a shallow comparison of pointers: if `p1 == p2` is true, then it's intuitive to conclude that they reference equal values. Is it possible for p1 and p2 to not be equal, but their referenced values to be equal? Sure. Is it possible for slice1 and slice2 to not be equal, but their elements to be equal? Sure. If we want to do deep comparisons for these types, we can choose to opt into that using other means like `reflect.DeepEqual` or manual iteration (or a new `===` equivalence operation, who knows).
 
Ian

Will Faught

May 3, 2022, 9:08:41 PM
to Ian Lance Taylor, Rob Pike, Kurtis Rader, golang-nuts
My apologies, it seems that "reply all" in the Google Groups UI doesn't send email to individuals like "reply all" in Gmail does, just the group. Response copied below.

> Yes, it can. That's not the issue. The issue is that whether those pointer values are equal for two func values is not intuitive at the language level. When using shared libraries (-buildmode=shared) you can get two different pointer values for references to the same function. When using method values you can get the same pointer value for references to method values of different expressions of the same type. When using closures you will sometimes get the same value, sometimes different values, depending on the implementation and the escape analysis done by the compiler.

Interesting. This certainly differs from how I pictured functions working (more like C function pointers with extra steps). I'd be curious to know more about the details. Do you know if that's documented somewhere?

Between Rob's answer and yours, it makes sense why function comparisons were removed. It's too bad they didn't work out.

Just curious, what would be the cost if things were rejiggered under the hood to make function comparisons work? Would any language features be impossible, or would it be worse compiler/runtime complexity/performance, or both?
> The point isn't to provide equivalence operations; it's to provide useful comparison operations that are consistent with the other types' comparison operations, to make all types consistent and simplify the language. We could provide a separate equivalence operation, perhaps something like `===` that behaves like `reflect.DeepEqual`, but that's a separate issue. Shallow slice comparisons do allow you to conclude that elements are equal if slices compare equal, and we can still iterate slices manually to compare elements.

> It's important that Go operators be intuitive for programmers. For example, many new Java programmers find that the == operator for strings is not intuitive in Java. What is people's intuition for slice equality? I think that different people make different assumptions. Not supporting the == operators ensures that nobody gets confused.

Regarding expectations, many new Java programmers came from JavaScript (like myself), so the confusion is understandable, but it's not something that necessarily needs to be considered. Arguably, old Java programmers would find `==` confusing for structs, since it doesn't compare references. Bad assumptions are best prevented by proper education and training, not by omitting language features. Wrong expectations aren't the same as foot-guns.

Many new Go programmers expect assertions, and yet Go doesn't have them.

Regarding intuitions, I'll copy part of my response to Axel here:

>Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible. For example, the range operation can produce per iteration an element index and an element value for slices, but a byte index and a rune value for strings. Personally, I found the byte index counterintuitive, as I expected the value to count runes like slice elements, but upon reflection, it makes sense, because you can easily count iterations yourself to have both byte indexes and rune counts, but you can't so trivially do the opposite. Should we omit ranging over strings entirely just because someone, somewhere, somehow might have a minority intuition, or if something is generally counterintuitive, but still the best approach?
>
>The best path is to pick the best way for the common case, and make the other way possible. If slices are compared shallowly, we can still compare them deeply ourselves, or with `reflect.DeepEqual`; but if slices are compared deeply, then we lose the ability to compare them shallowly when that's appropriate.

If `len(slice1) == len(slice2)` is true, and if `cap(slice1) == cap(slice2)` is true, and if `slice1[0] = newValue; slice2[0] == newValue` is true, then it's intuitive to conclude that `slice1 == slice2` is true, and they contain equal elements. This is no different than doing a shallow comparison of pointers: if `p1 == p2` is true, then it's intuitive to conclude that they reference equal values. Is it possible for p1 and p2 to not be equal, but their referenced values to be equal? Sure. Is it possible for slice1 and slice2 to not be equal, but their elements to be equal? Sure. If we want to do deep comparisons for these types, we can choose to opt into that using other means like `reflect.DeepEqual` or manual iteration (or a new `===` equivalence operation, who knows).
 
Ian 

Ian Lance Taylor

May 3, 2022, 10:28:08 PM
to Will Faught, Rob Pike, Kurtis Rader, golang-nuts
On Tue, May 3, 2022 at 6:08 PM Will Faught <will....@gmail.com> wrote:
>
> My apologies, it seems that "reply all" in the Google Groups UI doesn't send email to individuals like "reply all" in Gmail does, just the group. Response copied below.
>
>> Yes, it can. That's not the issue. The issue is that whether those
>> pointer values are equal for two func values is not intuitive at the
>> language level. When using shared libraries (-buildmode=shared) you
>> can get two different pointer values for references to the same
>> function. When using method values you can get the same pointer value
>> for references to method values of different expressions of the same
>> type. When using closures you will sometimes get the same value,
>> sometimes different values, depending on the implementation and the
>> escape analysis done by the compiler.
>
>
> Interesting. This certainly differs from how I pictured functions working (more like C function pointers with extra steps). I'd be curious to know more about the details. Do you know if that's documented somewhere?

I'm not aware of any specific documentation on the topic, sorry.

The fact that C function pointers are guaranteed to compare equal can
actually be a performance hit at program startup for dynamically
linked C programs. I wrote up the details of the worst case several
years ago at https://www.airs.com/blog/archives/307. There are other
lesser issues.


> Just curious, what would be the cost if things were rejiggered under the hood to make function comparisons work? Would any language features be impossible, or would it be worse compiler/runtime complexity/performance, or both?

Regardless of compiler/runtime issues, this would introduce language
complexity, similar to the issues with slices. We would have to
precisely specify when two func values are equal and when they are
not. There is no intuitive answer to that.

Does a program like this print true or false?

func F() func() int { return func() int { return 0 } }
func G() { fmt.Println(F() == F()) }

What about a program like this:

func H(i int) func() *int { return func() *int { return &i } }
func J() { fmt.Println(H(0) == H(1)) }

Whatever we define for cases like this some people will be ready to
argue for a different choice. The costs of forcing a decision exceed
the benefits.
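[Ed.: For reference, the second example above can be probed behaviorally in current Go. Comparing the func values themselves is a compile-time error today, but the closures' captured environments are observably distinct:]

```go
package main

import "fmt"

// Ian's H, verbatim: each call to H captures its own copy of i.
func H(i int) func() *int { return func() *int { return &i } }

func main() {
	p, q := H(0)(), H(1)()
	fmt.Println(p == q)  // false: each closure captured a distinct variable
	fmt.Println(*p, *q)  // 0 1
	// H(0) == H(1) itself does not compile today:
	// invalid operation: func can only be compared to nil.
}
```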


> Regarding expectations, many new Java programmers came from JavaScript (like myself), so the confusion is understandable, but it's not something that necessarily needs to be considered. Arguably, old Java programmers would find `==` confusing for structs, since it doesn't compare references. Bad assumptions are best prevented by proper education and training, not by omitting language features. Wrong expectations aren't the same as foot-guns.

I don't agree. Unexpected behavior is a footgun. Go is intended to
be a simple language. When special explanation is required, something
has gone wrong.

In saying this I don't at all claim that Go is perfect. There are
places where we made mistakes. But I don't think that our decision to
not define == on slices or functions is one of them.


> >Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible. For example, the range operation can produce per iteration an element index and an element value for slices, but a byte index and a rune value for strings. Personally, I found the byte index counterintuitive, as I expected the value to count runes like slice elements, but upon reflection, it makes sense, because you can easily count iterations yourself to have both byte indexes and rune counts, but you can't so trivially do the opposite. Should we omit ranging over strings entirely just because someone, somewhere, somehow might have a minority intuition, or if something is generally counterintuitive, but still the best approach?

The fact that range over []byte and string are different may well have
been a mistake. It can certainly be convenient, but it trips people
up. (Certainly others may disagree with me on this.)


> >The best path is to pick the best way for the common case, and make the other way possible

I do not agree. The best path is to make no choice, and force the
program writer to be explicit about what they want.

Ian

Bakul Shah

unread,
May 3, 2022, 10:46:59 PM5/3/22
to Ian Lance Taylor, Will Faught, Rob Pike, Kurtis Rader, golang-nuts
On May 3, 2022, at 7:27 PM, Ian Lance Taylor <ia...@golang.org> wrote:
>
> Does a program like this print true or false?
>
> func F() func() int { return func() int { return 0 } }
> func G() { fmt.Println(F() == F()) }
>
> What about a program like this:
>
> func H(i int) func() *int { return func() *int { return &i } }
> func J() { fmt.Println(H(0) == H(1)) }

Note that in Scheme eq? works for functions as one would expect.

> (define f (lambda (x) x))
> (eq? f f) => #t
> (define g f)
> (eq? f g) => #t

But
> (eq? f (lambda (x) x)) => #f
> (define g (lambda () (lambda (x) x)))
> (eq? (g) (g)) => #f

One can make the case that each closure would be a fresh instance.
This is more clear with a slightly more complex version:

(define (counter m) (let ((n m)) (lambda () (set! n (+ n 1)) n)))

And equal? is unspecified in the Scheme RnRS but would typically be
implemented to return #f.

> (equal? (lambda (x) x) (lambda (x) x)) => #f

Technically (lambda (x) x) & (lambda (y) y) behave identically, but
proving the more general case of this even in an interpreter would
be hard, if not impossible.

Go pretty much has the same considerations (but for a more complex
data model).
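[Ed.: The Scheme counter translates directly to Go, where the same considerations apply — each call produces a fresh closure over its own environment, even though the two closures share a literal:]

```go
package main

import "fmt"

// counter mirrors the Scheme example: each call returns a fresh
// closure with its own captured n.
func counter(m int) func() int {
	n := m
	return func() int {
		n++
		return n
	}
}

func main() {
	c1, c2 := counter(0), counter(0)
	fmt.Println(c1(), c1(), c2()) // 1 2 1: separate environments
}
```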


Axel Wagner

unread,
May 4, 2022, 1:59:13 AM5/4/22
to golang-nuts
On Wed, May 4, 2022 at 1:40 AM will....@gmail.com <will....@gmail.com> wrote:
I don't think controversy is a good counterargument. It's vague, unquantifiable, and subjective. I could easily claim the opposite, while also giving no proof.

Sure. It was not intended to be an argument. It was intended to be an explanation.
I can't prove to you whether or not it is a good idea to design the language as it is. I can only try to explain why it was.
 
Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible. For example, the range operation can produce per iteration an element index and an element value for slices, but a byte index and a rune value for strings. Personally, I found the byte index counterintuitive, as I expected the value to count runes like slice elements, but upon reflection, it makes sense, because you can easily count iterations yourself to have both byte indexes and rune counts, but you can't so trivially do the opposite. Should we omit ranging over strings entirely just because someone, somewhere, somehow might have a minority intuition, or if something is generally counterintuitive, but still the best approach?

I think this is an interesting example, in that I've thought a couple of times in the past that it might have been a mistake to introduce the current semantics for strings. I think there are good arguments to make ranging over strings behave as ranging over []byte and consequently, there are arguments that we maybe should not have allowed either.

But the language is as it is, and we can't break compatibility.
 
The best path is to pick the best way for the common case, and make the other way possible. If slices are compared shallowly, we can still compare them deeply ourselves, or with `reflect.DeepEqual`

FTR, I don't think I ever found reflect.DeepEqual to give the semantics I want, when it comes to slices. In particular, it considers nil and empty slices to be different. Which is the right decision to make, probably, but it is almost never the semantics I want. Which is why I don't use reflect.DeepEqual, but use go-cmp, which gives me the option to configure that.

Note that it is totally possible to make comparable versions of slices as a library now. So at least the "make other ways possible" part is now done, with whatever semantics you want.

As a tangent, I don't understand why this wasn't made unambiguous in the language spec. Why not have `new(struct{})` always allocate a new pointer? Who's allocating all these empty structs on the heap where this is something that needs to be optimized for? Is that really worth complicating the language? 🤔

I don't know why that decision was made. I do believe there are some less obvious cases, where you at least have to add special casing in the implementation (e.g. make([]T, x) would have to check at runtime if x is 0). But I agree that it would probably be okay to change the spec here.
 
I would argue this isn't really a deficiency with pointer comparisons, but rather with `new`. If `new(struct{}) == new(struct{})` is true, then they point to the same value in memory; that's all it means. Pointer comparisons are still valid in that case, it's just that the behavior of `new` can vary.

Sure. That seems to be a distinction without a difference to me. Note that I didn't say it's a deficiency, quite the opposite. I said that pointer-comparisons work just fine, as they have (mostly) unambiguous and intuitive semantics. But the same is not true for slices and maps.
 
 
For slices, even with your definition, there are questions. For example, should s[0:0] == s[0:0:0], for non-empty slices? That is, should capacity matter? Should make([]T, 0) == make([]T, 0)? That is, what if the "pointer" in the slice header doesn't actually mean anything, as the slice has capacity 0?
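[Ed.: The capacity question is observable in current Go — s[0:0] and s[0:0:0] have the same length but behave differently under append:]

```go
package main

import "fmt"

func main() {
	s := make([]int, 3)
	a, b := s[0:0], s[0:0:0]
	fmt.Println(len(a), cap(a)) // 0 3
	fmt.Println(len(b), cap(b)) // 0 0
	// Appending to a reuses s's backing array; appending to b must allocate.
	_ = append(a, 1)
	fmt.Println(s[0]) // 1: a's append wrote into s's backing array
}
```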


I specified slice comparisons like this:

> Slices: Compare the corresponding `runtime.slice` (non-pointer struct) values. The time complexity is constant.

Yes. I understand what you suggested and I understood how it *would* work, if implemented that way. But why is that the best way to compare them? Doing it that way has a bunch of semantic implications, some of which are perhaps counterintuitive, which I tried to mention.

Note that the language doesn't mention "runtime.slice", BTW. So, even if we did it that way, we would have to phrase it as "two slices are equal, if they point to the same underlying array and have the same length and capacity", or somesuch. This would still not define whether make([]T, 0) == make([]T, 0), though.

So, even if we accepted that this was the "right" way to do it, it would still leave at least one question open.
 
I assume `make([]T, 0)` sets the array pointer to nil, because `reflect.DeepEqual` says two of those expressions are equal.

No, it does not. Otherwise, `make([]T, 0)` would be equal to `nil`. `make([]T, 0)` is allowed to use a constant element pointer or not - it must point to a zero-sized array. That pointer must be distinguishable from a nil slice, but can be arbitrary apart from that. Note, BTW, that the language also does not *force* an implementation to even use that representation. A slice could well be represented as `struct{ Ptr *T; Len int; Cap int; NonNil bool }`, if the implementation wants to. Or a myriad of other ways.

reflect.DeepEqual does not compare the pointers for capacity 0 slices, BTW. To drive home how ambiguous this question actually is, given that you assumed it would.
 
We could do deep comparisons for pointers

Not without allowing for a bunch of unpleasant consequences, at least. For example, this code would hang forever:

type T *T
var t1, t2 T
t1, t2 = &t2, &t1
fmt.Println(t1 == t2)

Note that it is *very common* to have circular pointer structures. For example, in a doubly-linked list.
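[Ed.: Under today's shallow pointer comparison, the example above compiles and terminates — it is only a naive recursive deep comparison that would chase the cycle forever (reflect.DeepEqual avoids this by tracking visited pointers):]

```go
package main

import "fmt"

type T *T

func main() {
	var t1, t2 T
	t1, t2 = &t2, &t1
	// Shallow comparison terminates: t1 and t2 hold different addresses.
	fmt.Println(t1 == t2) // false
	// A naive deep comparison would follow t1 -> t2 -> t1 -> ... forever.
}
```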
 

Will Faught

unread,
May 4, 2022, 2:01:44 AM5/4/22
to Ian Lance Taylor, Rob Pike, Kurtis Rader, golang-nuts
On Tue, May 3, 2022 at 7:27 PM Ian Lance Taylor <ia...@golang.org> wrote:
On Tue, May 3, 2022 at 6:08 PM Will Faught <will....@gmail.com> wrote:
>
> My apologies, it seems that "reply all" in the Google Groups UI doesn't send email to individuals like "reply all" in Gmail does, just the group. Response copied below.
>
>> Yes, it can. That's not the issue. The issue is that whether those
>> pointer values are equal for two func values is not intuitive at the
>> language level. When using shared libraries (-buildmode=shared) you
>> can get two different pointer values for references to the same
>> function. When using method values you can get the same pointer value
>> for references to method values of different expressions of the same
>> type. When using closures you will sometimes get the same value,
>> sometimes different values, depending on the implementation and the
>> escape analysis done by the compiler.
>
>
> Interesting. This certainly differs from how I pictured functions working (more like C function pointers with extra steps). I'd be curious to know more about the details. Do you know if that's documented somewhere?

I'm not aware of any specific documentation on the topic, sorry.

The fact that C function pointers are guaranteed to compare equal can
actually be a performance hit at program startup for dynamically
linked C programs.  I wrote up the details of the worst case several
years ago at https://www.airs.com/blog/archives/307.  There are other
lesser issues.


Thanks! :)
 

> Just curious, what would be the cost if things were rejiggered under the hood to make function comparisons work? Would any language features be impossible, or would it be worse compiler/runtime complexity/performance, or both?

Regardless of compiler/runtime issues, this would introduce language
complexity, similar to the issues with slices.  We would have to
precisely specify when two func values are equal and when they are
not.  There is no intuitive answer to that.

Does a program like this print true or false?

func F() func() int { return func() int { return 0 } }
func G() { fmt.Println(F() == F()) }


It would print false, because the function literal creates a new allocation (according to the rule I sketched out). I can see the desire to optimize that, but personally when I write code like that, I'm thinking, "and then return a new function." Causing an allocation isn't surprising behavior here, and so neither is uniqueness in terms of comparisons.
 
What about a program like this:

func H(i int) func() *int { return func() *int { return &i } }
func J() { fmt.Println(H(0) == H(1)) }


It would print false for the same reason.
 
Whatever we define for cases like this some people will be ready to
argue for a different choice.  The costs of forcing a decision exceed
the benefits.

So on the balance, the cost of making a decision is worth it for something big like dependencies or generics, but not function equality. Well, I guess that's fair enough, but it seems like one could use that kind of argument to undermine any language change, though, including dependencies and generics. It doesn't seem like the function equality rule I sketched out would add much, if any, language complexity. It's only one sentence: "Function values are equal if they were created by the same function literal or declaration."

> Regarding expectations, many new Java programmers came from JavaScript (like myself), so the confusion is understandable, but it's not something that necessarily needs to be considered. Arguably, old Java programmers would find `==` confusing for structs, since it doesn't compare references. Bad assumptions are best prevented by proper education and training, not by omitting language features. Wrong expectations aren't the same as foot-guns.

I don't agree.  Unexpected behavior is a footgun. 

I wrote that wrong expectations aren't foot-guns, not that unexpected behaviors aren't foot-guns. Wrong expectations, as in "I can call this pointer-receiver method on this unaddressable value," or "since zero values are useful, I can set keys and values in this zero-value map." Downloading a compiler for some new language I haven't bothered to learn, typing in Java-like stuff, and then being upset when it doesn't work isn't a problem of unexpected behavior, it's a problem of wrong expectations (usually misunderstandings or ignorance).
 
Go is intended to
be a simple language.  When special explanation is required, something
has gone wrong.


The Go language spec is full of special explanations. The section on comparisons is quite detailed and complicated. I recently had to ask here why the range operation doesn't work for type set unions of slices and maps, which you very kindly answered, if I remember correctly. How is slice equality different in terms of special explanation?

I've argued that slice comparisons make Go even simpler and more consistent with various examples, analogies, and so on. Please see my response to Axel, if you haven't already. Do you have a specific counter argument to any specific argument that I've made regarding simplicity or consistency?
 
In saying this I don't at all claim that Go is perfect.  There are
places where we made mistakes.  But I don't think that our decision to
not define == on slices or functions is one of them.


I'd like to be able to reach the same conclusions you have, in the same way you did, so that's what I'm trying to understand. It's difficult to understand your position when I probe your understanding with an argument, and I receive counter arguments like "this would introduce language complexity," "the costs of forcing a decision," "Go is intended to be a simple language," and "I don't at all claim that Go is perfect" that don't specifically address my points. Those arguments could be used to quash anything new, like dependencies or generics, so they don't seem to hold water by themselves, in my opinion.

Axel Wagner

unread,
May 4, 2022, 2:02:49 AM5/4/22
to golang-nuts
As for documentation for how `func` works BTW: This design doc for Go 1.1 is a good description and AFAIK mostly up to date. It doesn't mention how inlining decisions and dynamic linking affect the pointer values, though. That you would have to derive from first principles.

Jan Mercl

unread,
May 4, 2022, 2:14:04 AM5/4/22
to Ian Lance Taylor, Will Faught, Rob Pike, Kurtis Rader, golang-nuts
On Wed, May 4, 2022 at 4:27 AM Ian Lance Taylor <ia...@golang.org> wrote:

> In saying this I don't at all claim that Go is perfect. There are
> places where we made mistakes.

May I please ask you to share what you personally consider a mistake
and, if possible, what would you change if you can, say, travel back
in time?

I believe this could be a benefit to anyone thinking about/designing
nowadays a programming language that is similar to Go and/or C etc.

Thanks,

-j

Axel Wagner

unread,
May 4, 2022, 2:14:56 AM5/4/22
to golang-nuts
On Wed, May 4, 2022 at 8:01 AM Will Faught <will....@gmail.com> wrote:
Well, I guess that's fair enough, but it seems like one could use that kind of argument to undermine any language change, though, including dependencies and generics.

Yes. And people try, all the time. Which is exactly why Go's language design is not based on a committee with clear, mechanical rules of how decisions are made, but instead on consensus among a limited number of trusted designers, with input from the community - so that the designers can take feedback like that and evaluate it against their subjective (but hopefully somewhat consistent) views of what makes a good language.

I think one important aspect of all of that which maybe hasn't landed yet, is that language design is subjective. It's not done based on proofs, but mainly on the experience and preferences of the designers. That's why we have different programming languages (which is a good thing). If there was always a single "correct" choice, all programming languages would make that and all programming languages would be equal.

The argument is not "this would be ambiguous", the argument is "the designers of the language felt that it would be ambiguous and thought that in this case, the ambiguities outweighed the benefits of allowing it". It's a subjective decision, which can be explained, but it can't be proven.
 
It doesn't seem like the function equality rule I sketched out would add much, if any, language complexity. It's only one sentence: "Function values are equal if they were created by the same function literal or declaration."

> Regarding expectations, many new Java programmers came from JavaScript (like myself), so the confusion is understandable, but it's not something that necessarily needs to be considered. Arguably, old Java programmers would find `==` confusing for structs, since it doesn't compare references. Bad assumptions are best prevented by proper education and training, not by omitting language features. Wrong expectations aren't the same as foot-guns.

I don't agree.  Unexpected behavior is a footgun. 

I wrote that wrong expectations aren't foot-guns, not that unexpected behaviors aren't foot-guns. Wrong expectations, as in "I can call this pointer-receiver method on this unaddressable value," or "since zero values are useful, I can set keys and values in this zero-value map." Downloading a compiler for some new language I haven't bothered to learn, typing in Java-like stuff, and then being upset when it doesn't work isn't a problem of unexpected behavior, it's a problem of wrong expectations (usually misunderstandings or ignorance).
 
Go is intended to
be a simple language.  When special explanation is required, something
has gone wrong.


The Go language spec is full of special explanations. The section on comparisons is quite detailed and complicated. I recently had to ask here why the range operation doesn't work for type set unions of slices and maps, which you very kindly answered, if I remember correctly. How is slice equality different in terms of special explanation?

I've argued that slice comparisons make Go even simpler and more consistent with various examples, analogies, and so on. Please see my response to Axel, if you haven't already. Do you have a specific counter argument to any specific argument that I've made regarding simplicity or consistency?
 
In saying this I don't at all claim that Go is perfect.  There are
places where we made mistakes.  But I don't think that our decision to
not define == on slices or functions is one of them.


I'd like to be able to reach the same conclusions you have, in the same way you did, so that's what I'm trying to understand. It's difficult to understand your position when I probe your understanding with an argument, and I receive counter arguments like "this would introduce language complexity," "the costs of forcing a decision," "Go is intended to be a simple language," and "I don't at all claim that Go is perfect" that don't specifically address my points. Those arguments could be used to quash anything new, like dependencies or generics, so they don't seem to hold water by themselves, in my opinion.
 

> >Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible. For example, the range operation can produce per iteration an element index and an element value for slices, but a byte index and a rune value for strings. Personally, I found the byte index counterintuitive, as I expected the value to count runes like slice elements, but upon reflection, it makes sense, because you can easily count iterations yourself to have both byte indexes and rune counts, but you can't so trivially do the opposite. Should we omit ranging over strings entirely just because someone, somewhere, somehow might have a minority intuition, or if something is generally counterintuitive, but still the best approach?

The fact that range over []byte and string are different may well have
been a mistake.  It can certainly be convenient, but it trips people
up.  (Certainly others may disagree with me on this.)


> >The best path is to pick the best way for the common case, and make the other way possible

I do not agree.  The best path is to make no choice, and force the
program writer to be explicit about what they want.

Ian

--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts...@googlegroups.com.

Jan Mercl

unread,
May 4, 2022, 2:22:53 AM5/4/22
to Will Faught, Rob Pike, Kurtis Rader, golang-nuts
On Wed, May 4, 2022 at 12:15 AM Will Faught <will....@gmail.com> wrote:

> I'm not sure we're on the same page in terminology. I meant shallow as opposed to deep. E.g. pointer equality in terms of `==` vs. `reflect.DeepEqual`. Unequal pointers can reference values that are equivalent.

I think we are on the same page. This thread is about comparisons on
the language level, not about some library functions. Thus we can
ignore reflect.DeepEqual and focus on the equality operator only. That
makes things simpler.

Now, what I wanted to point out is that the equality operator, as
defined in Go, always compares the values of its operands. It's a
simple and easy to remember rule. It could have been defined in a
different way, sure. Anyway, it was not, and talking about deep vs
shallow comparison of the Go equality operator does not make the
discussion any clearer - because it does not, in our case, apply.

Will Faught

unread,
May 4, 2022, 2:43:10 AM5/4/22
to Axel Wagner, golang-nuts
On Tue, May 3, 2022 at 10:59 PM 'Axel Wagner' via golang-nuts <golan...@googlegroups.com> wrote:
On Wed, May 4, 2022 at 1:40 AM will....@gmail.com <will....@gmail.com> wrote:
I don't think controversy is a good counterargument. It's vague, unquantifiable, and subjective. I could easily claim the opposite, while also giving no proof.

Sure. It was not intended to be an argument. It was intended to be an explanation.
I can't prove to you whether or not it is a good idea to design the language as it is. I can only try to explain why it was.
 
Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible. For example, the range operation can produce per iteration an element index and an element value for slices, but a byte index and a rune value for strings. Personally, I found the byte index counterintuitive, as I expected the value to count runes like slice elements, but upon reflection, it makes sense, because you can easily count iterations yourself to have both byte indexes and rune counts, but you can't so trivially do the opposite. Should we omit ranging over strings entirely just because someone, somewhere, somehow might have a minority intuition, or if something is generally counterintuitive, but still the best approach?

I think this is an interesting example, in that I've thought a couple of times in the past that it might have been a mistake to introduce the current semantics for strings. I think there are good arguments to make ranging over strings behave as ranging over []byte and consequently, there are arguments that we maybe should not have allowed either.

But the language is as it is, and we can't break compatibility.
 
The best path is to pick the best way for the common case, and make the other way possible. If slices are compared shallowly, we can still compare them deeply ourselves, or with `reflect.DeepEqual`

FTR, I don't think I ever found reflect.DeepEqual to give the semantics I want, when it comes to slices. In particular, it considers nil and empty slices to be different. Which is the right decision to make, probably, but it is almost never the semantics I want. Which is why I don't use reflect.DeepEqual, but use go-cmp, which gives me the option to configure that.

Note that it is totally possible to make comparable versions of slices as a library now. So at least the "make other ways possible" part is now done, with whatever semantics you want.

As a tangent, I don't understand why this wasn't made unambiguous in the language spec. Why not have `new(struct{})` always allocate a new pointer? Who's allocating all these empty structs on the heap where this is something that needs to be optimized for? Is that really worth complicating the language? 🤔

I don't know why that decision was made. I do believe there are some less obvious cases, where you at least have to add special casing in the implementation (e.g. make([]T, x) would have to check at runtime if x is 0). But I agree that it would probably be okay to change the spec here.
 
I would argue this isn't really a deficiency with pointer comparisons, but rather with `new`. If `new(struct{}) == new(struct{})` is true, then they point to the same value in memory; that's all it means. Pointer comparisons are still valid in that case, it's just that the behavior of `new` can vary.

Sure. That seems to be a distinction without a difference to me. Note that I didn't say it's a deficiency, quite the opposite. I said that pointer-comparisons work just fine, as they have (mostly) unambiguous and intuitive semantics. But the same is not true for slices and maps.
 
 
For slices, even with your definition, there are questions. For example, should s[0:0] == s[0:0:0], for non-empty slices? That is, should capacity matter? Should make([]T, 0) == make([]T, 0)? That is, what if the "pointer" in the slice header doesn't actually mean anything, as the slice has capacity 0?


I specified slice comparisons like this:

> Slices: Compare the corresponding `runtime.slice` (non-pointer struct) values. The time complexity is constant.

Yes. I understand what you suggested and I understood how it *would* work, if implemented that way. But why is that the best way to compare them? Doing it that way has a bunch of semantic implications, some of which are perhaps counterintuitive, which I tried to mention.


I explained that in detail in the subsequent paragraphs.
 
Note that the language doesn't mention "runtime.slice", BTW. So, even if we did it that way, we would have to phrase it as "two slices are equal, if they point to the same underlying array and have the same length and capacity", or somesuch. This would still not define whether make([]T, 0) == make([]T, 0), though.


Perhaps I'm at fault for not being precise in my wording, but the intent was to specify slice comparisons as encompassing the array pointer, the length, and the capacity. It doesn't matter if other fields are added later, or if the type is renamed.
 
So, even if we accepted that this was the "right" way to do it, it would still leave at least one question open.
 
I assume `make([]T, 0)` sets the array pointer to nil, because `reflect.DeepEqual` says two of those expressions are equal.

No, it does not. Otherwise, `make([]T, 0)` would be equal to `nil`. `make([]T, 0)` is allowed to use a constant element pointer or not - it must point to a zero-sized array. That pointer must be distinguishable from a nil slice, but can be arbitrary apart from that. Note, BTW, that the language also does not *force* an implementation to even use that representation. A slice could well be represented as `struct{ Ptr *T; Len int; Cap int; NonNil bool }`, if the implementation wants to. Or a myriad of other ways.

reflect.DeepEqual does not compare the pointers for capacity 0 slices, BTW. To drive home how ambiguous this question actually is, given that you assumed it would.
 

It only depends on what the array pointer is. That behavior actually matches my original specification: "Slice values are equal if they have the same array pointer, length, and capacity." So, two of the `make([]T, 0)` expressions should not be equal if each one gets a unique array pointer, which is consistent with channel comparisons too. Apparently this behavior is detailed somewhere that you've found, I assume in the language spec. I never allocate zero-length slices anyway, and I doubt many others do either, so it's probably a rare case anyway.

It would be a bad idea to blindly compare two slices that came from disparate sources, because the odds of them being shallowly equal are infinitesimal. If I read a `[]int` off my hard drive, and another off the Internet, I shouldn't compare them blindly, because the odds are extremely good that their array pointers, lengths, or capacities aren't equal, so the comparison won't likely reflect the equality of the elements. The same is true for pointers: if I get a pointer type from hard drive data, and another from Internet data, what exactly do you think the odds are that those are actually going to be the same value in memory? Shallow comparisons are most useful when you stick a value somewhere, and then go fishing for it later, or encounter it later. In those cases, you don't care about equivalence, you just care about identity, because you know it's there. Deep comparisons are best for data from disparate sources.
 
We could do deep comparisons for pointers

Not without allowing for a bunch of unpleasant consequences, at least. For example, this code would hang forever:

type T *T
var t1, t2 T
t1, t2 = &t2, &t1
fmt.Println(t1 == t2)

Note that it is *very common* to have circular pointer structures. For example, in a doubly-linked list.

Right. I didn't mean it would be useful, or good. The same argument applies to slice comparisons.
 
You received this message because you are subscribed to a topic in the Google Groups "golang-nuts" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/golang-nuts/b-WtVh3H_oY/unsubscribe.
To unsubscribe from this group and all its topics, send an email to golang-nuts...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/golang-nuts/CAEkBMfH%3Dy8re0yXYXr%3DUEYnBVsRxouM98ozyOH6uO9jbtueFSA%40mail.gmail.com.

Will Faught

unread,
May 4, 2022, 2:49:52 AM5/4/22
to Jan Mercl, Rob Pike, Kurtis Rader, golang-nuts
Well, I agree! :) Comparisons should be shallow where possible for every type, including slices and maps. That's my initial argument.

Axel Wagner

unread,
May 4, 2022, 3:14:11 AM5/4/22
to Will Faught, golang-nuts
On Wed, May 4, 2022 at 8:42 AM Will Faught <wi...@willfaught.com> wrote:
Yes. I understand what you suggested and I understood how it *would* work, if implemented that way. But why is that the best way to compare them? Doing it that way has a bunch of semantic implications, some of which are perhaps counterintuitive, which I tried to mention.
I explained that in detail in the subsequent paragraphs.

I don't believe you did. Looking over it again, those paragraphs explain *what* you are proposing. Not *why* those semantics are the right ones.

One very specific case is that your semantics consider a[0:0:0] == b[0:0:0], if a and b come from different slices even though the difference between them is unobservable without unsafe. You make the argument that capacity should matter (which is one of the questions I posed), because of observability. So, by the same token, that difference shouldn't matter here, should it?

But really, the point isn't "which semantics are right". The point is "there are many different questions which we could argue about in detail, therefore there doesn't appear to be a single right set of semantics".

 
Note that the language doesn't mention "runtime.slice", BTW. So, even if we did it that way, we would have to phrase it as "two slices are equal, if they point to the same underlying array and have the same length and capacity", or somesuch. This would still not define whether make([]T, 0) == make([]T, 0), though.


Perhaps I'm at fault for not being precise in my wording, but the intent was to specify slice comparisons as encompassing the array pointer, the length, and the capacity. It doesn't matter if other fields are added later, or if the type is renamed.

The language spec does not know about array pointers. I'm not saying "a field could be added later". I'm saying "one of the fields you are talking about does not exist, as it is a choice made by gc, not something prescribed by the spec".

reflect.DeepEqual does not compare the pointers for capacity-0 slices, BTW. To drive home how ambiguous this question actually is, given that you assumed it would.
 

It only depends on what the array pointer is. That behavior actually matches my original specification: "Slice values are equal if they have the same array pointer, length, and capacity."

No, it does not. The example I linked to uses different array pointers, but compares as equal by reflect.DeepEqual.
 
So, two of the `make([]T, 0)` expressions should not be equal if each one gets a unique array pointer, which is consistent with channel comparisons too.

Channels compare as equal "if they were created by the same call to make or if both have value nil". We can't do the same for slices, as slices can be manipulated after calling make (by re-slicing).

Note, FTR, that the spec does *not* talk about channels being pointers. Which goes to my point above - we can't define slice comparisons in terms of what gc does, we have to define it based on what the language spec says.

It would be a bad idea to blindly compare two slices that came from disparate sources, because the odds of them being shallowly equal are infinitesimal. If I read a `[]int` off my hard drive, and another off the Internet, I shouldn't compare them blindly, because the odds are extremely good that their array pointers, lengths, or capacities aren't equal, so the comparison won't likely reflect the equality of the elements.

To me, this says that *in most cases* the semantics you are suggesting are really unhelpful. Imagine we would say "floats are comparable, but only if they were both created by mathematical operations from the same value - if you read one float from the internet and another from disk, don't compare them, they might have the same value, but compare unequal". That would be ridiculous, wouldn't it?
 
The same is true for pointers: if I get a pointer type from hard drive data, and another from Internet data, what exactly do you think the odds are that those are actually going to be the same value in memory? Shallow comparisons are most useful when you stick a value somewhere, and then go fishing for it later, or encounter it later. In those cases, you don't care about equivalence, you just care about identity, because you know it's there. Deep comparisons are best for data from disparate sources.

*Exactly*. There is no single notion of comparison, which is always (or even "most of the time") what you'd want. Therefore, it's best not to have any notion of comparison, lest people shoot themselves in the foot.
  
We could do deep comparisons for pointers

Not without allowing for a bunch of unpleasant consequences, at least. For example, this code would hang forever:

type T *T
var t1, t2 T
t1, t2 = &t2, &t1
fmt.Println(t1 == t2)

Note that it is *very common* to have circular pointer structures. For example, in a doubly-linked list.

Right. I didn't mean it would be useful, or good. The same argument applies to slice comparisons.

It seems to me, that you are exactly confirming our point here, which is that for pointers, there is a single, unambiguously good way to define comparisons.

Note that *originally*, you said that there wasn't - that if we say "slices can not be compared, because there is no unambiguously good way to do so", we should also disallow pointers, because there is no unambiguously good way to do so.

Well, there isn't. The alternative you suggested is not good.

For slices, there *are* many differently good ways to define that comparison - and the specific bounds you set, seem to allow for unintuitive results, such as a[0:0:0] != b[0:0:0], even if their difference can't be observed.

Axel Wagner

unread,
May 4, 2022, 3:23:20 AM5/4/22
to golang-nuts
There are a couple of typos in this mail, sorry:


On Wed, May 4, 2022 at 9:13 AM Axel Wagner <axel.wa...@googlemail.com> wrote:
One very specific case is that your semantics consider a[0:0:0] == b[0:0:0],

s/==/!=/ 

Well, there isn't. The alternative you suggested is not good.

s/isn't/is/

There are more, but these stuck out like a sore thumb, as they inverted the meaning of what I was trying to say.

Dan Kortschak

unread,
May 4, 2022, 3:35:46 AM5/4/22
to golan...@googlegroups.com
On Wed, 2022-05-04 at 07:58 +0200, 'Axel Wagner' via golang-nuts wrote:
> > As a tangent, I don't understand why this wasn't made unambiguous
> > in the language spec. Why not have `new(struct{})` always allocate
> > a new pointer? Who's allocating all these empty structs on the heap
> > where this is something that needs to be optimized for? Is that
> > really worth complicating the language? 🤔
> >
>
> I don't know why that decision was made. I do believe there are some
> less obvious cases, where you at least have to add special casing in
> the implementation (e.g. make([]T, x) would have to check at runtime
> if x is 0). But I agree that it would probably be okay to change the
> spec here.

For zero-sized types (as opposed to zero-length values like a zero-length
[]T), there is a good reason not to allocate a new zero-sized slot on
the heap: zero-sized values are commonly used as tokens, the language
allows them, and they don't store anything, so it is fruitless work to
go through the motions of asking the allocator for a pointer when the
same one can always be used at close to zero cost (a simple "is sizeof
zero?" check).

For history, the treatment of this was introduced here
https://codereview.appspot.com/10136043 (in malloc.goc).


Robert Engels

unread,
May 4, 2022, 8:22:51 AM5/4/22
to Will Faught, Jan Mercl, Rob Pike, Kurtis Rader, golang-nuts
Seems easier to move to a Go without operators and do everything with functions and generics. This is essentially the Java model.

This is pretty much the approach that the sync, sort, etc packages took and with generics you can have type safety and less code duplication. 

On May 4, 2022, at 1:49 AM, Will Faught <will....@gmail.com> wrote:



jake...@gmail.com

unread,
May 4, 2022, 12:34:10 PM5/4/22
to golang-nuts
For a discussion of this issue as it relates to slices you might find this thread worth reading through: https://groups.google.com/g/golang-nuts/c/ajXzEM6lqJI/m/BmSu1m9PAgAJ

That was 2016, but not much has really changed since then on this issue.

On Monday, May 2, 2022 at 10:43:53 PM UTC-4 will....@gmail.com wrote:
All types should have unrestricted comparisons (`==`, `!=`), but a few pre-declared types don't. Adding them would bridge a semantic gap between pre-declared and user-declared types, enabling all types to be used as map keys, and otherwise make reasoning about them more consistent and intuitive.

For the types that don't yet have unrestricted comparisons:
    • Functions: Compare the corresponding memory addresses. The time complexity is constant.
    • Maps: Compare the corresponding `*runtime.hmap` (pointer) values. The time complexity is constant.
    • Slices: Compare the corresponding `runtime.slice` (non-pointer struct) values. The time complexity is constant.
      Examples:

      ```
      // Functions

      func F1() {}
      func F2() {}

      var _ = F1 == F1 // True
      var _ = F1 != F2 // True

      // Maps

      var M1 = map[int]int{}
      var M2 = map[int]int{}

      var _ = M1 == M1 // True
      var _ = M1 != M2 // True

      // Slices


      var S1 = make([]int, 2)
      var S2 = make([]int, 2)

      var _ = S1 == S1 // True
      var _ = S1 != S2 // True

      var _ = S1 == S1[:] // True because the lengths, capacities, and pointers are equal
      var _ = S1 != S1[:1] // True because the lengths aren't equal
      var _ = S1[:1] != S1[:1:1] // True because the capacities aren't equal
      var _ = S1 != append(S1, 0)[:2:2] // True because the pointers aren't equal
      ```

      Function and map equality are consistent with channel equality, where non-nil channels are equal if they were created by the same call to `make`. Function values are equal if they were created by the same function literal or declaration. Map values are equal if they were created by the same map literal or the same call to `make`. Functions that are equal will always produce the same outputs and side effects given the same inputs and conditions; however, the reverse is not necessarily true. Maps that are equal will always contain the same keys and values; however, the reverse is not necessarily true.

      Slice equality is consistent with map equality. Slice values are equal if they have the same array pointer, length, and capacity. Slices that are equal will always have equal corresponding elements. However, like maps, slices that have equal corresponding elements are not necessarily equal.

      This approach to comparisons for functions, maps, and slices makes all values of those types immutable, and therefore usable as map keys.

      This would obviate the `comparable` constraint, since all type arguments would now satisfy it. In my opinion, this would make the language simpler and more consistent. Type variables could be used with comparison operations without needing to be constrained by `comparable`.

      If you think slice equality should incorporate element equality, here's an example for you:


      ```
      type Slice1000[T any] struct {
          xs *[1000]T
          len, cap int
      }

      func (s Slice1000[T]) Get(i int) T {
          // ...
          return s.xs[i]
      }

      func (s Slice1000[T]) Set(i int, x T) {
          // ...
          s.xs[i] = x
      }

      var xs1, xs2 [1000]int

      var a = Slice1000[int]{&xs1, 1000, 1000}
      var b = Slice1000[int]{&xs2, 1000, 1000}
      var c = a == b
      ```

      Do you expect `c` to be true? If not (it's false, by the way), then why would you expect `make([]int, 2) == make([]int, 2)` to be true?

      Any thoughts?

      jake...@gmail.com

      unread,
      May 4, 2022, 12:36:30 PM5/4/22
      to golang-nuts
      On Wednesday, May 4, 2022 at 12:34:10 PM UTC-4 jake...@gmail.com wrote:
      For a discussion of this issue as it relates to slices you might find this thread worth reading through: https://groups.google.com/g/golang-nuts/c/ajXzEM6lqJI/m/BmSu1m9PAgAJ

      That was 2016, but not much has really changed since then on this issue.
       
      Oops, that links to the middle of the thread. This is slightly better:  https://groups.google.com/g/golang-nuts/c/ajXzEM6lqJI

      Ian Lance Taylor

      unread,
      May 4, 2022, 3:46:51 PM5/4/22
      to Axel Wagner, golang-nuts
      On Tue, May 3, 2022 at 10:59 PM 'Axel Wagner' via golang-nuts
      <golan...@googlegroups.com> wrote:
      >
      >> As a tangent, I don't understand why this wasn't made unambiguous in the language spec. Why not have `new(struct{})` always allocate a new pointer? Who's allocating all these empty structs on the heap where this is something that needs to be optimized for? Is that really worth complicating the language? 🤔
      >
      >
      > I don't know why that decision was made. I do believe there are some less obvious cases, where you at least have to add special casing in the implementation (e.g. make([]T, x) would have to check at runtime if x is 0). But I agree that it would probably be okay to change the spec here.

      Zero sized values are useful in Go, because they can have methods and
      they can be stored in interfaces. But if the addresses of zero-sized
      values must be distinct, then although zero-sized values appear to
      have zero size they must in fact be implemented as taking up one byte.
      For example, given `struct { a, b struct{} }`, the addresses of `a`
      and `b` must be distinct, so that struct is actually two bytes. So,
      sure, we could change it. But there are surprising results either
      way.

      (Historically this was introduced for https://go.dev/issue/2620.)

      Ian

      Axel Wagner

      unread,
      May 4, 2022, 3:53:19 PM5/4/22
      to golang-nuts
      On Wed, May 4, 2022 at 9:46 PM Ian Lance Taylor <ia...@golang.org> wrote:
      Zero sized values are useful in Go, because they can have methods and
      they can be stored in interfaces. But if the addresses of zero-sized
      values must be distinct, then although zero-sized values appear to
      have zero size they must in fact be implemented as taking up one byte.
      For example, given `struct { a, b struct{} }`, the addresses of `a`
      and `b` must be distinct, so that struct is actually two bytes.  So,
      sure, we could change it.  But there are surprising results either
      way.

      (Historically this was introduced for https://go.dev/issue/2620.)

The other side of the question is "why not define them to always be the same"? It seems a fairly simple optimization.
I guess it can be argued that `a[0:0:0]` points to a zero-sized object, so it should always be the same, but that's expensive to do, as every slicing operation would have to check for 0. But that seems a weak argument.
Is there a better one?
       

      Ian

      Ian Lance Taylor

      unread,
      May 4, 2022, 4:01:24 PM5/4/22
      to Will Faught, Rob Pike, Kurtis Rader, golang-nuts
      On Tue, May 3, 2022 at 11:01 PM Will Faught <will....@gmail.com> wrote:
      >
      > On Tue, May 3, 2022 at 7:27 PM Ian Lance Taylor <ia...@golang.org> wrote:
      >>
      >> Does a program like this print true or false?
      >>
      >> func F() func() int { return func() int { return 0 } }
      >> func G() { fmt.Println(F() == F()) }
      >>
      >
      > It would print false, because the function literal creates a new allocation (according to the rule I sketched out). I can see the desire to optimize that, but personally when I write code like that, I'm thinking, "and then return a new function." Causing an allocation isn't surprising behavior here, and so neither is uniqueness in terms of comparisons.
      >
      >>
      >> What about a program like this:
      >>
      >> func H(i int) func() *int { return func() *int { return &i } }
      >> func J() { fmt.Println(H(0) == H(1)) }
      >>
      >
      > It would print false for the same reason.
      >
      >>
      >> Whatever we define for cases like this some people will be ready to
      >> argue for a different choice. The costs of forcing a decision exceed
      >> the benefits.
      >
      >
      > So on the balance, the cost of making a decision is worth it for something big like dependencies or generics, but not function equality. Well, I guess that's fair enough, but it seems like one could use that kind of argument to undermine any language change, though, including dependencies and generics. It doesn't seem like the function equality rule I sketched out would add much, if any, language complexity. It's only one sentence: "Function values are equal if they were created by the same function literal or declaration."

      I don't think that is clear, because it seems to me that each call to
      F(), above, returns the same function literal, yet you said that F()
      == F() is false.

      As an implementation note, currently F() does not allocate. If we
      require that F() != F(), then calling F() must allocate. Adding an
      allocation there is straightforward, but even if we permitted function
      comparisons I think that function literals will be used far more than
      they are compared, so adding an allocation seems like a poor use of
      resources.



      >> > Regarding expectations, many new Java programmers came from JavaScript (like myself), so the confusion is understandable, but it's not something that necessarily needs to be considered. Arguably, old Java programmers would find `==` confusing for structs, since it doesn't compare references. Bad assumptions are best prevented by proper education and training, not by omitting language features. Wrong expectations aren't the same as foot-guns.
      >>
      >> I don't agree. Unexpected behavior is a footgun.
      >
      >
      > I wrote that wrong expectations aren't foot-guns, not that unexpected behaviors aren't foot-guns. Wrong expectations, as in "I can call this pointer-receiver method on this unaddressable value," or "since zero values are useful, I can set keys and values in this zero-value map." Downloading a compiler for some new language I haven't bothered to learn, typing in Java-like stuff, and then being upset when it doesn't work isn't a problem of unexpected behavior, it's a problem of wrong expectations (usually misunderstandings or ignorance).

      I don't want to get into a terminology war, but I don't understand the
      distinction that you are drawing. If I have "wrong expectations,"
      then what actually happens when I try something is "unexpected
      behavior." It's literally not what I expected.

      It's reasonable to say that if you are using a new language you should
      read the friendly manual. But it's also reasonable to say that as
      much as possible languages should be unsurprising. Computer languages
      build on each other. Go has obvious debts to languages like C and
      Oberon, and it would be confusing if constructs in Go acted
      differently than the identical constructs in those languages.


      >> Go is intended to
      >> be a simple language. When special explanation is required, something
      >> has gone wrong.
      >>
      >
      > The Go language spec is full of special explanations. The section on comparisons is quite detailed and complicated. I recently had to ask here why the range operation doesn't work for type set unions of slices and maps, which you very kindly answered, if I remember correctly. How is slice equality different in terms of special explanation?

      The fact that Go is imperfect, which it is, is not an argument for
      adding further imperfections.

      > I've argued that slice comparisons make Go even simpler and more consistent with various examples, analogies, and so on. Please see my response to Axel, if you haven't already. Do you have a specific counter argument to any specific argument that I've made regarding simplicity or consistency?

      All changes to languages have costs and benefits. Your arguments
      about simplicity and consistency are benefits. The counter-arguments
      that I and several others have been making are costs. In deciding
      whether to change the language we must weigh those costs and benefits
      and decide which are more important.

      To put it another way, I don't have specific counter arguments to your
      specific arguments about simplicity and consistency. But that doesn't
      mean that I agree, it just means that I think that other consequences
      are more important.

      Ian

      Ian Lance Taylor

      unread,
      May 4, 2022, 4:10:52 PM5/4/22
      to Jan Mercl, Will Faught, Rob Pike, Kurtis Rader, golang-nuts
      On Tue, May 3, 2022 at 11:13 PM Jan Mercl <0xj...@gmail.com> wrote:
      >
      > On Wed, May 4, 2022 at 4:27 AM Ian Lance Taylor <ia...@golang.org> wrote:
      >
      > > In saying this I don't at all claim that Go is perfect. There are
      > > places where we made mistakes.
      >
      > May I please ask you to share what you personally consider a mistake
      > and, if possible, what would you change if you can, say, travel back
      > in time?

      I can only give some personal opinions. Others will disagree. And my
      opinions change over time, and could well be mistaken.

      I think that deciding that := declares a single variable in a
      for/range statement was a mistake (https://go.dev/issue/20733).

      I think that naked return statements in functions with named result
      parameters was a mistake (but named result parameters in themselves
      are useful).

      I think that ranging over strings may have been a mistake. I go back
      and forth on that.

      I think that wrapping on integer overflow was a mistake (it should panic).

      I think that making "var i = 1 / 0" a compile-time error was a
      mistake. Similarly for `"abc"[3]`.

      Let's not even get started on the standard library.

      Ian

      Ian Lance Taylor

      unread,
      May 4, 2022, 4:18:26 PM5/4/22
      to Axel Wagner, golang-nuts
      Using the reflect package we can walk through the fields of a struct
      and get the address based on the size and alignment of earlier fields
      in the struct. But if the address of all zero-sized values is the
      same, then that doesn't work.

      It's not a great argument, but it's not nothing.

Also it's a little weird that given `var v1, v2 struct { f1 int; f2
struct{}; f3 int }` then &v1.f2 == &v2.f2, but maybe that's not much
weirder than how it works today.

      Ian

      Will Faught

      unread,
      May 4, 2022, 9:11:42 PM5/4/22
      to Axel Wagner, golang-nuts
      On Wed, May 4, 2022 at 12:13 AM Axel Wagner <axel.wa...@googlemail.com> wrote:
      On Wed, May 4, 2022 at 8:42 AM Will Faught <wi...@willfaught.com> wrote:
      Yes. I understand what you suggested and I understood how it *would* work, if implemented that way. But why is that the best way to compare them? Doing it that way has a bunch of semantic implications, some of which are perhaps counterintuitive, which I tried to mention.
      I explained that in detail in the subsequent paragraphs.

      I don't believe you did. Looking over it again, those paragraphs explain *what* you are proposing. Not *why* those semantics are the right ones.


      I wrote in those subsequent paragraphs:

      The reason to include capacity in comparisons, aside from it being convenient when doing comparisons, is that the capacity is an observable attribute of slices in regular code. Programmers are encouraged to reason about slice capacity, so it should be included in comparisons. `cap(S1[:1]) != cap(S1[:1:1])` is true, therefore `S1[:1] != S1[:1:1]` should be true, even though `len(S1[:1]) == len(S1[:1:1])` is true.

      Do you agree that is a good thing, yes or no, and if not, why?

      I wrote in the proposal:

      This approach to comparisons for functions, maps, and slices makes all values of those types immutable, and therefore usable as map keys.

      Do you agree that is a good thing, yes or no, and if not, why?

      I wrote in the proposal an example of how slices work in actual Go code, then asked:

      Do you expect `c` to be true? If not (it's false, by the way), then why would you expect `make([]int, 2) == make([]int, 2)` to be true?

      What was your answer? Yes or no? This isn't rhetorical at this point, I'm actually asking, so please answer unambiguously yes or no.

      If your answer was yes, then you don't understand Go at a basic level. If your answer was no, then my argument is that it would be consistent for comparisons of built-in slices to work the same way.

      Do you agree that is a good thing, yes or no, and if not, why?

      One very specific case is that your semantics consider a[0:0:0] == b[0:0:0], if a and b come from different slices even though the difference between them is unobservable without unsafe. You make the argument that capacity should matter (which is one of the questions I posed), because of observability. So, by the same token, that difference shouldn't matter here, should it?


      I don't follow why `a[0:0:0] == b[0:0:0]` would be true if they have different array pointers. I'm arguing that they shouldn't be equal because the array pointers are different. What are you saying is observable, but not being accounted for by this proposal? Note that `a[0] = 0; b[0] = 0; a[0] = 1; b[0] == 1` can observe whether the array pointers are the same.
       
      But really, the point isn't "which semantics are right". The point is "there are many different questions which we could argue about in detail, therefore there doesn't appear to be a single right set of semantics".


      I've already addressed this point directly, in a response to you. You commented on the particular example I'd given (iterating strings), but not on the general point. I'd be interested in your thoughts on that now. Here it is again:

      Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible. For example, the range operation can produce per iteration an element index and an element value for slices, but a byte index and a rune value for strings. Personally, I found the byte index counterintuitive, as I expected the value to count runes like slice elements, but upon reflection, it makes sense, because you can easily count iterations yourself to have both byte indexes and rune counts, but you can't so trivially do the opposite. Should we omit ranging over strings entirely just because someone, somewhere, somehow might have a minority intuition, or if something is generally counterintuitive, but still the best approach?

       
      Note that the language doesn't mention "runtime.slice", BTW. So, even if we did it that way, we would have to phrase it as "two slices are equal, if they point to the same underlying array and have the same length and capacity", or somesuch. This would still not define whether make([]T, 0) == make([]T, 0), though.


      Perhaps I'm at fault for not being precise in my wording, but the intent was to specify slice comparisons as encompassing the array pointer, the length, and the capacity. It doesn't matter if other fields are added later, or if the type is renamed.

      The language spec does not know about array pointers. I'm not saying "a field could be added later". I'm saying "one of the fields you are talking about does not exist, as it is a choice made by gc, not something prescribed by the spec".



      A slice is a descriptor for a contiguous segment of an underlying array and provides access to a numbered sequence of elements from that array.

      It doesn't matter to me whether we refer to the slice's array as a "descriptor" of an array, or it "points" to an array. It refers to a specific array in memory, period. It's trivial enough to specify that slice comparisons have a way to distinguish between arrays in memory, whether that's through pointers or "just knowing" in the same way that they "just know" how to access and use the array in the first place. I refer to it as a pointer because I know that's how it works under the hood, and people know how pointers work.

      Does this fully address your point, yes or no, and if not, why?

      reflect.DeepEqual does not compare the pointers for capacity 0 slices, BTW. To drive home how ambiguous this question actually is, given that you assumed it woudl.
       


The proposal doesn't depend on reflection or unsafe behavior. It was just a lazy way of mine to inspect what Go does in a corner case. I think making it clear what `make` does when the length is 0 is the solution to this, if it isn't already clear.
       
      It only depends on what the array pointer is. That behavior actually matches my original specification: "Slice values are equal if they have the same array pointer, length, and capacity."

      No, it does not. The example I linked to uses different array pointers, but compares as equal by reflect.Equal.
       

Again, it doesn't matter what `reflect.DeepEqual` does. I referred to it as an example of deep comparisons, that's all. If `reflect.DeepEqual` doesn't consider array pointers in slices (which makes sense, since it does a deep/logical comparison, not a shallow/literal one), then so be it; that's not what's being proposed here.
       
      So, two of the `make([]T, 0)` expressions should not be equal if each one gets a unique array pointer, which is consistent with channel comparisons too.

      Channels compare as equal "if they were created by the same call to make or if both have value nil". We can't do the same for slices, as slices can be manipulated after calling make (by re-slicing).

      Note, FTR, that the spec does *not* talk about channels being pointers. Which goes to my point above - we can't define slice comparisons in terms of what gc does, we have to define it based on what the language spec says.



      Two channel values are equal if they were created by the same call to make
       
      Two of the `make([]T, 0)` expressions not being equal would be consistent with channels in that the slices they evaluate to were created by different calls to `make`.

      It would be a bad idea to blindly compare two slices that came from disparate sources, because the odds of them being shallowly equal are infinitesimal. If I read a `[]int` off my hard drive, and another off the Internet, I shouldn't compare them blindly, because the odds are extremely good that their array pointers, lengths, or capacities aren't equal, so the comparison won't likely reflect the equality of the elements.

      To me, this says that *in most cases* the semantics you are suggesting is really unhelpful. Imagine we would say "floats are comparable, but only if they were both created by mathematical operations from the same value - if you read one float from the internet and another from disk, don't compare them, they might have the same value, but compare unequal". That would be ridiculous, wouldn't it?
       

      It depends on what the floats mean. An example with integers is `token.Pos`: instead of having every token position stored in an AST contain the file name, file offset, line, and column, the offset range for every `token.File` is tracked in a `token.FileSet`, which tracks line endings separately. A Pos is a plain integer that points into the global offset range in the FileSet, which can trivially be mapped into the corresponding File, and into a corresponding Position that contains the associated offset, line, and column. The upshot is we can store unambiguous file offsets in the AST using only a small integer, and use it later to retrieve richer info when needed. For every file offset, there is an unambiguous Pos that points to it.

      Does it make sense to compare a Pos from one FileSet with a Pos from another FileSet? No. `pos1 == pos2` might well type-check as valid, but it's meaningless. It's like comparing different units of measurement.

      And so it is with slices coming from disparate sources.

      Floats don't contain pointers. Slices do. I addressed this in points elsewhere. Please address those there.
       
      The same is true for pointers: if I get a pointer type from hard drive data, and another from Internet data, what exactly do you think the odds are that those are actually going to be the same value in memory? Shallow comparisons are most useful when you stick a value somewhere, and then go fishing for it later, or encounter it later. In those cases, you don't care about equivalence, you just care about identity, because you know it's there. Deep comparisons are best for data from disparate sources.

      *Exactly*. There is no single notion of comparison, which is always (or even "most of the time") what you'd want. Therefore, it's best not to have any notion of comparison, lest people shoot themselves in the foot.
        

      The point is that there are two ways to compare them: shallow (pointer values themselves) and deep (comparing the dereferenced values). If we made `==` do deep comparisons for pointers, we'd have no way to do shallow comparisons. Shallow comparisons still allow for deep comparisons, but not the other way around.
       
      We could do deep comparisons for pointers

      Not without allowing for a bunch of unpleasant consequences, at least. For example, this code would hang forever:

      type T *T
      var t1, t2 T
      t1, t2 = &t2, &t1
      fmt.Println(t1 == t2)

      Note that it is *very common* to have circular pointer structures. For example, in a doubly-linked list.

      Right. I didn't mean it would be useful, or good. The same argument applies to slice comparisons.

      It seems to me that you are exactly confirming our point here, which is that for pointers, there is a single, unambiguously good way to define comparisons.


      No, that doesn't confirm that for pointers, there is a single, unambiguous way to compare them. Your example shows why shallow comparison for pointers is good. But there are still ways (perhaps more than one) to compare them deeply (e.g. as DeepEqual does). The same is true for slices.
       
      Note that *originally*, you said that there wasn't - that if we say "slices can not be compared, because there is no unambiguously good way to do so", we should also disallow pointers, because there is no unambiguously good way to do so.

      Well, there isn't. The alternative you suggested is not good.


      Originally I said there wasn't what? An unambiguously good way to define comparisons for pointers? I never said that. I've said that shallow comparisons, where possible (arrays don't have a shallow version, for example), are best. I probably argued at some point that slice comparisons should be shallow because pointer comparisons are shallow, so if you think slice comparisons should be deep, then you should also think that pointer comparisons should be deep, or something like that. Please provide a quotation for context.
       
      For slices, there *are* many differently good ways to define that comparison - and the specific bounds you set, seem to allow for unintuitive results, such as a[0:0:0] != b[0:0:0], even if their difference can't be observed.
       

      I don't understand what these a and b slices are supposed to show. If the arrays for a and b are the same, you can observe that. If they're different, you can observe that. Everything is observable about those three fields in slices.

      Will Faught

      unread,
      May 4, 2022, 9:17:42 PM
      to Dan Kortschak, golang-nuts
      Can you explain what you mean by tokens? Do you mean something like:

      ```
      var red, green, blue = new(struct{}), new(struct{}), new(struct{})
      ```

      If so, I don't see how that's useful. Why not use integers or bytes instead?

      --
      You received this message because you are subscribed to a topic in the Google Groups "golang-nuts" group.
      To unsubscribe from this topic, visit https://groups.google.com/d/topic/golang-nuts/b-WtVh3H_oY/unsubscribe.
      To unsubscribe from this group and all its topics, send an email to golang-nuts...@googlegroups.com.
      To view this discussion on the web visit https://groups.google.com/d/msgid/golang-nuts/e3cff142d482ad17973538256a6ded7923d9a2e5.camel%40kortschak.io.

      Will Faught

      unread,
      May 4, 2022, 9:19:40 PM
      to Robert Engels, Jan Mercl, Rob Pike, Kurtis Rader, golang-nuts
      I don't follow how that's related to slice comparisons, aside from the fact that slice comparison could be a method, but an Equals() method would be left out to be consistent with how it is today.

      Robert Engels

      unread,
      May 4, 2022, 9:53:49 PM
      to Will Faught, Jan Mercl, Rob Pike, Kurtis Rader, golang-nuts
      I was only pointing out that if you use method-based operations you can do whatever comparisons you find appropriate.

      On May 4, 2022, at 8:19 PM, Will Faught <will....@gmail.com> wrote:

      

      Will Faught

      unread,
      May 5, 2022, 12:21:50 AM
      to Ian Lance Taylor, Axel Wagner, golang-nuts
      Makes sense, although I chuckled when I realized that the creator of that issue didn't actually have his issue resolved.

      Creator: This behavior is undefined. Which one should it be?
      Go Team: Both.

      😆


      Ian Lance Taylor

      unread,
      May 5, 2022, 1:41:24 AM
      to Will Faught, Axel Wagner, golang-nuts
      On Wed, May 4, 2022 at 9:21 PM Will Faught <wi...@willfaught.com> wrote:
      >
      > Makes sense, although I chuckled when I realized that the creator of that issue didn't actually have his issue resolved.

      I'll just note that I was the creator of that issue.

      Ian

      Jan Mercl

      unread,
      May 5, 2022, 2:39:19 AM
      to Ian Lance Taylor, Will Faught, Rob Pike, Kurtis Rader, golang-nuts
      On Wed, May 4, 2022 at 10:10 PM Ian Lance Taylor <ia...@golang.org> wrote:

      > I can only give some personal opinions. Others will disagree. And my
      > opinions change over time, and could well be mistaken.
      >
      > I think that deciding that := declares a single variable in a
      > for/range statement was a mistake (https://go.dev/issue/20733).
      >
      > I think that naked return statements in functions with named result
      > parameters was a mistake (but named result parameters in themselves
      > are useful).
      >
      > I think that ranging over strings may have been a mistake. I go back
      > and forth on that.
      >
      > I think that wrapping on integer overflow was a mistake (it should panic).
      >
      > I think that making "var i = 1 / 0" a compile-time error was a
      > mistake. Similarly for `"abc"[3]`.
      >
      > Let's not even get started on the standard library.

      Thanks!

      Axel Wagner

      unread,
      May 5, 2022, 5:54:03 AM
      to Will Faught, golang-nuts
      On Thu, May 5, 2022 at 3:11 AM Will Faught <wi...@willfaught.com> wrote:
      The reason to include capacity in comparisons, aside from it being convenient when doing comparisons, is that the capacity is an observable attribute of slices in regular code. Programmers are encouraged to reason about slice capacity, so it should be included in comparisons. `cap(S1[:1]) != cap(S1[:1:1])` is true, therefore `S1[:1] != S1[:1:1]` should be true, even though `len(S1[:1]) == len(S1[:1:1])` is true.

      Do you agree that is a good thing, yes or no, and if not, why?

      Sure.
       
      This approach to comparisons for functions, maps, and slices makes all values of those types immutable, and therefore usable as map keys.

      Do you agree that is a good thing, yes or no, and if not, why?

      Sure.

      I wrote in the proposal an example of how slices work in actual Go code, then asked:

      Do you expect `c` to be true? If not (it's false, by the way), then why would you expect `make([]int, 2) == make([]int, 2)` to be true?

      What was your answer? Yes or no? This isn't rhetorical at this point, I'm actually asking, so please answer unambiguously yes or no.

      To be clear, demanding an unambiguous answer doesn't make a question unambiguous. If you'd ask "is light a wave or a particle, please answer yes or no", my response would be to stand up and leave the room, because there is no way to converse within the rules you are setting. So, if you insist on these rules, I will try my best to leave the room, metaphorically speaking, and to write you off as impossible to have a conversation with.

      My position is that the comparison should be disallowed. Therefore, I can't answer yes or no.

      I think "both slices in the comparison contain the same elements in the same order" is a strong argument in favor of making the comparison be true.
      I think "this would make it possible for comparisons to hang the program" is a strong argument in favor of making the comparison be false.
      I think that the fact that there are strong arguments in favor of it being true and strong arguments in favor of it being false is itself a strong argument for not allowing it.

      If your answer was yes, then you don't understand Go at a basic level.

      Please don't say things like this. You don't know me well enough to judge my understanding of Go. If you did, I feel confident that you wouldn't say this. It is just a No True Scotsman fallacy at best and a baseless insult at worst.
       
      I don't follow why `a[0:0:0] == b[0:0:0]` would be true if they have different array pointers.

      Because above, you made the argument that focusing the definition of equality on observable differences is a good thing. The difference between a[0:0:0] and b[0:0:0] is unobservable (without using unsafe), therefore they should be considered equal.

      Note that `a[0] = 0; b[0] = 0; a[0] = 1; b[0] == 1` can observe whether the array pointers are the same.

      No. This code panics, if the capacity of a and b is 0 - which it is for a[0:0:0] and b[0:0:0]. There is no way to observe if two capacity 0 slices point at the same underlying array, without using unsafe.

      Feel free to prove me wrong, by filling in Eq so this program prints "true false", without using unsafe: https://go.dev/play/p/xqj_DhBi392

       
      But really, the point isn't "which semantics are right". The point is "there are many different questions which we could argue about in detail, therefore there doesn't appear to be a single right set of semantics".

      I've already addressed this point directly, in a response to you. You commented on the particular example I'd given (iterating strings), but not on the general point. I'd be interested in your thoughts on that now. 
      Here it is again:

      Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible. For example, the range operation can produce per iteration an element index and an element value for slices, but a byte index and a rune value for strings. Personally, I found the byte index counterintuitive, as I expected the value to count runes like slice elements, but upon reflection, it makes sense, because you can easily count iterations yourself to have both byte indexes and rune counts, but you can't so trivially do the opposite. Should we omit ranging over strings entirely just because someone, somewhere, somehow might have a minority intuition, or if something is generally counterintuitive, but still the best approach?

      I don't understand what your unaddressed point is.

      It seems to be that we fundamentally disagree. Where Ian and I say "if we can't clearly decide what to do, we should do nothing", you say "doing anything is better than doing nothing" (I'm paraphrasing, because pure repetition doesn't move things forward).

      In that case, I don't see how I could possibly address that, apart from noticing that we disagree (which seems obvious).
       
      To repeat what I said upthread: My goal here is not to convince you, or to prove that Go's design is good. It's to explain the design criteria going into Go and how the decisions made follow from them. "If no option is clearly good, err on the side of doing nothing" is a design criterion of Go. You can think it's a bad design criterion and that's fine and I won't try to convince you otherwise. But it is how the language was always developed (which is, among other things, why we didn't get generics for over ten years).


      A slice is a descriptor for a contiguous segment of an underlying array and provides access to a numbered sequence of elements from that array.

      It doesn't matter to me whether we refer to the slice's array as a "descriptor" of an array, or it "points" to an array. It refers to a specific array in memory, period.

      By this notion, we don't arrive at the comparison you proposed, though. For example, if we said "two slices are equal, if they have the same length and capacity and point at the same array", then

      a := make([]int, 10)
      fmt.Println(a[0:1:1] == a[1:2:2])

      should print "true", as both point at the same array.

      We could say "two slices are equal, if they provide access to the same sequence of elements from an array". But in that case, we wouldn't define what a capacity 0 slice equals, as it does not provide access to any sequence of elements.

      Or maybe we say "a non-nil slice of capacity zero provides access to an empty sequence of elements" in which case this should print "true", as the empty set is equal to the empty set:

      a := make([]int, 10)
      fmt.Println(a[0:0:0] == a[1:1:1])

      But, for your proposal to work, we would then have to make sure that any slicing operation which results in a capacity zero slice resets the element pointer, so they are equal.

      Or we could change your proposal, to say "two slices are equal, if they are both nil, or if they are both non-nil and have capacity zero, or if they are both non-nil and give access to the same sequence of elements".

      FWIW, I think this last one is the most workable solution for the "slices are equal if they use the same underlying array" concept of comparability (whose main contender is the "slices are equal if they contain the same elements in the same order" concept of comparability).

      But I hope that this can demonstrate that there is complexity here, which you have not seen so far.

      The proposal doesn't depend on reflection or unsafe behavior. It was just a lazy way of mine to inspect what Go does in a corner case. I think making it clear what `make` does when the length is 0 is the solution to this, if it already isn't clear.

      I disagree. There are more ways to get capacity zero slices, than just calling `make` with a length of 0. If you want to create the invariant that all capacity 0 slices use the same element pointer, this would incur an IMO prohibitive runtime impact on any slicing operation.

      The point is that there are two ways to compare them: shallow (pointer values themselves) and deep (comparing the dereferenced values). If we made `==` do deep comparisons for pointers, we'd have no way to do shallow comparisons. Shallow comparisons still allow for deep comparisons, but not the other way around.

      That's simply false. If anything, the exact opposite is true.

      For example, here is code you can write today, to get the "shallow comparison" semantics I outlined above:
      It does require unsafe to re-create the slice, but it works fine and has the same performance characteristics as if we made it a language feature. It allows storing slices in maps (as a Slice[T] intermediary) and comparing them directly. So, if you *need* these semantics, you can get them, even if a bit inconvenient.
      This code would obviously remain valid, even if we introduced a == operator for slices, even if that does a "deep comparison".

      However, AFAIK the only way to implement a "deep comparison" (I'm ignoring capacity both for simplicity and because it seems the better semantic for this comparison) is this: https://go.dev/play/p/I1daD-KNc5Y
      That works as well, but note that it is *vastly* more expensive than the equivalent language feature would be, as it allocates and copies all over the place.

      So, it seems to me that "deep comparisons" benefit much more from being a language feature than "shallow comparisons" do. Though to be clear, my position is still that neither should be one.

      David Arroyo

      unread,
      May 5, 2022, 9:39:38 AM
      to golang-nuts
      On Mon, May 2, 2022, at 22:43, will....@gmail.com wrote:
      > - Functions: Compare the corresponding memory addresses. The time
      > complexity is constant.

      How would this apply to inlined functions? Supporting equality would essentially force the compiler to keep the function in the output binary even if it's inlined everywhere it's called, just for comparisons. It would also complicate

      Here's another example:

      type worker struct { ... }

      func shutDown() { ... }

      func (w *worker) run(orders <-chan func()) {
          for f := range orders {
              if f == shutDown {
                  log.Printf("got a shutdown command")
                  w.Cleanup()
              }
              f()
          }
      }

      Currently, Go programs run on a single computer. What if a Go runtime was built that ran Go programs across many computers? Or, put another way, what if a system architecture emerged where the instruction access time varied so drastically across CPU cores that it made sense to duplicate functions across cores' fast memory regions, so that the receive operation in the above example actually received a duplicate copy of a function? I will admit that closures with mutable data segments already complicate such an optimization, but function equality would thwart such an optimization altogether.

      David

      Axel Wagner

      unread,
      May 5, 2022, 9:52:28 AM
      to David Arroyo, golang-nuts
      On Thu, May 5, 2022 at 3:39 PM David Arroyo <dr...@aqwari.net> wrote:
      Go programs run on a single computer. What if a Go runtime was built that ran Go programs across many computers?

      I don't think this would pose any problems to this topic specifically. It would pose many problems, but once you have them all solved, this topic wouldn't be any harder than it is today.
      In particular, in this scenario you'd already have to solve pointer comparison and dereference and itables and function calls, and once all of those are solved, you can just apply those solutions to the arguments here.

      Which is to say, the Go spec already basically assumes that your program runs in a single memory space.
       
      Or, put another way, what if a system architecture emerged where the instruction access time varied so drastically across CPU cores that it made sense to duplicate functions across cores' fast memory regions, so that the receive operation in the above example actually received a duplicate copy of a function? I will admit that closures with mutable data segments already complicate such an optimization, but function equality would thwart such an optimization altogether.

      David


      Will Faught

      unread,
      May 5, 2022, 1:32:34 PM
      to Ian Lance Taylor, Rob Pike, Kurtis Rader, golang-nuts
      On Wed, May 4, 2022 at 1:00 PM Ian Lance Taylor <ia...@golang.org> wrote:
      On Tue, May 3, 2022 at 11:01 PM Will Faught <will....@gmail.com> wrote:
      >
      > On Tue, May 3, 2022 at 7:27 PM Ian Lance Taylor <ia...@golang.org> wrote:
      >>
      >> Does a program like this print true or false?
      >>
      >> func F() func() int { return func() int { return 0 } }
      >> func G() { fmt.Println(F() == F()) }
      >>
      >
      > It would print false, because the function literal creates a new allocation (according to the rule I sketched out). I can see the desire to optimize that, but personally when I write code like that, I'm thinking, "and then return a new function." Causing an allocation isn't surprising behavior here, and so neither is uniqueness in terms of comparisons.
      >
      >>
      >> What about a program like this:
      >>
      >> func H(i int) func() *int { return func() *int { return &i } }
      >> func J() { fmt.Println(H(0) == H(1)) }
      >>
      >
      > It would print false for the same reason.
      >
      >>
      >> Whatever we define for cases like this some people will be ready to
      >> argue for a different choice.  The costs of forcing a decision exceed
      >> the benefits.
      >
      >
      > So on the balance, the cost of making a decision is worth it for something big like dependencies or generics, but not function equality. Well, I guess that's fair enough, but it seems like one could use that kind of argument to undermine any language change, though, including dependencies and generics. It doesn't seem like the function equality rule I sketched out would add much, if any, language complexity. It's only one sentence: "Function values are equal if they were created by the same function literal or declaration."

      I don't think that is clear, because it seems to me that each call to
      F(), above, returns the same function literal, yet you said that F()
      == F() is false.


      It doesn't return a function literal, it returns a function value. "Function values are equal if they were created by the same function literal or declaration." A function literal creates a function value conceptually in this wording, although perhaps that's not how it actually works under the hood.
       
      As an implementation note, currently F() does not allocate.  If we
      require that F() != F(), then calling F() must allocate.  Adding an
      allocation there is straightforward, but even if we permitted function
      comparisons I think that function literals will be used far more than
      they are compared, so adding an allocation seems like a poor use of
      resources.



      Yeah, it's a trade-off. I can understand that it's not worth it.

      This is all a moot point anyway, since functions don't have unique addresses. Interesting discussion, though! :)
       

      >> > Regarding expectations, many new Java programmers came from JavaScript (like myself), so the confusion is understandable, but it's not something that necessarily needs to be considered. Arguably, old Java programmers would find `==` confusing for structs, since it doesn't compare references. Bad assumptions are best prevented by proper education and training, not by omitting language features. Wrong expectations aren't the same as foot-guns.
      >>
      >> I don't agree.  Unexpected behavior is a footgun.
      >
      >
      > I wrote that wrong expectations aren't foot-guns, not that unexpected behaviors aren't foot-guns. Wrong expectations, as in "I can call this pointer-receiver method on this unaddressable value," or "since zero values are useful, I can set keys and values in this zero-value map." Downloading a compiler for some new language I haven't bothered to learn, typing in Java-like stuff, and then being upset when it doesn't work isn't a problem of unexpected behavior, it's a problem of wrong expectations (usually misunderstandings or ignorance).

      I don't want to get into a terminology war, but I don't understand the
      distinction that you are drawing.  If I have "wrong expectations,"
      then what actually happens when I try something is "unexpected
      behavior."  It's literally not what I expected.


      I meant to draw a distinction between unexpected behavior due to ignorance or mistakes ("wrong expectations"), and unexpected behavior due to complexity, undefined behavior, etc.
       
      It's reasonable to say that if you are using a new language you should
      read the friendly manual.  But it's also reasonable to say that as
      much as possible languages should be unsurprising.  Computer languages
      build on each other.  Go has obvious debts to languages like C and
      Oberon, and it would be confusing if constructs in Go acted
      differently than the identical constructs in those languages.


      I can't speak to Oberon, but C has arrays that are usually passed around as a pointer with a length (or an equivalent terminal value). Comparing those arrays can be done shallowly with those pointers, or deeply by iterating the array manually and comparing elements. One can even declare their own slice-like struct with an array pointer and a length, if one were so inclined. What I'm proposing is straight out of C.

      Other languages, like Rust, Ruby, and Python, enable slices/lists to be compared. It's unusual that Go doesn't have any way to compare slices out of the box. I'm not a genius who invented the idea that slices can or should be comparable. I'm bringing this idea here in part because I think it is consistent with other languages.

      What is your counter argument to this point I made earlier, the context being that new Java programmers found == behavior with strings confusing:

      Arguably, old Java programmers would find `==` confusing for structs, since it doesn't compare references.

      Why allow == to compare structs, when C didn't let you? Why not allow == to compare slices, when C let you do the equivalent?


      >> Go is intended to
      >> be a simple language.  When special explanation is required, something
      >> has gone wrong.
      >>
      >
      > The Go language spec is full of special explanations. The section on comparisons is quite detailed and complicated. I recently had to ask here why the range operation doesn't work for type set unions of slices and maps, which you very kindly answered, if I remember correctly. How is slice equality different in terms of special explanation?

      The fact that Go is imperfect, which it is, is not an argument for
      adding further imperfections.


      I didn't argue that. I argued that the change to the language spec would be done in a typical way. "Special explanation," defined as literally any wording in the language spec, could be used to block anything. It could have been used to block the addition of underscores in number literals.

      Do you agree, yes or no, and if not, why?
       
      > I've argued that slice comparisons make Go even simpler and more consistent with various examples, analogies, and so on. Please see my response to Axel, if you haven't already. Do you have a specific counter argument to any specific argument that I've made regarding simplicity or consistency?

      All changes to languages have costs and benefits.  Your arguments
      about simplicity and consistency are benefits.  The counter-arguments
      that I and several others have been making are costs.  In deciding
      whether to change the language we must weigh those costs and benefits
      and decide which are more important.

      To put it another way, I don't have specific counter arguments to your
      specific arguments about simplicity and consistency.  But that doesn't
      mean that I agree, it just means that I think that other consequences
      are more important.


      I think what you've put forth as counter arguments are instead principles, like "simplicity is good," "complexity is bad," "changes have cost," and so on. Principles aren't themselves arguments, so they can't be used as premises in an argument.

      1. Complexity is bad.
      2. This change increases complexity in one area.
      Therefore, 3. This change is bad.

      is not a good argument because "complexity is bad" would rule out all changes, and it would be inconsistent with the past and recent behavior of the Go Team.

      Principles are good for guiding us in forming premises for arguments. For example, principles like "freedom is good" and "safety is good" are often applied when devising new laws, but they aren't themselves reasons for accepting or rejecting an idea. "Freedom is good" alone would rule out all laws, police, protections, order, and so on. "Safety is good" would permit nothing potentially interesting, fun, or meaningful to ever happen. So people use these principles to guide them in forming arguments, weighing the pros and cons for a particular idea in light of each principle, to find the best trade-off among them. Like:

      1. Not wearing a seat belt is dangerous.
      2. Wearing a seat belt takes little effort and the discomfort is minimal to none.
      Therefore, 3. A seat belt law is a good idea.

      Premise 1 took into account "safety is good." Premise 2 took into account "freedom is good." The conclusion (3) reflects both principles, but doesn't rely on them directly.

      1. Safety is good.
      Therefore, 2. A seat belt law is a good idea.

      is a bad argument because you can justify almost anything that way:

      1. Safety is good.
      Therefore, 2. A law that requires everyone to wear a helmet 24/7 is a good idea.

      If you want to rule out all change in Go, then just use the premise/axiom "nothing can be changed:"

      1. Nothing can be changed.
      2. This idea will change something.
      Therefore, 3. This is a bad idea.

      I would agree with that argument, assuming the first premise was official policy. But

      1. Simplicity is good.
      2. Complexity is bad.
      3. Change has cost.
      Therefore, 4. This is a bad idea.

      is a bad argument, because you can say no to almost anything that way:

      1. Simplicity is good.
      2. Complexity is bad.
      3. Change has cost.
      Therefore, 4. No modules. No generics. No enhanced number literals. No embed package. No io/fs. No nothing.

      You haven't used these principles to put forth any kind of argument with concrete premises that are guided by them. Your and Rob's counter argument for function comparisons was great because it made sense from an implementation perspective, a concrete premise, which I accepted.

      This issue is related to the "no new rationale" issue that Russ Cox has discussed. I don't know how the Go Team as a whole thinks about that issue, but it's something I wholeheartedly agree with. This issue could perhaps be called something like "no vague rationale." Principles are vague rationale; arguments are not. Arguments that are shaped by principles are stronger for it.

      In my humble opinion, I've seen the Go Team often use vague rationale. In my humble opinion, it's why they were so wrong for over a decade about generics, despite many, many people making the argument for adding generics over that time. Even when generics were added, the Go Team only did it (according to their statements that I've seen regarding this) due to a poll, not because of any rational conclusion. It's a blind spot. I say this with love, because I want Go to be the best it can be, and I support the Go Team's efforts to make it so, but I believe the best way to get there is by the adversarial process of argumentation, a battle of ideas, just like what is used in the U.S. justice system to determine truth. In order for that to work, we have to have a fair "fight," using arguments that can be deconstructed and judged/weighed concretely.

      If we can't agree on this, then there isn't any utility in debate. If you're not interested in debate, then ultimately that's fine, but please make that clear up front. The Go proposal process now specifies that proposals should start as golang-nuts discussions, so that's why I'm here. This isn't intended to be a casual discussion thread.

      All that said, assuming you agree, what is your full counter argument, or set of counter arguments, to my initial argument for slice and map comparisons, shaped by the principles you've listed? It doesn't need to be in a rigid numbered list format, but it should be obvious how your counter arguments could logically fit into that format.

      Ian

      Axel Wagner

      May 5, 2022, 2:39:53 PM
      to golang-nuts
      On Thu, May 5, 2022 at 7:32 PM Will Faught <will....@gmail.com> wrote:
      Yeah, it's a trade-off. I can understand that it's not worth it.

      I find it confusing that you seem to be willing to allow the existence of tradeoffs on the one hand, while on the other you treat most arguments as all-or-nothing questions. Like:

      I think what you've put forth as counter arguments are instead principles, like "simplicity is good," "complexity is bad," "changes have cost," and so on. Principles aren't themselves arguments, so they can't be used as premises in an argument.

      1. Complexity is bad.
      2. This change increases complexity in one area.
      Therefore, 3. This change is bad.

      is not a good argument because "complexity is bad" would rule out all changes, and it would be inconsistent with the past and recent behavior of the Go Team.

      "Complexity is bad" is indeed in the "contra" column for most changes. That doesn't rule them out though. It just means they have to justify their complexity.

      There are many, individual arguments at play here. Each one individually doesn't lead to the conclusion. And taking each one individually out of context and applying it in totality as the only deciding factor leads to ridiculous results like this. But *taken together*, they can paint a picture that the downsides of adding comparison for certain types outweigh their benefits. And that it's different for other types.

      Pointers and Slices have commonalities, true. But that doesn't mean "if you can compare pointers, you should be able to compare slices". They also have differences. And it's entirely reasonable that the downsides for adding a comparison for slices do not apply for pointers, or apply to a lesser degree. And similarly for benefits.

      All that said, assuming you agree, what is your full counter argument, or set of counter arguments, to my initial argument for slice and map comparisons, shaped by the principles you've listed? It doesn't need to be in a rigid numbered list format, but it should be obvious how your counter arguments could logically fit into that format.

      One argument goes roughly like

      1. We believe most people would expect comparison of slices to compare the contents of slices/maps, not the "shallow" comparison you suggest¹.
      2. Doing that has technical problems making it prohibitive (e.g. the existence of cyclic data structures).
      3. Even *if* we would add a "shallow" comparison instead, there are still open questions where it is hard to say what the "right" answer would be².
      4. It is better not to have any comparison, than one which behaves badly. The programmer can always be explicit about their intentions, if need be.

      [1] Note that you mentioned that Rust, Ruby, and Python support comparisons on their respective equivalents. Their comparisons all behave this way.
      [2] This alone would probably not prevent us from doing it, but given that we'd want a different notion of comparability anyways, it still matters.



      --
      You received this message because you are subscribed to the Google Groups "golang-nuts" group.
      To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts...@googlegroups.com.

      Ian Lance Taylor

      May 5, 2022, 7:38:42 PM
      to Will Faught, Rob Pike, Kurtis Rader, golang-nuts
      On Thu, May 5, 2022 at 10:32 AM Will Faught <will....@gmail.com> wrote:
      >
      > I think what you've put forth as counter arguments are instead principles, like "simplicity is good," "complexity is bad," "changes have cost," and so on. Principles aren't themselves arguments, so they can't be used as premises in an argument.

      We are evidently completely failing to communicate. I'm sorry, but I
      don't see any point in continuing to engage in this discussion.


      > In my humble opinion, I've seen the Go Team often use vague rationale. In my humble opinion, it's why they were so wrong for over a decade about generics, despite many, many people making the argument for adding generics over that time. Even when generics were added, the Go Team only did it (according to their statements that I've seen regarding this) due to a poll, not because of any rational conclusion. It's a blind spot. I say this with love, because I want Go to be the best it can be, and I support the Go Team's efforts to make it so, but I believe the best way to get there is by the adversarial process of argumentation, a battle of ideas, just like what is used in the U.S. justice system to determine truth. In order for that to work, we have to have a fair "fight," using arguments that can be deconstructed and judged/weighed concretely.

      I'm sorry, but your comments about why generics were added to Go are incorrect.


      > If we can't agree on this, then there isn't any utility in debate. If you're not interested in debate, then ultimately that's fine, but please make that clear up front. The Go proposal process now specifies that proposals should start as golang-nuts discussions, so that's why I'm here. This isn't intended to be a casual discussion thread.
      >
      > All that said, assuming you agree, what is your full counter argument, or set of counter arguments, to my initial argument for slice and map comparisons, shaped by the principles you've listed? It doesn't need to be in a rigid numbered list format, but it should be obvious how your counter arguments could logically fit into that format.

      I and others have already presented the counter-arguments. Somebody
      also pointed to an earlier discussion thread which made the same
      counter-arguments. The fact that you don't agree with those
      counter-arguments does not mean that they haven't been presented.

      Language design is not a matter of precise arguments. It is a matter
      of weighing costs and benefits. If there were precise arguments for
      language design, we would all be using the same language (or perhaps
      one dynamically typed language, one statically typed language, etc.).

      In my opinion, the benefits of permitting slice and map comparisons in
      Go do not outweigh the costs.

      Ian

      Will Faught

      May 5, 2022, 7:57:56 PM
      to Axel Wagner, golang-nuts, Ian Lance Taylor
      On Thu, May 5, 2022 at 2:53 AM Axel Wagner <axel.wa...@googlemail.com> wrote:
      On Thu, May 5, 2022 at 3:11 AM Will Faught <wi...@willfaught.com> wrote:
      The reason to include capacity in comparisons, aside from it being convenient when doing comparisons, is that the capacity is an observable attribute of slices in regular code. Programmers are encouraged to reason about slice capacity, so it should be included in comparisons. `cap(S1[:1]) != cap(S1[:1:1])` is true, therefore `S1[:1] != S1[:1:1]` should be true, even though `len(S1[:1]) == len(S1[:1:1])` is true.

      Do you agree that is a good thing, yes or no, and if not, why?

      Sure.
       
      This approach to comparisons for functions, maps, and slices makes all values of those types immutable, and therefore usable as map keys.

      Do you agree that is a good thing, yes or no, and if not, why?

      Sure.

      I wrote in the proposal an example of how slices work in actual Go code, then asked:

      Do you expect `c` to be true? If not (it's false, by the way), then why would you expect `make([]int, 2) == make([]int, 2)` to be true?

      What was your answer? Yes or no? This isn't rhetorical at this point, I'm actually asking, so please answer unambiguously yes or no.

      To be clear, demanding an unambiguous answer doesn't make a question unambiguous. If you'd ask "is light a wave or a particle, please answer yes or no", my response would be to stand up and leave the room, because there is no way to converse within the rules you are setting. So, if you insist on these rules, I will try my best to leave the room, metaphorically speaking, and to write you off as impossible to have a conversation with.

      My position is that the comparison should be disallowed. Therefore, I can't answer yes or no.

      I think "both slices in the comparison contain the same elements in the same order" is a strong argument in favor of making the comparison be true.
      I think "this would make it possible for comparisons to hang the program" is a strong argument in favor of making the comparison be false.
      I think that the fact that there are strong arguments in favor of it being true and strong arguments in favor of it being false is itself a strong argument for not allowing it.

      If your answer was yes, then you don't understand Go at a basic level.

      Please don't say things like this. You don't know me well enough to judge my understanding of Go. If you did, I feel confident that you wouldn't say this. It is just a No True Scotsman fallacy at best and a baseless insult at worst.
       

      The reason why I've been explicitly asking you and Ian whether you agree with my points is because you've been ignoring or skipping over them in your responses. The points I make in response to yours are meant to synchronize us through agreement (if we agree), and ensure we are on the same page. When you don't respond with something equivalent to "agree" or "disagree because" to each point, it's easy to lose track of where each of us is, and what ground is left to cover or explore. We're already 3-5 levels deep in email quotations at this point. Debate is unproductive and pointless if we can't even agree on what an argument means.

      I say all this because it's clear from what you've written here that you fundamentally misunderstood my initial argument for why slice comparisons should be shallow. I didn't write the Slice1000 example because I enjoy typing, I wrote it because the synchronization forced by the question of what `c` evaluates to ensures that you and I are on the same page of what my argument means. The statement about not understanding Go at a basic level was phrased very specifically to make it clear whether you understood what I was saying. If something comes across as insulting, the odds are good it's because you don't understand the point. The first thing we should ask ourselves about an argument is, "Is this true?" The second is, "How can this be true?" By ignoring the two questions about Slice1000 in the initial argument, you might have constructed in your mind a strawman argument, and been arguing against that ever since. If there's a single sentence, a single word, in an argument that you don't understand, the first step is to ask clarifying questions to understand it, not ignore it and hope for the best. This entire time, I thought you had answered that first question as no. I didn't start off requiring every initial response to include the answers to those questions because I, you know, assumed people would thoroughly read and understand the argument, and point out basic comprehension problems with it, and otherwise base their responses on it.

      Again, this was in the initial argument:

      If you think slice equality should incorporate element equality, here's an example for you:

      ```
      type Slice1000[T any] struct {
          xs *[1000]T
          len, cap int
      }

      func (s Slice1000[T]) Get(i int) T {
          // ...
          return s.xs[i]
      }

      func (s Slice1000[T]) Set(i int, x T) {
          // ...
          s.xs[i] = x
      }

      var xs1, xs2 [1000]int

      var a = Slice1000[int]{&xs1, 1000, 1000}
      var b = Slice1000[int]{&xs2, 1000, 1000}
      var c = a == b
      ```

      Do you expect `c` to be true? If not (it's false, by the way), then why would you expect `make([]int, 2) == make([]int, 2)` to be true?

      "If you think slice equality should incorporate element equality, here's an example for you" was a clue that it was important to understand this argument before responding with disagreement about shallow slice comparisons.

      Plug that code into go.dev/play to observe how it works. Go here and click Run for yourself. It prints `false`. This is basic, existing Go behavior. If you do not understand how arrays work in Go, or how pointers to arrays work in Go, or how comparisons of pointers to arrays work in Go, then you do not understand Go at a basic level. I would challenge you to find any experienced Go programmer that would disagree with that statement.

      This is basically the argument made by "If not (it's false, by the way), then why would you expect `make([]int, 2) == make([]int, 2)` to be true?":

      1. The A and B values of type Slice1000 have different array pointers, so they don't compare as equal.
      2. `make` allocates new arrays for each slice, so the array pointers are unique.
      Therefore, 3. Two calls of `make` with the same type, length, and capacity shouldn't compare as equal, for the same reason.

      Potentially productive avenues of attack against this argument might be (1) to argue that I'm incorrect about how the Slice1000 code functions by running it yourself; (2) that `make` doesn't allocate new arrays for each call for slice types; (3) that shallow comparisons shouldn't take into account the array pointer, for some reason; (4) that slices are different than other types like arrays or structs, either conceptually, or in some aspect of common implementation, or something like that, and therefore another way of comparing them, or changing nothing at all, is more intuitive/simple/cheap/whatever; and so on. I dunno. If I had thought of a good counter argument, I wouldn't have started this thread.

      Do you agree, yes or no, and if not, why?

      I don't follow why `a[0:0:0] == b[0:0:0]` would be true if they have different array pointers.

      Because above, you made the argument that focusing the definition of equality on observable differences is a good thing. The difference between a[0:0:0] and b[0:0:0] is unobservable (without using unsafe), therefore they should be considered equal.


      It's observable because it's observable in `a` and `b` (assuming they have different arrays). If two slices are equal, then so are their corresponding sub-slices. If you somehow encounter two slices with length/capacity 0 of unknown origin, and compare them, and they compare as unequal, then the conclusion is that they point to different arrays. It would be an error to conclude that two slices with length/capacity 0 are necessarily equal, where slice comparisons are defined as proposed here.

      Do you agree, yes or no, and if not, why?
       
      Note that `a[0] = 0; b[0] = 0; a[0] = 1; b[0] == 1` can observe whether the array pointers are the same.

      No. This code panics, if the capacity of a and b is 0 - which it is for a[0:0:0] and b[0:0:0]. There is no way to observe if two capacity 0 slices point at the same underlying array, without using unsafe.


      No, the length/capacity of `a[0:0:0]` is 0; the length/capacity of `a` is not 0, if I remember the example correctly.

      Do you agree, yes or no, and if not, why?
       
      Feel free to prove me wrong, by filling in Eq so this program prints "true false", without using unsafe: https://go.dev/play/p/xqj_DhBi392


      I'm unclear whether this rests on your possibly misunderstanding the initial argument, so I'll hold off on responding to these points for now.
       
       
      But really, the point isn't "which semantics are right". The point is "there are many different questions which we could argue about in detail, therefore there doesn't appear to be a single right set of semantics".

      I've already addressed this point directly, in a response to you. You commented on the particular example I'd given (iterating strings), but not on the general point. I'd be interested in your thoughts on that now. 
      Here it is again:

      Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible. For example, the range operation can produce per iteration an element index and an element value for slices, but a byte index and a rune value for strings. Personally, I found the byte index counterintuitive, as I expected the value to count runes like slice elements, but upon reflection, it makes sense, because you can easily count iterations yourself to have both byte indexes and rune counts, but you can't so trivially do the opposite. Should we omit ranging over strings entirely just because someone, somewhere, somehow might have a minority intuition, or if something is generally counterintuitive, but still the best approach?

      I don't understand what your unaddressed point is.

      The point:

      Just because there are two ways to do something, and people tend to lean different ways, doesn't mean we shouldn't pick a default way, and make the other way still possible.
       
      Do you agree, yes or no, and if not, why?


      It seems to be that we fundamentally disagree. Where Ian and I say "if we can't clearly decide what to do, we should do nothing", you say "doing anything is better than doing nothing" (I'm paraphrasing, because pure repetition doesn't move things forward).


      "Doing anything is better than nothing" is not my argument.
       
      In that case, I don't see how I could possibly address that, apart from noticing that we disagree (which seems obvious).
       
      To repeat what I said upthread: My goal here is not to convince you, or to prove that Go's design is good. It's to explain the design criteria going into Go and how the decisions made follow from them. "If no option is clearly good, err on the side of doing nothing" is a design criterion of Go. You can think it's a bad design criterion and that's fine and I won't try to convince you otherwise. But it is how the language was always developed (which is, among other things, why we didn't get generics for over ten years).


      Again, my argument is that "no option is clearly good" doesn't apply in this case. I've typed a lot of words making that argument to you and/or Ian, which you haven't specifically responded to yet, if I remember correctly. This is why I'm starting to explicitly force you and Ian to agree or disagree on each point that I make in response to you, because these things are getting lost, and now you're trying to use that loss to justify using basic principles as counter arguments that are countered by those lost points. Go back through everything I've written in response to you and respond to every point, every sentence where applicable, with "agree" or "disagree because," and I think that should clear this up.


      A slice is a descriptor for a contiguous segment of an underlying array and provides access to a numbered sequence of elements from that array.

      It doesn't matter to me whether we refer to the slice's array as a "descriptor" of an array, or it "points" to an array. It refers to a specific array in memory, period.

      By this notion, we don't arrive at the comparison you proposed, though. For example, if we said "two slices are equal, if they have the same length and capacity and point at the same array", then

      a := make([]int, 10)
      fmt.Println(a[0:1:1] == a[1:2:2])

      should print "true", as both point at the same array.

      No, it should print false, because `a[1:2:2]` has an array pointer that is `sizeof(int)` bytes offset from the array pointer of `a`. Slices do not keep track of offsets, just pointers, lengths, and capacities.

      Do you agree, yes or no, and if not, why?
       

      We could say "two slices are equal, if they provide access to the same sequence of elements from an array". But in that case, we wouldn't define what a capacity 0 slice equals, as it does not provide access to any sequence of elements.

      Or maybe we say "a non-nil slice of capacity zero provides access to an empty sequence of elements" in which case this should print "true", as the empty set is equal to the empty set:

      a := make([]int, 10)
      fmt.Println(a[0:0:0] == a[1:1:1])

      But, for your proposal to work, we would then have to make sure that any slicing operation which results in a capacity zero slice resets the element pointer, so they are equal.

      Or we could change your proposal, to say "two slices are equal, if they are both nil, or if they are both non-nil and have capacity zero, or if they are both non-nil and give access to the same sequence of elements".

      FWIW, I think this last one is the most workable solution for the "slices are equal, if they use the same underlying array" concept of comparability (whose main contender is the "slices are equal, if they contain the same elements in the same order" concept of comparability).

      But I hope that this can demonstrate that there is complexity here, which you have not seen so far.

      These all seem to build off the last point that I addressed, so I'll hold off on responding to these for now.
       

      The proposal doesn't depend on reflection or unsafe behavior. It was just a lazy way of mine to inspect what Go does in a corner case. I think making it clear what `make` does when the length is 0 is the solution to this, if it already isn't clear.

      I disagree. There are more ways to get capacity zero slices, than just calling `make` with a length of 0. If you want to create the invariant that all capacity 0 slices use the same element pointer, this would incur an IMO prohibitive runtime impact on any slicing operation.

      I don't understand what that has to do with reflect or unsafe, though.

      Are you saying zero-capacity slices can come from reflect or unsafe, and that they wouldn't work with this comparison scheme? If so, how would they not work?

      Are you saying that slicing with a new zero capacity would produce slices that wouldn't work with this comparison scheme? If so, how would they not work?
       

      The point is that there are two ways to compare them: shallow (pointer values themselves) and deep (comparing the dereferenced values). If we made `==` do deep comparisons for pointers, we'd have no way to do shallow comparisons. Shallow comparisons still allow for deep comparisons, but not the other way around.

      That's simply false. If anything, the exact opposite is true.


      Man, we seem destined to not see eye to eye for some reason, lol. I really don't know what to say to that. I guess here's a sort-of proof by construction for my claim:

      With shallow pointer comparisons:

      ```
      var p1, p2 *int = // ...

      // Shallow comparison
      var equal = p1 == p2 // only compares pointer addresses

      // Deep comparison
      var equal = *p1 == *p2 // not very "deep", though
      var equal = reflect.DeepEqual(p1, p2) // much deeper
      ```

      With deep pointer comparisons:

      ```
      var p1, p2 *int = // ...

      // Deep comparison
      var equal = p1 == p2 // same as *p1 == *p2 above; not very "deep", though

      // Shallow comparison
      var equal = ??? // impossible
      ```

      Do you have a refutation for that? What can we do for `???`?
       
      For example, here is code you can write today, to get the "shallow comparison" semantics I outlined above:
      It does require unsafe to re-create the slice, but it works fine and has the same performance characteristics as if we made it a language feature. It allows storing slices in maps (as a Slice[T] intermediary) and comparing them directly. So, if you *need* these semantics, you can get them, even if a bit inconvenient.
      This code would obviously remain valid, even if we introduced a == operator for slices, even if that does a "deep comparison".


      How does this connect to my point above about how if pointers are compared shallowly, we can still compare them deeply, but the reverse isn't true?

      Sure, your Slice seems to embody most of what I've proposed here, but it wouldn't be standard, built-in behavior. I wouldn't be surprised if you could accomplish the same with reflection or Cgo or serialization. Why have == for structs if we can compare fields individually in user code? You're ignoring the utility of having == built in, and the simplicity and intuitiveness that comes from consistency, both of which are arguments that I've made before to you and Ian, if I remember correctly.
       
      However, this is AFAIK the only way to implement a "deep comparison" (I'm ignoring capacity both for simplicity and because it seems the better semantic for this comparison) is this: https://go.dev/play/p/I1daD-KNc5Y
      That works as well, but note that it is *vastly* more expensive than the equivalent language feature would be, as it allocates and copies all over the place.


      Eq would need to be called recursively on elements that are slices, but otherwise yes, although this boxes all the slice elements and allocates an entire singly-linked list for them.

      However, these examples seem to be about shallow vs. deep slice comparisons, not pointer comparisons, which seems to be the point you're responding to above, so I'm not following your point.

      Will Faught

      May 5, 2022, 8:00:01 PM
      to David Arroyo, golang-nuts
      Yeah, it would have performance implications, which is a good counter argument. Regardless, it seems from the discussion that function values can have multiple addresses, so it wouldn't work currently anyway.


      Will Faught

      May 5, 2022, 9:25:45 PM
      to Axel Wagner, golang-nuts
      The proposal was declined, so no need to respond. Some responses inline below. Thanks so much for your feedback, Axel. :)



      On Thu, May 5, 2022 at 11:39 AM 'Axel Wagner' via golang-nuts <golan...@googlegroups.com> wrote:
      On Thu, May 5, 2022 at 7:32 PM Will Faught <will....@gmail.com> wrote:
      Yeah, it's a trade-off. I can understand that it's not worth it.

      I find it confusing that you seem to be willing to allow the existence of tradeoffs on the one hand, while on the other you treat most arguments as all-or-nothing questions. Like:


      I acknowledged that a trade-off exists, not that I agreed that performance outweighed all other considerations, although I would weigh performance costs very highly. The point was moot since function values can have multiple addresses.

      In the email you're responding to, I explained that arguments are supposed to balance conflicting principles. Performance vs. consistency vs. simplicity vs etc. is an example.
       
      I think what you've put forth as counter arguments are instead principles, like "simplicity is good," "complexity is bad," "changes have cost," and so on. Principles aren't themselves arguments, so they can't be used as premises in an argument.

      1. Complexity is bad.
      2. This change increases complexity in one area.
      Therefore, 3. This change is bad.

      is not a good argument because "complexity is bad" would rule out all changes, and it would be inconsistent with the past and recent behavior of the Go Team.

      "Complexity is bad" is indeed in the "contra" column for most changes. That doesn't rule them out though. It just means they have to justify their complexity.

      There are many, individual arguments at play here. Each one individually doesn't lead to the conclusion. And taking each one individually out of context and applying it in totality as the only deciding factor leads to ridiculous results like this. But *taken together*, they can paint a picture that the downsides of adding comparison for certain types outweigh their benefits. And that it's different for other types.


      Sure. You have to weigh various aspects according to various principles, and make a decision. The explicit, concrete weighing of these aspects is what has been missing from the counter arguments. That's been my point. "Complexity is bad" is useless on its own. "This would require XYZ changes to the spec, PQR changes to the implementation, users would have to keep ABC in their head now as opposed to DEF when having to roll their own comparisons..." is an actual analysis of complexity, which is actually tractable as a premise in an argument. "Users don't get compiler errors when they try to compare slices, we don't have to maintain a FAQ entry about it, we don't have to point users to it, they learn how it works when they learn how slices work and how == works for all types, it's constant time and clearly defined..." is something that can be weighed too. Now we can try to pin down specific aspects or constraints that force the decision one way or another, or feel out which adds up to more value, etc. It's not a simple "my principle vs. your principle" slugout.
       
      Pointers and slices have commonalities, true. But that doesn't mean "if you can compare pointers, you should be able to compare slices". They also have differences. And it's entirely reasonable that the downsides of adding a comparison for slices do not apply to pointers, or apply to a lesser degree. And similarly for the benefits.


      That sounds like it would have been a promising counter argument :), although I wonder if it would have survived the point that Go programmers are expected to learn what == does for each type.
       
      All that said, assuming you agree, what is your full counter argument, or set of counter arguments, to my initial argument for slice and map comparisons, shaped by the principles you've listed? It doesn't need to be in a rigid numbered list format, but it should be obvious how your counter arguments could logically fit into that format.

      One argument goes roughly like

      1. We believe most people would expect comparison of slices/maps to compare their contents, not the "shallow" comparison you suggest¹.

      Only insofar as they don't understand slices at all. If you know a slice points to an array, and that copying one around is a lightweight operation that doesn't copy the underlying array, then IMO it becomes natural to think of slice comparison working the way it does for Slice1000. That's why I used that example.
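The shallow semantics being proposed can be sketched in today's Go (1.20+) with `unsafe.SliceData`; `shallowEqual` below is a hypothetical helper for illustration, not part of any existing API:

```go
package main

import (
	"fmt"
	"unsafe"
)

// shallowEqual reports whether two slices have the same data pointer,
// length, and capacity — the proposed == semantics for slices.
// (Hypothetical helper; unsafe.SliceData requires Go 1.20+.)
func shallowEqual(a, b []int) bool {
	return unsafe.SliceData(a) == unsafe.SliceData(b) &&
		len(a) == len(b) && cap(a) == cap(b)
}

func main() {
	s := make([]int, 2)
	fmt.Println(shallowEqual(s, s))              // true: same pointer, len, cap
	fmt.Println(shallowEqual(s, s[:1]))          // false: lengths differ
	fmt.Println(shallowEqual(s, make([]int, 2))) // false: pointers differ
}
```

Like copying a slice, this comparison never touches the backing array, which is why it stays constant time regardless of slice length.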
       
      2. Doing that has technical problems making it prohibitive (e.g. the existence of cyclic data structures).

      Deep comparisons do, not shallow ones.
       
      3. Even *if* we would add a "shallow" comparison instead, there are still open questions where it is hard to say what the "right" answer would be² .

      Shallow comparisons :), for reasons like the ones you just listed, e.g. cyclic data structures. Shallow comparisons are constant time. Map lookups won't risk taking arbitrarily long. Deep comparisons would make slices in effect mutable map keys.
       
      4. It is better not to have any comparison than one which behaves badly. The programmer can always be explicit about their intentions, if need be.


      Define badly.

      [1] Note that you mentioned that Rust, Ruby and Python support comparisons on their respective equivalents. Their comparisons all compare contents.

      This is an excellent point! Still, I wonder whether there's room for languages to differ in how shallow or deep their comparisons are while remaining consistent in providing some kind of comparison.
       
      [2] This alone would probably not prevent us from doing it, but given that we'd want a different notion of comparability anyways, it still matters.


      Sure. reflect.DeepEqual. Some kind of equivalence `===` operation. Arbitrary code. I wouldn't want to limit users.
       

      Ian
