Hi gophers, I want to know the reasoning behind the decision to use append(s[:0:0], s...) over the previous code, since the two return different slices when cloning a zero-length slice. The previous code returned a brand-new slice of size zero, while the current code returns an empty slice that still points to the original array. Also, why not use append(S(nil), s...) instead? It would return nil for a zero-length slice, but what potential problem would that cause?
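For context, here is a small self-contained check I put together (my own sketch, not taken from the Clone source) showing how the two expressions differ on empty inputs:

```go
package main

import "fmt"

func main() {
	var nilSlice []int           // nil
	emptySlice := make([]int, 0) // non-nil, zero length

	// append(s[:0:0], s...) preserves nil-ness:
	fmt.Println(append(nilSlice[:0:0], nilSlice...) == nil)     // true
	fmt.Println(append(emptySlice[:0:0], emptySlice...) == nil) // false

	// append([]int(nil), s...) returns nil for any zero-length input,
	// so it cannot distinguish a nil slice from a non-nil empty one:
	fmt.Println(append([]int(nil), nilSlice...) == nil)   // true
	fmt.Println(append([]int(nil), emptySlice...) == nil) // true
}
```

So one answer to my second question might be that append(S(nil), s...) loses the nil/non-nil distinction of the input, which Clone seems to want to preserve.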
I don't know if this can be considered a problem, but here is my concern with the current code, append(s[:0:0], s...):

If we create slices from an array pool to reduce allocations by using append, and many of our slices turn out to be zero-length, slices.Clone will return slices that still point to arrays in the pool. If we create many of them concurrently, (if I understand it correctly) the pool may have to allocate many new array objects, because the objects retrieved with Get have not yet been Put back. Those arrays can only be garbage-collected after the cloned slices are no longer used / reachable, and if each is an array of a big struct, wouldn't that potentially put pressure on memory?
Here is pseudo-code for illustration only. I think the arrays obtained from the pool can only be garbage-collected once ch is consumed and the slices are no longer used:
var pool = sync.Pool{New: func() any { return &[255]bigstruct{} }}
var ch = make(chan []bigstruct, 1000)

for i := 0; i < 1000; i++ {
	go func() {
		arr := pool.Get().(*[255]bigstruct)
		defer pool.Put(arr)
		s := arr[:0]
		ch <- slices.Clone(s) // clone is empty but still points into arr
	}()
}
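As a possible workaround for this scenario, here is a hypothetical helper (my own sketch, not something from the standard library) that clones through a freshly allocated empty slice, so the result never shares the input's backing array while still preserving nil-ness:

```go
package main

import "fmt"

// cloneDetached is a hypothetical variant of Clone that never retains the
// input's backing array: nil stays nil, and any non-nil input (including a
// zero-length one) is copied into newly allocated memory.
func cloneDetached[S ~[]E, E any](s S) S {
	if s == nil {
		return nil
	}
	// []E{} is non-nil with zero capacity, so append always allocates a
	// fresh array rather than reusing s's storage.
	return append(S([]E{}), s...)
}

func main() {
	arr := [4]int{1, 2, 3, 4}
	c := cloneDetached(arr[:2])
	c[0] = 99
	fmt.Println(arr[0], c[0]) // 1 99 — the clone does not alias arr
}
```

With something like this, the cloned zero-length slices in my pool example would not keep the pooled arrays reachable.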
CMIIW and thank you!