Does the GC recover storage when a full slice expression reduces capacity?


Christophe Meessen

Jan 10, 2020, 3:51:45 AM
to golang-nuts
It is possible to reduce the capacity of a slice by using the full slice expression (https://golang.org/ref/spec#Slice_expressions).

Now consider the following code, where a is a 1 MB slice. I then create b, a slice of a with a much smaller capacity. Finally, I set a to nil so that only b refers to the slice's element storage.

a := make([]byte, 1000000)
b := a[:3:3] // len 3, cap 3, same backing array as a
a = nil

Will the garbage collector be able to recover the inaccessible space in the slice's element storage?

Jan Mercl

Jan 10, 2020, 4:02:22 AM
to Christophe Meessen, golang-nuts
The specification does not directly define this, so the answer is:
possibly. AFAICT, no current Go compiler can free partial heap
objects. But there's one guarantee: the "dead" slice portion after
cap will not be scanned by the collector if no other live object uses
it.
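
Since no current compiler can free part of a heap object, the usual way to actually release the 1 MB backing array (a minimal sketch, not from this thread) is to copy the needed elements into a fresh allocation:

a := make([]byte, 1000000)
b := make([]byte, 3)
copy(b, a) // b gets its own 3-byte backing array
a = nil    // the 1 MB array is now unreachable and can be collected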

keith....@gmail.com

Jan 10, 2020, 10:43:50 PM
to golang-nuts


On Friday, January 10, 2020 at 1:02:22 AM UTC-8, Jan Mercl wrote:
On Fri, Jan 10, 2020 at 9:52 AM Christophe Meessen
<christop...@gmail.com> wrote:
>
> It is possible to reduce the capacity of a slice by using the full slice expression (https://golang.org/ref/spec#Slice_expressions).
>
> Now consider the following code, where a is a 1 MB slice. I then create b, a slice of a with a much smaller capacity. Finally, I set a to nil so that only b refers to the slice's element storage.
>
> a := make([]byte, 1000000)
> b := a[:3:3]
> a = nil
>
> Will the garbage collector be able to recover the inaccessible space in the slice's element storage?

The specification does not directly define this, so the answer is:
possibly. AFAICT, no current Go compiler can free partial heap
objects.

Correct.
 
But there's one guarantee: the "dead" slice portion after
cap will not be scanned by the collector if no other live object uses
it.

That's not correct. If there is a reference to an object, the entire object is live and is scanned, regardless of what the reference looks like (a plain pointer, a slice with small cap, a slice with large cap).
Again, this isn't in the spec; in principle the GC could avoid scanning past the largest capacity of any live reference, but we don't currently do that.
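
One way to observe this whole-object liveness (an illustrative sketch, not from the thread; exact numbers depend on the Go version and allocator state):

package main

import (
	"fmt"
	"runtime"
)

var keep []byte // package-level, so the slice is certainly reachable

func main() {
	a := make([]byte, 1<<20) // ~1 MB backing array
	keep = a[:3:3]           // len 3, cap 3, same backing array
	a = nil

	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// HeapAlloc still includes the ~1 MB array: the tiny-cap slice
	// keeps the entire object live.
	fmt.Printf("HeapAlloc after GC: %d bytes\n", m.HeapAlloc)
}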

Jan Mercl

Jan 13, 2020, 6:17:49 AM
to keith....@gmail.com, golang-nuts
On Sat, Jan 11, 2020 at 4:43 AM <keith....@gmail.com> wrote:

>> But there's one guarantee: the "dead" slice portion after
>> cap will not be scanned by the collector if no other live object uses
>> it.
>
>
> That's not correct. If there is a reference to an object, the entire object is live and is scanned, regardless of what the reference looks like (a plain pointer, a slice with small cap, a slice with large cap).
> Again, this isn't in the spec; in principle the GC could avoid scanning past the largest capacity of any live reference, but we don't currently do that.

Might be a low-hanging optimization opportunity. But proving that no
other references exist is possibly the blocking issue.

Keith Randall

Jan 13, 2020, 10:27:00 AM
to Jan Mercl, golang-nuts
It can get expensive to do that. Instead of just a mark bit per object and a queue of pointers to mark, you need a mark bit per word and a queue of ptr+len pairs. You can also end up doing more than constant work per mark.

x := [10]*int{ /* ... 10 pointers ... */ }
a := x[:3:3]
b := x[7:]

x is now dead, but a and b are live. The GC needs to scan x[0:3] and x[7:10]. What about x[3:7]?

x := [10]*int{ /* ... 10 pointers ... */ }
a1 := x[:1:1]
a2 := x[:2:2]
a3 := x[:3:3]
// ...
a10 := x[:10:10]

If the GC encounters the references in order a1, a2, ... , a10, then at each step it has to scan one more word of x. Using just a mark bit per word, it will end up taking quadratic time to process these references.
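
A toy model of that cost (an illustration only, not the runtime's actual data structures): with just a per-word mark bit, processing a1 through a10 must at least check the bit for every word in each reference's range, so the checks sum to 1+2+...+10:

package main

import "fmt"

func main() {
	const words = 10
	marked := make([]bool, words) // one mark bit per word of x
	checks := 0

	// References arrive in order a1, a2, ..., a10; a_k covers words [0, k).
	for k := 1; k <= words; k++ {
		for w := 0; w < k; w++ {
			checks++ // checking the mark bit alone costs a step per word
			if !marked[w] {
				marked[w] = true
				// ...scan the pointer stored in word w...
			}
		}
	}
	fmt.Println(checks) // 55 = 10*11/2: quadratic in the number of words
}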
