Why doesn't sync.Pool release items it holds the only reference to after the first poolCleanup() call?


Yongjian Lian

Oct 21, 2014, 11:58:26 PM
to golan...@googlegroups.com
In go1.3.2's sync/pool.go:

186 func poolCleanup() {
187     // This function is called with the world stopped, at the beginning of a garbage collection.
188     // It must not allocate and probably should not call any runtime functions.
189     // Defensively zero out everything, 2 reasons:
190     // 1. To prevent false retention of whole Pools.
191     // 2. If GC happens while a goroutine works with l.shared in Put/Get,
192     //    it will retain whole Pool. So next cycle memory consumption would be doubled.
193     for i, p := range allPools {
194         allPools[i] = nil
195         for i := 0; i < int(p.localSize); i++ {
196             l := indexLocal(p.local, i)
197             l.private = nil
198             for j := range l.shared {
199                 l.shared[j] = nil
200             }
201             l.shared = nil
202         }
203     }
204     allPools = []*Pool{}
205 }

Here, after the first poolCleanup() call, allPools is set to empty. Is the pool ever put back into allPools again?

162 func (p *Pool) pinSlow() *poolLocal {
163     // Retry under the mutex.
164     // Can not lock the mutex while pinned.
165     runtime_procUnpin()
166     allPoolsMu.Lock()
167     defer allPoolsMu.Unlock()
168     pid := runtime_procPin()
169     // poolCleanup won't be called while we are pinned.
170     s := p.localSize
171     l := p.local
172     if uintptr(pid) < s {
173         return indexLocal(l, pid)
174     }
175     if p.local == nil {
176         allPools = append(allPools, p)
177     }   
178     // If GOMAXPROCS changes between GCs, we re-allocate the array and lose the old one.
179     size := runtime.GOMAXPROCS(0)
180     local := make([]poolLocal, size)
181     atomic.StorePointer((*unsafe.Pointer)(&p.local), unsafe.Pointer(&local[0])) // store-release
182     atomic.StoreUintptr(&p.localSize, uintptr(size))                            // store-release
183     return &local[pid]
184 }         

Here, p is appended to allPools only when p.local is nil; but poolCleanup never resets p.local, so after the first cleanup the pool is never re-registered.

Yongjian Lian

Oct 22, 2014, 12:18:18 AM
to golan...@googlegroups.com
And here is my test code:

package main

import (
    "fmt"
    "runtime"
    "sync"
    "time"
)

func main() {
    p := sync.Pool{}

    for i := 0; i < 200000; i++ {
        a := make([]byte, 200)
        p.Put(a)
    }

    fmt.Println("!!!!!!!!!!put over")
    for i := 0; i < 10; i++ {
        runtime.GC()
        time.Sleep(time.Second)
    }
    time.Sleep(1 * time.Hour)
    // just hold the p reference
    tmp := p.Get().([]byte)
    fmt.Println(len(tmp))
}

And the output:
gc1(1): 14+7+189+8 us, 0 -> 0 MB, 18 (19-1) objects, 0/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc2(1): 10+6+86+7 us, 0 -> 0 MB, 628 (629-1) objects, 12/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc3(1): 8+6+199+7 us, 0 -> 0 MB, 711 (1268-557) objects, 37/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc4(1): 8+7+159+11 us, 0 -> 0 MB, 2025 (2598-573) objects, 38/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc5(1): 8+8+505+7 us, 0 -> 0 MB, 4134 (4713-579) objects, 53/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc6(1): 6+4+565+5 us, 0 -> 1 MB, 8196 (8779-583) objects, 85/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc7(1): 10+7+756+4 us, 1 -> 2 MB, 15462 (16050-588) objects, 144/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc8(1): 4+3+960+3 us, 2 -> 4 MB, 29925 (30515-590) objects, 253/1/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc9(1): 4+3+1444+3 us, 3 -> 7 MB, 52901 (53494-593) objects, 469/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc10(1): 3+3+2655+3 us, 6 -> 13 MB, 96520 (97116-596) objects, 809/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc11(1): 5+3+4023+5 us, 12 -> 25 MB, 185760 (186358-598) objects, 1456/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc12(1): 5+4+8020+5 us, 22 -> 45 MB, 324872 (325473-601) objects, 2778/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
!!!!!!!!!!put over
gc13(1): 4+2+9824+5 us, 40 -> 52 MB, 400315 (400919-604) objects, 4841/0/0 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc14(1): 7+4+19395+8 us, 49 -> 49 MB, 400315 (400925-610) objects, 5957/0/5956 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc15(1): 6+4+18528+8 us, 49 -> 49 MB, 400315 (400925-610) objects, 5958/0/5956 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc16(1): 6+4+19114+8 us, 49 -> 49 MB, 400315 (400925-610) objects, 5958/0/5956 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc17(1): 7+4+17502+7 us, 49 -> 49 MB, 400315 (400925-610) objects, 5958/0/5956 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc18(1): 7+4+18481+27 us, 49 -> 49 MB, 400315 (400925-610) objects, 5958/0/5956 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc19(1): 6+5+18261+6 us, 49 -> 49 MB, 400315 (400925-610) objects, 5958/0/5956 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc20(1): 9+7+18167+8 us, 49 -> 49 MB, 400315 (400925-610) objects, 5958/0/5956 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc21(1): 7+4+19135+22 us, 49 -> 49 MB, 400315 (400925-610) objects, 5958/0/5956 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
gc22(1): 6+5+18997+9 us, 49 -> 49 MB, 400315 (400925-610) objects, 5958/0/5956 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
scvg0: inuse: 50, idle: 3, sys: 53, released: 0, consumed: 53 (MB)
scvg1: inuse: 50, idle: 3, sys: 53, released: 0, consumed: 53 (MB)
gc23(1): 11+8+18890+15 us, 49 -> 49 MB, 400316 (400926-610) objects, 5958/0/5956 sweeps, 0(0) handoff, 0(0) steal, 0/0/0 yields
scvg2: GC forced
scvg2: inuse: 50, idle: 3, sys: 53, released: 0, consumed: 53 (MB)
scvg3: inuse: 50, idle: 3, sys: 53, released: 0, consumed: 53 (MB)

We can see that most objects remain in use.

On Wednesday, October 22, 2014 at 11:58:26 AM UTC+8, Yongjian Lian wrote:

Dave Cheney

Oct 22, 2014, 12:41:52 AM
to golan...@googlegroups.com
Hello,

Can you please explain the problem you are seeing? It is not clear from your post what you expect to happen.

Dave.

Yongjian Lian

Oct 22, 2014, 1:57:23 AM
to golan...@googlegroups.com
I put items into a sync.Pool, I keep no other references to them, and I never Get anything from the pool again. Shouldn't sync.Pool deallocate all the items it holds?
The problem in my project is that I reuse byte buffers via sync.Pool. Under high load the pool grows (memory usage is high), but when the load drops, memory usage does not come down. I guess the pool holds on to the objects anyway. Should I manually Get them from the pool and then release them?

My point is that, although I put all the objects back into the pool and never use them again, the pool always holds references to them, so they can never be garbage collected.

On Wednesday, October 22, 2014 at 12:41:52 PM UTC+8, Dave Cheney wrote:

Jan Mercl

Oct 22, 2014, 2:19:26 AM
to Yongjian Lian, golang-nuts
On Wed, Oct 22, 2014 at 7:57 AM, Yongjian Lian <yongji...@gmail.com> wrote:
> I put items into a sync.Pool, I keep no other references to them, and I
> never Get anything from the pool again. Shouldn't sync.Pool deallocate
> all the items it holds?
> The problem in my project is that I reuse byte buffers via sync.Pool.
> Under high load the pool grows (memory usage is high), but when the
> load drops, memory usage does not come down. I guess the pool holds on
> to the objects anyway. Should I manually Get them from the pool and
> then release them?
>
> My point is that, although I put all the objects back into the pool and
> never use them again, the pool always holds references to them, so they
> can never be garbage collected.

That's not the case ("always holds ...") and the program shown doesn't
prove the claim.

sync.Pool will purge unreachable items when "appropriate", where that
condition is intentionally not exactly defined ("Any item stored in
the Pool may be removed automatically at any time without
notification."). A program consuming no CPU and creating no memory
pressure is possibly an example that just doesn't make the cut, in
the hope that the pooled items might get reused later. I suggest
attempting to trigger the purging by allocating a lot of memory
instead of exercising the GC and sleeping for an hour. Then the pool
_may_ eventually free its items.

-j

Dave Cheney

Oct 22, 2014, 5:24:13 AM
to Jan Mercl, Yongjian Lian, golang-nuts
Wow. Please accept my apologies; it seems you were right to point
this out and I dismissed your comments.

https://codereview.appspot.com/162980043/

I'm very sorry.
> --
> You received this message because you are subscribed to a topic in the Google Groups "golang-nuts" group.
> To unsubscribe from this topic, visit https://groups.google.com/d/topic/golang-nuts/6vxv4xRPT38/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to golang-nuts...@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.

Dmitry Vyukov

Oct 22, 2014, 5:25:50 AM
to Yongjian Lian, golang-nuts
Hi,

Your observations are correct. There is just a plain bug in Pool logic
that somehow slipped through testing, release evaluation, user testing
and everything else.

I've mailed the change to fix it:
https://codereview.appspot.com/162980043
With a test that ensures that your use case works.

Thanks for reporting this!