sync.Pool


Dmitry Vyukov

May 20, 2013, 12:00:22 PM
to golang-dev
This is about adding a sync.Pool component; see
https://code.google.com/p/go/issues/detail?id=4720 for context. There
are a number of places in the standard library where we can use it, and
it looks generally useful.

The Pool should be simple, i.e. no LRU/time-based eviction, etc.
Users must be able to "close" resources, i.e. no silent discard of resources.
The Pool operates on interchangeable resources represented as interface{}.
The Pool prefers LIFO reuse for efficiency.

I have a proof-of-concept implementation based on per-P (GOMAXPROCS) caches:
https://codereview.appspot.com/7686043/
and an older one:
https://codereview.appspot.com/4928043/diff/12001/src/pkg/sync/cache.go

The proposed interface is:

type Pool struct {
...
}

14 // Get gets a resource from the cache.
15 // Returns nil on failure.
16 func (c *Cache) Get() interface{} {


29 // Put puts the object into the cache.
30 // Returns false if the case cache is full.
bradfitz 2013/03/11 00:41:23 case cache?
31 func (c *Cache) Put(v interface{}) bool {

Dmitry Vyukov

May 20, 2013, 12:04:35 PM
to golang-dev
Sorry, the last message sent unfinished.


This is about adding a sync.Pool component; see
https://code.google.com/p/go/issues/detail?id=4720 for context. There
are a number of places in the standard library where we can use it, and
it looks generally useful.

The Pool should be simple, i.e. no LRU/time-based eviction, etc.
Users must be able to "close" resources, i.e. no silent discard of resources.
The Pool operates on interchangeable resources represented as interface{}.
The Pool prefers LIFO reuse for efficiency.

I have a proof-of-concept implementation based on per-P (GOMAXPROCS) caches:
https://codereview.appspot.com/7686043/
and an older one:
https://codereview.appspot.com/4928043/diff/12001/src/pkg/sync/cache.go

The proposed interface is:

type Pool struct {
...
}

// Get gets a resource from the pool.
// Returns nil on failure.
func (p *Pool) Get() interface{}

// Put puts the resource into the pool.
// Returns false if the pool is full.
func (p *Pool) Put(v interface{}) bool

// SetCapacity sets the capacity of the pool.
// The method must not be invoked concurrently with other methods
// on the same pool object.
// Local capacity refers to a private per-CPU pool.
// Global capacity refers to a centralized shared pool.
func (p *Pool) SetCapacity(local, global int)

// Drain removes from the pool and returns all cached objects.
// The method must not be invoked concurrently with other methods
// on the same pool object.
func (p *Pool) Drain() []interface{}
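
For illustration, intended usage would look roughly like this (just a sketch against the proposed API, not real code; bufPool, getBuf and putBuf are made-up names, and it assumes a zero-value Pool is usable):

var bufPool sync.Pool

func getBuf() *bytes.Buffer {
    if v := bufPool.Get(); v != nil {
        return v.(*bytes.Buffer)
    }
    return new(bytes.Buffer) // pool empty: allocate a fresh one
}

func putBuf(b *bytes.Buffer) {
    b.Reset()
    if !bufPool.Put(b) {
        // pool full: the buffer is simply dropped for the GC to collect
    }
}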

Dmitry Vyukov

May 20, 2013, 12:11:40 PM
to golang-dev
SetCapacity is a bit hairy, but I don't know how to do it otherwise.
The problem is that several goroutines can request an arbitrary number of
resources on the same P, so a local cache of 1 won't do. Moreover, the
resources can "migrate" from one P to another (allocated by one
goroutine and freed by another, or a single goroutine migrates between
P's), so a global cache is necessary as well.
The capacity also depends on the "cost" of the resource, which we don't know.
We can set reasonable defaults for local/global capacity for
not-so-costly resources and reasonable migration (e.g. say local=4,
global=8 or something like that). But that may not be suitable for all
situations.
Brad, can you analyze some potential uses in std lib with regard to
how many resources we would like to cache there?

Dmitry Vyukov

May 20, 2013, 12:26:15 PM
to golang-dev
On Mon, May 20, 2013 at 8:04 PM, Dmitry Vyukov <dvy...@google.com> wrote:
Regarding the proclocal facility.
A long time ago Russ proposed to put it into a sync/proclocal package so
that users can use it to create similar components (e.g. a more
complex pool).
I don't think it's a good idea, at least not now.
It's quite subtle, and I don't see many other use cases for now.
I wanted to implement sync.Counter with it, but with the current
implementation it's not faster than a counter that uses atomic
operations (too many indirections and overheads).
I also have a prototype of sync.DistributedRWMutex, but it's quite
confusing in the presence of sync.RWMutex (which one should I choose?).
I would leave the proclocal interface private to sync for now.

Jan Mercl

May 20, 2013, 12:32:58 PM
to Dmitry Vyukov, golang-dev
On Mon, May 20, 2013 at 6:11 PM, Dmitry Vyukov <dvy...@google.com> wrote:

Alternatively, the Get method could be

func (c *Cache) Get(func(x interface{}) bool) interface{}

where the passed function:
- if nil -> ignored
- otherwise it's passed cached (candidate) items until it returns true.
Then that item is returned from Get. If the function never returns
true, Get returns interface{}(nil).

The idea is that cached items can have properties, and not every item
has the properties which the client of Get needs. For example, think of a
Cache of []byte: the client may need only buffers of some minimal len
or cap. Without such a function the client would have to repeat .Get
until satisfied, then return all the below-the-cut buffers back.
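
For the []byte example, the caller side could then look like this (just a sketch; n stands for whatever minimum capacity the caller needs):

buf, _ := c.Get(func(x interface{}) bool {
    b, ok := x.([]byte)
    return ok && cap(b) >= n // accept only buffers that are big enough
}).([]byte)
if buf == nil {
    buf = make([]byte, 0, n) // nothing suitable was cached
}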

Non-technical: I prefer Cache, the originally proposed name, instead of Pool.

-j

Dmitry Vyukov

May 20, 2013, 12:39:38 PM
to Jan Mercl, golang-dev
In the issue tylor@ wrote:
"Please call this construct a Pool (or FixedPool) rather than a Cache
if it comes into existence. I can already imagine myself explaining to
new people over and over again on #go-nuts that sync.Cache is actually
a pool, etc etc..."

I also find it somewhat confusing. I think Cache is a more overloaded
term than Pool; there are LRUCaches, TimedCaches, etc.

Dmitry Vyukov

May 20, 2013, 12:40:54 PM
to Jan Mercl, golang-dev
On Mon, May 20, 2013 at 8:32 PM, Jan Mercl <0xj...@gmail.com> wrote:
That's what I meant by
"The Pool should be simple, i.e. no LRU/time-based eviction, etc"
;)

There are thousands of ways to make it arbitrarily complex.

Brad Fitzpatrick

May 20, 2013, 1:35:58 PM
to Dmitry Vyukov, golang-dev
I'm nervous about having anything more than Get and Put for now.

Also, if SetCapacity and Drain can't be used concurrently, how can I ever use it without an RWMutex around all the Get/Put calls? That somewhat negates the point of the whole thing in the first place.

If we're going to have Drain, it should be allowed to be concurrent.

But do we need Drain? A Get loop could work too. The pools should be small enough that a loop of maxprocs would be fine.

I would only do the processor-local thing, and not a global pool.  A global pool could be done by callers elsewhere.  The per-P part in the runtime is the minimal part I want to see from this.





Dmitry Vyukov

May 20, 2013, 1:49:22 PM
to Brad Fitzpatrick, golang-dev
On Mon, May 20, 2013 at 9:35 PM, Brad Fitzpatrick <brad...@golang.org> wrote:
> I'm nervous about having anything more than Get and Put for now.

Good point.

> Also, if SetCapacity and Drain can't be used concurrently, how can I ever
> use it without a RWMutex around all the Get/Put calls, which somewhat
> negates the point of whole thing in the first place.

The SetCapacity is intended to be used during initialization, before
any Puts and Gets.
The Drain is intended to be used during shutdown, e.g. to close DB
connections or something.
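
To make that concrete, a sketch of the intended pattern (connPool and Conn are made-up names):

var connPool sync.Pool

func init() {
    connPool.SetCapacity(2, 8) // during initialization, before any Get/Put
}

func closePooledConns() { // during shutdown
    for _, v := range connPool.Drain() {
        v.(*Conn).Close() // explicit close instead of a silent discard
    }
}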


> If we're going to have Drain, it should be allowed to be concurrent.
>
> But do we need Drain? A Get loop could work too. The pools should be small
> enough that a loop of maxprocs would be fine.

Get can't get resources cached in other P's. That's the point: other
P's can work with their resources w/o any synchronization.
One local resource is not enough even for the simplest case of Sprintf()
calling v.String(), which in turn calls Sprintf(). Not to mention more
recursive things and long resource usage, when a goroutine migrates
with non-zero probability.
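
Concretely (a made-up example, but it is exactly the fmt pattern):

type point struct{ x, y int }

func (p point) String() string {
    // Runs while the outer Sprintf below still holds its own buffer,
    // so two buffers are live at once on the same P.
    return fmt.Sprintf("(%d, %d)", p.x, p.y)
}

func describe(p point) string {
    return fmt.Sprintf("point %v", p) // %v calls p.String()
}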

> I would only do the processor-local thing, and not a global pool. A global
> pool could be done by callers elsewhere. The per-P part in the runtime is
> the minimal part I want to see from this.

The global pool can be done by callers, but it's what you always
want (except in synthetic benchmarks).
In the end, local/global is no more than an implementation detail; the
only place where it is exposed is SetCapacity(). Probably we can
make SetCapacity() accept a single number, and then figure out the
local/global settings from it.

Brad Fitzpatrick

May 20, 2013, 1:54:27 PM
to Dmitry Vyukov, golang-dev
On Mon, May 20, 2013 at 10:49 AM, Dmitry Vyukov <dvy...@google.com> wrote:
On Mon, May 20, 2013 at 9:35 PM, Brad Fitzpatrick <brad...@golang.org> wrote:
> I'm nervous about having anything more than Get and Put for now.

Good point.

> Also, if SetCapacity and Drain can't be used concurrently, how can I ever
> use it without a RWMutex around all the Get/Put calls, which somewhat
> negates the point of whole thing in the first place.

The SetCapacity is intended to be used during initialization, before
any Puts and Gets.

That makes sense, but then it could just be a constructor option:

var pool = sync.NewPool(-1) // automatic

var pool = sync.NewPool(10)

Or like fmt:

var pool = sync.NewPool(8, func() interface{} { return new(T) })
 
The Drain is intended to be used during shutdown, e.g. to close DB
connections or something.

Perhaps. I don't see it being used much, though.

Could it be concurrent if it stops the world at least? If it's used rarely, that shouldn't matter.
 
> If we're going to have Drain, it should be allowed to be concurrent.
>
> But do we need Drain? A Get loop could work too. The pools should be small
> enough that a loop of maxprocs would be fine.

Get can't get resources cached in other P's.

Yeah, of course. I was thinking more about a global cache still. But it could get other P's stuff if it stops the world, right?
 
That's the point, other
P's can work with their resources w/o any synchronization.
1 local resource is not enough even for the simplest case of Sprintf()
calls v.String() which in turn calls Sprintf(). Not saying about more
recursive things and long resource usage, when a goroutine migrates
with non-zero probability.

It'd be nice if the per-P value could be learned based on access patterns. It seems we have enough knobs already.

minux

May 20, 2013, 1:57:38 PM
to Dmitry Vyukov, Brad Fitzpatrick, golang-dev
On Tue, May 21, 2013 at 1:49 AM, Dmitry Vyukov <dvy...@google.com> wrote:
> Also, if SetCapacity and Drain can't be used concurrently, how can I ever
> use it without a RWMutex around all the Get/Put calls, which somewhat
> negates the point of whole thing in the first place.

The SetCapacity is intended to be used during initialization, before
any Puts and Gets.
The Drain is intended to be used during shutdown, e.g. to close DB
connections or something.
If you only intend SetCapacity to be used during initialization, why not
make it a parameter of a sync.NewPool construction function?

Dmitry Vyukov

May 20, 2013, 2:02:27 PM
to Brad Fitzpatrick, golang-dev
On Mon, May 20, 2013 at 9:54 PM, Brad Fitzpatrick <brad...@golang.org> wrote:
> On Mon, May 20, 2013 at 10:49 AM, Dmitry Vyukov <dvy...@google.com> wrote:
>>
>> On Mon, May 20, 2013 at 9:35 PM, Brad Fitzpatrick <brad...@golang.org>
>> wrote:
>> > I'm nervous about having anything more than Get and Put for now.
>>
>> Good point.
>>
>> > Also, if SetCapacity and Drain can't be used concurrently, how can I
>> > ever
>> > use it without a RWMutex around all the Get/Put calls, which somewhat
>> > negates the point of whole thing in the first place.
>>
>> The SetCapacity is intended to be used during initialization, before
>> any Puts and Gets.
>
>
> That makes sense, but then it could just be a constructor option:
>
> var pool = sync.NewPool(-1) // automatic
>
> var pool = sync.NewPool(10)
>
> Or like fmt:
>
> var pool = sync.NewPool(8, func() interface{} { return new(T) })


This interface has a very long history. I remember Russ suggested
"zero initialization", like what we have for most sync components, e.g.
var mu sync.Mutex
I agree the explicit constructor function is much more flexible.
I think this component won't be used all over the place, so it's fine
to have the constructor.



>> The Drain is intended to be used during shutdown, e.g. to close DB
>> connections or something.
>
>
> Perhaps. I don't see it being used much, though.

The alternative would be to use SetFinalizer(). I think we don't want
to encourage that.
If we don't have Drain(), then we can make Put() return nothing
instead of bool; it can just silently discard the resource.


> Could it be concurrent if it stops the world at least? If it's used rarely,
> that shouldn't matter.
>
>>
>> > If we're going to have Drain, it should be allowed to be concurrent.
>> >
>> > But do we need Drain? A Get loop could work too. The pools should be
>> > small
>> > enough that a loop of maxprocs would be fine.
>>
>> Get can't get resources cached in other P's.
>
>
> Yeah, of course. I was thinking more about a global cache still. But it
> could get other P's stuff if it stops the world, right?


Yes, it's possible to make it work with stop the world. But how will
it be used then?



>> That's the point, other
>> P's can work with their resources w/o any synchronization.
>> 1 local resource is not enough even for the simplest case of Sprintf()
>> calls v.String() which in turn calls Sprintf(). Not saying about more
>> recursive things and long resource usage, when a goroutine migrates
>> with non-zero probability.
>
>
> It'd be nice if the per-P value could be learned based on access patterns.
> It seems we have enough knobs already.

I need to think about this.

Ian Lance Taylor

May 20, 2013, 2:02:57 PM
to Dmitry Vyukov, Brad Fitzpatrick, golang-dev
On Mon, May 20, 2013 at 10:49 AM, Dmitry Vyukov <dvy...@google.com> wrote:
>
> The Drain is intended to be used during shutdown, e.g. to close DB
> connections or something.

That seems to me to be a somewhat exotic case. And I can do it
anyhow, by keeping a separate list of connections. That is, each time
Get() returns nil, I create a new connection, add it to my separate
list, and then Put the new connection.

Can we say

type Pool struct {
Capacity int
...
}

and make sure the 0 value for Capacity does something sensible? Then
clients can write

var myPool sync.Pool

and get reasonable behaviour, and they can write

var myPool = sync.Pool{Capacity: 10}

for more control.

Ian

Gustavo Niemeyer

May 20, 2013, 2:44:52 PM
to Dmitry Vyukov, golang-dev
Proposal looks nice. A minor suggestion:

On Mon, May 20, 2013 at 1:04 PM, Dmitry Vyukov <dvy...@google.com> wrote:
> // Get gets a resource from the pool.
> // Returns nil on failure.
> func (p *Pool) Get() interface{}

Get methods in most cases borrow a reference to the value without
further consequences. I suggest using a method name such as Pop or
Take, to make it slightly more clear that this is in fact stealing the
reference from the pool.


gustavo @ http://niemeyer.net

Dmitry Vyukov

May 21, 2013, 2:29:09 AM
to Brad Fitzpatrick, golang-dev
I think we can do automatic tuning of capacity, but then the Pool will need
to silently discard resources (e.g. it cached too many, and then decided
to discard some).
With automatic tuning we can have just:
func (p *Pool) Put(v interface{})
func (p *Pool) Take() interface{}

But then I suspect it won't be suitable for anything other than
"reducing the number of allocations and garbage". We surely have a dozen
places in the std lib where we can use it. But I am not sure about users
of the std lib, because such a Pool does not provide any new functionality;
it's essentially a replacement for "malloc". So it will only be used
during an optimization cycle, for optimization.

Dmitry Vyukov

May 21, 2013, 2:31:21 AM
to Brad Fitzpatrick, golang-dev
Here is a simple prototype:
https://codereview.appspot.com/9611046

Do we want such component in public interface?

Ian Lance Taylor

May 21, 2013, 9:30:10 AM
to Dmitry Vyukov, Brad Fitzpatrick, golang-dev
On Mon, May 20, 2013 at 11:29 PM, Dmitry Vyukov <dvy...@google.com> wrote:
>
> I think we can do automatic tuning of capacity, but the Pool will need
> to silently discard resources (e.g. cached too much, and then decided
> to discard some).

As I understand the proposed semantics, the interface is somewhat
leaky anyhow, in the sense that Take can return nil even though you have
Put values into the pool. A program that uses cgo could easily lose
track of values stored in the pool entirely, if some thread winds up
dedicated to running C code. So personally I think it's OK if the
pool can drop values.


> With automatic tuning we can have just:
> func (p *Pool) Put(v interface{})
> func (p *Pool) Take() interface{}
>
> But then I suspect it won't be suitable for anything other than
> "reduce number of allocations and garbage". We sure have a dozen of
> places in std lib where we can use it. But I am not sure about users
> of std lib, because such Pool does not provide any new functionality,
> it's essentially a replacement for "malloc". So it's only will be used
> during optimization cycle for optimization.

Why would this ever be used for anything other than optimization?
What else did you have in mind?

Ian

Dmitry Vyukov

May 21, 2013, 9:49:46 AM
to Ian Lance Taylor, Brad Fitzpatrick, golang-dev
On Tue, May 21, 2013 at 5:30 PM, Ian Lance Taylor <ia...@golang.org> wrote:
> On Mon, May 20, 2013 at 11:29 PM, Dmitry Vyukov <dvy...@google.com> wrote:
>>
>> I think we can do automatic tuning of capacity, but the Pool will need
>> to silently discard resources (e.g. cached too much, and then decided
>> to discard some).
>
> As I understand the proposed semantics, the interface is somewhat
> leaky anyhow, in the sense Take can return nil even though you have
> Put values into the pool. A program that uses cgo could easily lose
> track of values stored in the pool entirely, if some thread winds up
> dedicated to running C code. So personally I think it's OK if the
> pool can drop values.

Cgo or syscall won't have that effect because the caching is per-P, not per-M.
But of course it's still possible if some P's are not used for a long time.


>> With automatic tuning we can have just:
>> func (p *Pool) Put(v interface{})
>> func (p *Pool) Take() interface{}
>>
>> But then I suspect it won't be suitable for anything other than
>> "reduce number of allocations and garbage". We sure have a dozen of
>> places in std lib where we can use it. But I am not sure about users
>> of std lib, because such Pool does not provide any new functionality,
>> it's essentially a replacement for "malloc". So it's only will be used
>> during optimization cycle for optimization.
>
> Why would this ever be used for anything other than optimization?
> What else did you have in mind?

I think the need for other types of Pools/Caches is better understood
by users. For example, db connection pooling would require
something like a non-losing FIFO pool with a hard limit on the number of
resources and a blocking Take(). Or an expensive-to-compute resource pool
would require something like a concurrent timed map. This Pool, in
contrast, can be used only as a malloc optimization.
Also, if we ever provide other types of pools/caches, we must not end
up with confusing naming, e.g. Pool vs FIFOPool.

Ian Lance Taylor

May 21, 2013, 10:04:56 AM
to Dmitry Vyukov, Brad Fitzpatrick, golang-dev
On Tue, May 21, 2013 at 6:49 AM, Dmitry Vyukov <dvy...@google.com> wrote:
>
> I think that need for other types of Pools/Caches is better understood
> by users. For example, db connection pooling that would require
> something line non-losing FIFO pool with hard limit on resource number
> and blocking Take(). Or expensive-to-compute resource pool that would
> require something line concurrent timed map. While this Pool can be
> used only as malloc optimization.

It seems to me that we would want to use entirely different
implementations for those.

> Also, if we ever provide other types of pools/caches, we must not end
> up with confusing naming, e.g. Pool vs FIFOPool.

That's a good reason to pick a good package name. E.g., I'm not sure
why this Pool should be in the sync package.

Ian

Dmitry Vyukov

May 21, 2013, 10:07:42 AM
to Ian Lance Taylor, Brad Fitzpatrick, golang-dev
Any suggestions on naming?

Sameer Ajmani

May 21, 2013, 10:52:58 AM
to Dmitry Vyukov, Ian Lance Taylor, Brad Fitzpatrick, golang-dev
If this pool is just for managing allocations (as opposed to DB connections, say), then package alloc, type Pool.

Ingo Oeser

May 21, 2013, 12:36:36 PM
to golan...@googlegroups.com
One problem completely unsolved until now is the type safety of pools.

e.g. I can do pool.Put(int(3)) and pool.Put("foo") and have to use g := pool.Take().(int) to extract it,
but it will fail on the string I just accidentally put into it.

May I suggest at least checking the type on Put and making the pools typed by the type of a value supplied on pool creation?
Any other ideas how to solve this riddle?

Is this just another case of "doctor, it hurts..."?
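
For example, a thin typed wrapper in user code would already give a typed API without changing the pool itself -- a rough sketch, assuming the Put/Take interface discussed above:

type bufPool struct{ p sync.Pool }

func (bp *bufPool) Put(b *bytes.Buffer) { bp.p.Put(b) }

func (bp *bufPool) Take() *bytes.Buffer {
    if v := bp.p.Take(); v != nil {
        return v.(*bytes.Buffer) // only *bytes.Buffer ever goes in
    }
    return new(bytes.Buffer)
}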

Brad Fitzpatrick

May 21, 2013, 12:48:34 PM
to Ingo Oeser, golang-dev
On Tue, May 21, 2013 at 9:36 AM, Ingo Oeser <night...@googlemail.com> wrote:
One problem completely unsolved until now is the type safety of pools.

e.g. I can do pool.Put(int(3)) and pool.Put("foo") and have to use g := pool.Take().(int) to extract it, 
but it will fail on the string I just put accidentally into it.

May I suggest at least checking the type on put and making the pools typed by the type of a value supplied on pool creation?
Any other ideas how to solve this riddle?

Is this a just another case of "doctor it hurts..."?

Yes.

It's as type safe as the language permits.

By your argument, sync/atomic and container/list are also not safe, and ....

Ian Lance Taylor

May 21, 2013, 1:56:33 PM
to Ingo Oeser, golan...@googlegroups.com
On Tue, May 21, 2013 at 9:36 AM, Ingo Oeser <night...@googlemail.com> wrote:
> One problem completely unsolved until now is the type safety of pools.
>
> e.g. I can do pool.Put(int(3)) and pool.Put("foo") and have to use g :=
> pool.Take().(int) to extract it,
> but it will fail on the string I just put accidentally into it.
>
> May I suggest at least checking the type on put and making the pools typed
> by the type of a value supplied on pool creation?

While it is possible to do that, I don't think that is appropriate for
this relatively low level API.

Ian

Brad Fitzpatrick

May 21, 2013, 3:39:24 PM
to Sameer Ajmani, Dmitry Vyukov, Ian Lance Taylor, golang-dev
On Tue, May 21, 2013 at 7:52 AM, Sameer Ajmani <sam...@golang.org> wrote:



On Tue, May 21, 2013 at 10:07 AM, Dmitry Vyukov <dvy...@google.com> wrote:
On Tue, May 21, 2013 at 6:04 PM, Ian Lance Taylor <ia...@golang.org> wrote:
> On Tue, May 21, 2013 at 6:49 AM, Dmitry Vyukov <dvy...@google.com> wrote:
>>
>> I think that need for other types of Pools/Caches is better understood
>> by users. For example, db connection pooling that would require
>> something line non-losing FIFO pool with hard limit on resource number
>> and blocking Take(). Or expensive-to-compute resource pool that would
>> require something line concurrent timed map. While this Pool can be
>> used only as malloc optimization.
>
> It seems to me that we would want to use entirely different
> implementations for those.
>
>> Also, if we ever provide other types of pools/caches, we must not end
>> up with confusing naming, e.g. Pool vs FIFOPool.
>
> That's a good reason to pick a good package name.  E.g., I'm not sure
> why this Pool should be in the sync package.

Any suggestions on naming?

If this pool is just for managing allocations (as opposed to DB connections, say), then package alloc, type Pool.

I like "alloc" as a package name, but top-level seems to promote it too heavily. I'd like to put it under something else; "runtime/alloc" is the only option, but it's only a runtime thing in that it's partially implemented by the runtime. Then again, what isn't? OTOH, pprof is under runtime too.

Then the whatever/alloc package can have all allocation-related stuff, like the BytePool Get(n int) []byte / Put([]byte), etc. I'd prefer the proc-local mechanism be restricted to just being a processor-local thing (perhaps with an ugly name), and then others can build a local + global pool/cache on top of it, perhaps also in package alloc.
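
Something like this, say (purely illustrative, not from any CL; it ignores size classes and assumes some underlying Put/Take cache):

type putTaker interface {
    Put(v interface{})
    Take() interface{}
}

type BytePool struct {
    c putTaker // the processor-local cache, whatever it ends up being called
}

func (p *BytePool) Get(n int) []byte {
    if v := p.c.Take(); v != nil {
        if b := v.([]byte); cap(b) >= n {
            return b[:n]
        }
        // cached buffer too small: drop it and allocate a fresh one
    }
    return make([]byte, n)
}

func (p *BytePool) Put(b []byte) {
    p.c.Put(b[:0]) // keep the capacity, forget the contents
}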

Ian Lance Taylor

May 21, 2013, 4:03:58 PM
to Brad Fitzpatrick, Sameer Ajmani, Dmitry Vyukov, golang-dev
I suggest container/alloc. bradfitz and gri are here with me and they
also think that works.

> Then the whatever/alloc package can have all allocation-related stuff, like
> the BytePool Get(n int) []byte / Put([]byte), etc. I'd prefer the
> process-local mechanism be restricted to just being a processor-local thing
> (perhaps with an ugly name) and then others can build a local + global
> pool/cache on top of it, perhaps also in package alloc.

The processor local thing shouldn't be called Pool, because it is not
a pool. It is a leaky cache of values. Could we call it LocalCache?

Ian

Gustavo Niemeyer

May 21, 2013, 4:14:57 PM
to Ian Lance Taylor, Brad Fitzpatrick, Sameer Ajmani, Dmitry Vyukov, golang-dev
On Tue, May 21, 2013 at 5:03 PM, Ian Lance Taylor <ia...@golang.org> wrote:
> I suggest container/alloc. bradfitz and gri are here with me and they
> also think that works.

What about container/cache or similar? This is a matter of preference,
so I won't really argue, but I just notice alloc couldn't possibly be
more generic. Anything under container could be inside alloc.


gustavo @ http://niemeyer.net

Ian Lance Taylor

May 21, 2013, 5:38:46 PM
to Gustavo Niemeyer, Brad Fitzpatrick, Sameer Ajmani, Dmitry Vyukov, golang-dev
I don't agree. Anything that is concerned with allocating memory
could be under container/alloc. The existing packages under container
could not really be under alloc. At least, so it seems to me.

Ian

Dmitry Vyukov

May 21, 2013, 11:58:24 PM
to Ingo Oeser, golang-dev
On Tue, May 21, 2013 at 8:36 PM, Ingo Oeser <night...@googlemail.com> wrote:
> One problem completely unsolved until now is the type safety of pools.
>
> e.g. I can do pool.Put(int(3)) and pool.Put("foo") and have to use g :=
> pool.Take().(int) to extract it,
> but it will fail on the string I just put accidentally into it.
>
> May I suggest at least checking the type on put and making the pools typed
> by the type of a value supplied on pool creation?


How will it help? In either case you get a runtime panic. It should be
easy to debug and fix in both cases.

Dmitry Vyukov

May 22, 2013, 12:04:25 AM
to Brad Fitzpatrick, Sameer Ajmani, Ian Lance Taylor, golang-dev
On Tue, May 21, 2013 at 11:39 PM, Brad Fitzpatrick <brad...@golang.org> wrote:

I can't understand why you want it to be proc-local only. It's no
more than an implementation detail. On the contrary, it will be
global-only in the first version; and it will become global-only if we
decide to abandon the proc-local stuff in the runtime for some reason, or it
can be global-only in some other implementation of the std lib.
In some cases a proc-local-only implementation would only add overhead
and provide no caching at all.

Dmitry Vyukov

May 22, 2013, 4:06:01 AM
to Ian Lance Taylor, Brad Fitzpatrick, Sameer Ajmani, golang-dev
"Local" is no more than an implementation detail. We do not have
local_new, local_make and local_go, while they are actually all
"local".

Dmitry Vyukov

May 22, 2013, 4:23:00 AM
to golang-dev
On Mon, May 20, 2013 at 8:00 PM, Dmitry Vyukov <dvy...@google.com> wrote:
> This is about adding sync.Pool component, see
> https://code.google.com/p/go/issues/detail?id=4720 for context. There
> is a number of places in the standard library where we can use it, and
> it looks generally useful.
>
> The Pool should be simple, i.e. no LRU/time-based eviction, etc.
> User must be able to "close" resources, i.e. no silent discard of resources.
> The Pool operates with interchangeable resources represented as interface{}.
> The Pool prefers LIFO reuse for efficiency.
>
> I have a proof-of-concept implementation based on per-P (GOMAXPROCS) caches:
> https://codereview.appspot.com/7686043/
> and an older one:
> https://codereview.appspot.com/4928043/diff/12001/src/pkg/sync/cache.go
>
> The proposed interface is:
>
> type Pool struct {
>   ...
> }
>
> 14 // Get gets a resource from the cache.
> 15 // Returns nil on failure.
> 16 func (c *Cache) Get() interface{} {
>
>
> 29 // Put puts the object into the cache.
> 30 // Returns false if the case cache is full.
> bradfitz 2013/03/11 00:41:23 case cache?
> 31 func (c *Cache) Put(v interface{}) bool {



What about this?



PACKAGE DOCUMENTATION

package alloc
    import "container/alloc"

    Package alloc provides helper components for memory management.


TYPES

type Cache struct {
    // contains filtered or unexported fields
}
    A Cache caches interchangeable objects.


func (c *Cache) Put(v interface{})
    Put adds the object v into the cache. The object can be discarded, or
    evicted from the cache at a later point.

func (c *Cache) Take() interface{}
    Take removes and returns one of the objects previously put into the
    cache, or nil if the cache is empty.




Ian Lance Taylor

May 22, 2013, 9:45:41 AM
to Dmitry Vyukov, golang-dev
LGTM

Dmitry Vyukov

May 22, 2013, 10:18:27 AM
to Ian Lance Taylor, golang-dev
Mailed https://codereview.appspot.com/9648043 to make the discussion more concrete.

Dmitry Vyukov

Jun 3, 2013, 10:34:11 AM
to golang-dev
The current interface is below.
We need a high-level decision on the package.


PACKAGE DOCUMENTATION

package alloc
    import "container/alloc"

    Package alloc provides helper components for memory management.


TYPES

type Cache struct {

    // New specifies an optional function to be used to create new entries
    // when the cache is empty.
    // It may be called from multiple concurrent goroutines
    // and must not be changed concurrently with calls to Take.
    New func() interface{}

    // contains filtered or unexported fields
}
    A Cache caches interchangeable objects.


func (c *Cache) Put(v interface{}) bool
    Put adds the object v into the cache. Returns true if the object was
    added. The object may be deleted by the Cache at any time.


func (c *Cache) Take() interface{}
    Take removes an item from the cache and returns it. If the cache is
    empty and c.New is set, Take returns a new item allocated by calling
    c.New. If the cache is empty and c.New is nil, Take returns nil.
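
Intended usage would then be something like this (a sketch; bufCache and render are made-up names):

var bufCache = alloc.Cache{
    New: func() interface{} { return new(bytes.Buffer) },
}

func render(s string) string {
    b := bufCache.Take().(*bytes.Buffer) // never nil, because New is set
    b.Reset()
    b.WriteString(s)
    out := b.String()
    bufCache.Put(b) // if Put reports false, the buffer is just garbage
    return out
}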




Russ Cox

Jun 3, 2013, 10:38:53 AM
to Dmitry Vyukov, golang-dev
This description is too vague. It doesn't tell me anything about why this package exists. Is there some kind of per-CPU affinity going on? When should I use it? When should I not use it? Also, 'package alloc' seems too vague and nesting it under container seems inappropriate. It's not like a heap, list, or ring.

Russ

Ian Lance Taylor

Jun 3, 2013, 10:45:52 AM
to Russ Cox, Dmitry Vyukov, golang-dev
On Mon, Jun 3, 2013 at 7:38 AM, Russ Cox <r...@golang.org> wrote:
> Also, 'package alloc' seems too vague
> and nesting it under container seems inappropriate. It's not like a heap,
> list, or ring.

We've kicked around the package name quite a bit already and
container/alloc was the best we've come up with so far. Do you have
any alternative suggestions?

Ian

Gustavo Niemeyer

Jun 3, 2013, 11:19:02 AM
to Ian Lance Taylor, Russ Cox, Dmitry Vyukov, golang-dev
container/cache
container/pool
container/shelf


gustavo @ http://niemeyer.net

Kevin Gillette

Jun 3, 2013, 1:24:00 PM
to golan...@googlegroups.com
If you call the type Cache, users will assume it contains heterogeneous values and wonder why Put and Take don't accept a string key, but they will accept that it's leaky.

If you call it Pool, users will assume it contains homogeneous values, but will assume it doesn't leak.

Based on semantics, LeakyPool may be better, but if you call the package container/subtle, you could get away with Cache or Pool.

Russ Cox

Jun 3, 2013, 4:52:51 PM
to Kevin Gillette, golan...@googlegroups.com
The most important problem with this proposal is that the documentation does not explain what the new Cache type is for.

It appears that the motivation is to provide some kind of lightweight per-CPU caching, but that doesn't come out in the docs, nor the implementation. So I'm quite confused about the expected generality of the type.

I'm happy to talk about names, but I can't do that until I know what the goal is.

Dmitry Vyukov

Jun 4, 2013, 7:42:46 AM
to Russ Cox, Kevin Gillette, golang-dev
Sorry, I thought that the referenced issue 4720 provides the context:
-----
This bug is about sync.Cache, a cache of interchangeable fixed-size items.
The idea is that a sync.Cache might replace fmt's internal cache, and it
might be used for a cache of fixed-size buffers for io.Copy.
-----

An important user-visible property of the Cache is that elements can
be silently discarded at any time. This allows for capacity
auto-tuning, but at the same time makes it usable only for "memory"
resources (i.e. not database connections).
The interface with explicit capacity management was declared too complex.

From an implementation point of view, yes, the final goal is to use
per-GOMAXPROCS caching. But I am not sure we want to spell that out in the
user docs. A better term for users might be fast/efficient/lightweight.

Russ Cox

Jun 14, 2013, 12:13:38 PM
to Dmitry Vyukov, Kevin Gillette, golang-dev
[Sorry, reply all failure moved two messages off list; they're below.]

I'm not comfortable with the precedent set by the name container/alloc. So far the container tree has been about data structures, and "alloc" doesn't fit that. And almost all containers allocate. container/cache would be better, but I fear it would be a dumping ground.

It's also a bit strange to put it in sync, but it is at least synchronized (doesn't require locks), and it does seem more related to the things there, in that eventually it will have a tight coupling with the runtime, at least if I understand the model correctly.

My suggestion is that we put it in sync for now, but with a note to revisit before Go 1.2 is locked down.

package sync

// A LocalCache is a cache holding a set of reusable, interchangeable objects.
// The implementation limits the cache size to at most one cached object per CPU
// being used simultaneously by the program, so it is roughly a CPU-local cache.
// There is no support for flushing the cache or for finalization of cached objects
// that are discarded.
type LocalCache struct {
}

// Get removes and returns an object from the cache.
// It returns nil when no object is available.
func (c *LocalCache) Get() interface{}

// Put adds x to the cache; the value x may then be
// returned by a future call to Get or may be discarded.
func (c *LocalCache) Put(x interface{})

With all that, though, there's still the question of the implementation. It looks like the one in the CL is just a placeholder. Where's the real one? I'd rather do it in one step than two, especially since the real implementation may raise questions or problems that influence the API.

Thanks.
Russ

---------- Forwarded message ----------
From: Russ Cox <r...@golang.org>
Date: Tue, Jun 4, 2013 at 10:06 AM
Subject: Re: [golang-dev] Re: sync.Pool
To: Dmitry Vyukov <dvy...@google.com>
Cc: Russ Cox <r...@golang.org>, Kevin Gillette <extempor...@gmail.com>


How do you plan to do the auto-tuning? We have to tell the potential users enough to make a decision about whether it's right for them.

Russ


---------- Forwarded message ----------
From: Dmitry Vyukov <dvy...@google.com>
Date: Tue, Jun 4, 2013 at 10:18 AM
Subject: Re: [golang-dev] Re: sync.Pool
To: Russ Cox <r...@golang.org>
Cc: Kevin Gillette <extempor...@gmail.com>


On Tue, Jun 4, 2013 at 6:06 PM, Russ Cox <r...@golang.org> wrote:
> How do you plan to do the auto-tuning? We have to tell the potential users enough to make a decision about whether it's right for them.


A per-GOMAXPROCS hard limit on cached objects.
On Put, add the object up to the limit.
Every N-th operation, check whether some objects are consistently unused
(based on a low watermark) and release part of the unused objects.

So there is a hard limit on the number of cached objects -- k*GOMAXPROCS.
But I think it won't be useful for very "expensive" resources, e.g.
1GB memory chunks, because the caching is still somewhat unpredictable
and opaque.
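
A rough sketch of that per-P bookkeeping (all names and numbers here are invented, not taken from the CL):

type perP struct {
    objs []interface{} // LIFO stack of cached objects
    low  int           // fewest cached objects seen since the last check
    ops  int
}

const (
    maxPerP       = 16  // the "k" in k*GOMAXPROCS
    checkInterval = 128 // "every N-th operation"
)

func (c *perP) put(x interface{}) {
    if len(c.objs) < maxPerP {
        c.objs = append(c.objs, x)
    } // else: silently drop x
    c.tick()
}

func (c *perP) take() interface{} {
    var x interface{}
    if n := len(c.objs); n > 0 {
        x = c.objs[n-1]
        c.objs = c.objs[:n-1]
        if len(c.objs) < c.low {
            c.low = len(c.objs)
        }
    }
    c.tick()
    return x
}

func (c *perP) tick() {
    c.ops++
    if c.ops%checkInterval != 0 {
        return
    }
    // The bottom c.low objects went untouched for the whole interval;
    // release half of them (they sit at the start of the LIFO stack).
    c.objs = append([]interface{}(nil), c.objs[c.low/2:]...)
    c.low = len(c.objs)
}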


Dmitry Vyukov

Jun 14, 2013, 12:34:55 PM
to Russ Cox, Kevin Gillette, golang-dev
On Fri, Jun 14, 2013 at 8:13 PM, Russ Cox <r...@golang.org> wrote:
> [Sorry, reply all failure moved two messages off list; they're below.]
>
> I'm not comfortable with the precedent set by the name container/alloc. So
> far the container tree has been about data structures, and "alloc" doesn't
> fit that. And almost all containers allocate. container/cache would be
> better, but I fear it would be a dumping ground.
>
> It's also a bit strange to put it in sync, but it is at least synchronized
> (doesn't require locks), and it does seem more related to the things there,
> in that eventually it will have a tight coupling with the runtime, at least
> if I understand the model correctly.
>
> My suggestion is that we put it in sync for now, but with a note to revisit
> before Go 1.2 is locked down.
>
> package sync
>
> // A LocalCache is a cache holding a set of reusable, interchangeable
> objects.
> // The implementation limits the cache size to at most one cached object per
> CPU


One object per CPU is not enough even for the trivial case of a Stringer
calling String() on a subobject.
+ a goroutine can be descheduled while holding the object.
+ rpc and http need much larger caches
+ as far as I understand, regexp wants more as well

It's not intended for extremely expensive objects (e.g. 1GB). If the
objects are, say, 4k and the max cache size is, say, 16 -- 64k per CPU
looks like a reasonable number.



> // being used simultaneously by the program, so it is roughly a CPU-local
> cache.
> // There is no support for flushing the cache or for finalization of cached
> objects
> // that are discarded.
> type LocalCache struct {
> }
>
> // Get removes and returns an object from the cache.
> // It returns nil when no object is available.
> func (c *LocalCache) Get() interface{}
>
> // Put adds x to the cache; the value x may then be
> // returned by a future call to Get or may be discarded.
> func (c *LocalCache) Put(x interface{})


Brad? You wanted New ctor and bool from Put().


> With all that, though, there's still the question of the implementation. It
> looks like the one in the CL is just a placeholder. Where's the real one?
> I'd rather do it in one step than two, especially since the real
> implementation may raise questions or problems that influence the API.


I will do real implementation.

Brad Fitzpatrick

Jun 14, 2013, 1:01:51 PM
to Dmitry Vyukov, Russ Cox, Kevin Gillette, golang-dev
I don't care.

It's a small detail. Every package can also just reimplement it.
 

Russ Cox

Jun 14, 2013, 5:47:44 PM
to Dmitry Vyukov, Kevin Gillette, golang-dev
On Fri, Jun 14, 2013 at 12:34 PM, Dmitry Vyukov <dvy...@google.com> wrote:
One object per CPU is not enough even for a trivial case of Stringer
calling String() on a subobject.
+ goroutine can be descheduled while holding the object.
+ rpc, http need much larger caches
+ as far as I understand regexp wants more as well

I'm confused again. I thought you said GOMAXPROCS was a hard limit on the cache size, so that's what I wrote. Clearly I still don't understand what this is for, which is a problem. I am starting to be afraid this cache will be wrong for all users. Can you try again to explain what the use case is and what a user should need to know about how to use it?

It's fine to add New, but let's first understand the rest of the picture.

Russ

Christoph Hack

Jun 14, 2013, 6:35:54 PM
to golan...@googlegroups.com, Dmitry Vyukov, Kevin Gillette
A sync.Pool is a cache that can contain many items and can be accessed by many Goroutines concurrently. Internally, a sync.Pool may use additional thread-local (up to GOMAXPROCS) caches for each Goroutine to keep the amount of synchronization overhead that is needed to access the shared cache to a minimum. I'm guessing Dmitry is planning to implement a really nice algorithm that takes objects from the local cache if possible and eventually returns stored items to the global cache again, so that other Goroutines can claim them.

Dmitry's first proposal was more complicated in order to make the sync.Cache component also usable for database connections and other objects that might require a special clean-up phase. The current API proposal drops items internally, so that it can only be used for things that do not require any special cleanup (i.e. allocated memory).

Dmitry Vyukov

Jun 15, 2013, 1:30:35 PM
to Russ Cox, Kevin Gillette, golang-dev
On Sat, Jun 15, 2013 at 1:47 AM, Russ Cox <r...@golang.org> wrote:
> On Fri, Jun 14, 2013 at 12:34 PM, Dmitry Vyukov <dvy...@google.com> wrote:
>>
>> One object per CPU is not enough even for a trivial case of Stringer
>> calling String() on a subobject.
>> + goroutine can be descheduled while holding the object.
>> + rpc, http need much larger caches
>> + as far as I understand regexp wants more as well
>
>
> I'm confused again. I thought you said GOMAXPROCS was a hard limit on the
> cache size, so that's what I wrote.

I've said:
>So there is hard limit on number of cached objects -- k*GOMAXPROCS.

> Clearly I still don't understand what
> this is for, which is a problem. I am starting to be afraid this cache will
> be wrong for all users. Can you try again to explain what the use case is
> and what a user should need to know about how to use it?

Use case:
optimizing garbage generation; instead of creating and discarding
objects on every operation, the Cache allows objects to be reused.

What users need to know:
- the Cache is "thread-safe"; Put/Take can be called concurrently
- it's badly suited for objects that require explicit closing (e.g. os.File)
- it's not suitable for situations where you need to create N
objects and let the Cache manage them (e.g. db.Connection)
- cached objects can be silently discarded
- there is a hard limit on the number of cached objects; in a more general
form it is k*GOMAXPROCS+c

Here is the real implementation:
https://codereview.appspot.com/10299043/

Dmitry Vyukov

Jun 15, 2013, 2:13:12 PM
to Russ Cox, Kevin Gillette, golang-dev
I've also updated the patch that uses the Cache in the fmt package:
https://codereview.appspot.com/10305043/

It shows the intended usage and a quite impressive performance improvement:
benchmark old ns/op new ns/op delta
BenchmarkSprintfEmpty 77 69 -10.22%
BenchmarkSprintfEmpty-2 338 35 -89.44%
BenchmarkSprintfEmpty-4 426 23 -94.48%
BenchmarkSprintfEmpty-8 460 22 -95.22%
BenchmarkSprintfEmpty-16 436 16 -96.28%
BenchmarkSprintfEmpty-32 435 11 -97.47%
BenchmarkSprintfEmpty-64 425 9 -97.74%
BenchmarkSprintfString 261 232 -11.11%
BenchmarkSprintfString-2 298 165 -44.63%
BenchmarkSprintfString-4 555 85 -84.61%
BenchmarkSprintfString-8 668 54 -91.78%
BenchmarkSprintfString-16 673 39 -94.09%
BenchmarkSprintfString-32 647 32 -94.98%
BenchmarkSprintfString-64 691 30 -95.63%

voidl...@gmail.com

Jun 15, 2013, 3:36:03 PM
to golan...@googlegroups.com, Jan Mercl
To expand on this Cache vs Pool point: names are important.

I'm confused at this point as to which name is being used, but I think it is important to note that these terms are not interchangeable.

A pool is a collection of stateless objects.
A cache is a collection of stateful objects.

If any member of the collection will do, the collection is a pool of things. If a specific member of the collection is needed, it is a cache. Why does this matter?
Because people, especially new people, are going to be endlessly confused as to what this "cache" does.

Supporting examples:
  • A CPU cache is backed by memory; items in it are not all the same, they are stateful.
  • A connection pool is stateless: if I need a connection, I don't care which one I get.

As always, feel free to ignore this rant; I just feel that for clarity, correctness and easy uptake, calling things by their proper names is important.

On Monday, May 20, 2013 11:39:38 AM UTC-5, Dmitry Vyukov wrote:
On Mon, May 20, 2013 at 8:32 PM, Jan Mercl <0xj...@gmail.com> wrote:
> On Mon, May 20, 2013 at 6:11 PM, Dmitry Vyukov <dvy...@google.com> wrote:
>
> The Get method could also be alternatively
>
> func (c *Cache) Get(func(x interface{}) bool) interface{}
>
> where the passed function:
> - if nil -> ignored
> - otherwise it's passed cached (candidate) items until returns true.
> Then that item is returned from Get. If the function never returns
> true the Get returns interface{}(nil).
>
> The idea is that cached items can have properties and not every item
> have properties which the client of Get needs. For example, think of a
> Cache of []byte: the client may need only buffers of some minimal len
> or cap. Without such function the client would have to repeat .Get
> until satisfied, then return all the below-the-cut buffers back.
>
> Non-technical: I prefer Cache, the originally proposed name, instead of Pool.


In the issue tylor@ wrote:
"Please call this construct a Pool (or FixedPool) rather than a Cache
if it comes into existence. I can already imagine myself explaining to
new people over and over again on #go-nuts that sync.Cache is actually
a pool, etc etc..."

I also find it somewhat confusing. I think Cache is a more overloaded
term than Pool, there are LRUCache's, TimedCache's, etc.

Andrew Gerrand

Jun 16, 2013, 6:21:25 PM
to voidl...@gmail.com, golang-dev, Jan Mercl
+1 on Pool. That's mostly what we've been calling these mechanisms internally, too.

Dmitry Vyukov

Jun 29, 2013, 2:01:52 PM
to Russ Cox, Kevin Gillette, golang-dev
ping

Russ Cox

Jul 1, 2013, 5:37:47 PM
to Dmitry Vyukov, Kevin Gillette, golang-dev
Sorry, but there's too much going on in the runtime to review this now, and also aren't you on vacation?

Let's please wait until the preemptive scheduler has been in for a while and is working. There still seem to be plenty of bugs to fix there.

Russ

Dmitry Vyukov

Aug 9, 2013, 11:20:32 AM
to Russ Cox, Kevin Gillette, golang-dev
It's time to decide whether we want this in 1.2 or not.

I am voting for including it as sync.Pool.

Brad Fitzpatrick

Aug 9, 2013, 11:21:58 AM
to Dmitry Vyukov, Kevin Gillette, Russ Cox, golang-dev

I'll redundantly say I'm still interested.

On Aug 9, 2013 8:20 AM, "Dmitry Vyukov" <dvy...@google.com> wrote:
It's time to decide whether we want this in 1.2 or not.

I am voting for including it as sync.Pool.

Russ Cox

Aug 9, 2013, 11:42:18 AM
to Brad Fitzpatrick, Dmitry Vyukov, Kevin Gillette, golang-dev
I don't think this is going to make it in. I still have concerns about the API and about the proliferation of premature performance optimization it will trigger. I believe Rob also has similar reservations.

Russ

Brad Fitzpatrick

Aug 9, 2013, 11:54:44 AM
to Russ Cox, Dmitry Vyukov, Kevin Gillette, golang-dev
The API discussion has barely even started, though.  It's always been pushed off.  It's too early to have API concerns, isn't it?

As for proliferation of usage... aren't we already there?  They're just all ad-hoc right now and don't integrate with the GC at all.  (they never free their cached items, even when they're idle and memory is tight)

It'd be nice to have a mechanism for free pools that we don't have to feel dirty about using.  It being faster is a nice bonus.

We can convert the existing callers and still say no to new ones.

We could even do it without exposing any new API to third-party packages, so we can try it out just in the standard library without promising that it remains.


Dmitry Vyukov

Aug 9, 2013, 11:57:10 AM
to Brad Fitzpatrick, Russ Cox, Kevin Gillette, golang-dev
On Fri, Aug 9, 2013 at 7:54 PM, Brad Fitzpatrick <brad...@golang.org> wrote:
> The API discussion has barely even started, though. It's always been pushed
> off. It's too early to have API concerns, isn't it?
>
> As for proliferation of usage... aren't we already there? They're just all
> ad-hoc right now and don't integrate with the GC at all. (they never free
> their cached items, even when they're idle and memory is tight)
>
> It'd be nice to have a mechanism for free pools that we don't have to feel
> dirty about using. It being faster is a nice bonus.
>
> We can convert the existing callers and still say no to new ones.
>
> We could even do it without exposing any new API to third-party packages, so
> we can try it out just in the standard library without promising that it
> remains.

How do you want to do it? I mean "w/o exposing new API".

Brad Fitzpatrick

Aug 9, 2013, 12:04:12 PM
to Dmitry Vyukov, Russ Cox, Kevin Gillette, golang-dev
At least two possible ways:

1) modify the go tool.  define "internal/<x>" as an internal package that only the standard library can use.  if you're building a package not in the standard library, it can't import "internal/...".  this has the advantage that we can move duplicated code from multiple places into the standard library into new internal leaf packages, without changing our outward-facing API.

2) modify the runtime package to define unexported constructor funcs in all the packages that want to use a pool.  imagine:

// in package fmt:
type pool interface {
    Get() interface{}
    Put(interface{})
}
func newPool() pool  // defined in pkg runtime

Dmitry Vyukov

Aug 9, 2013, 12:14:58 PM
to Brad Fitzpatrick, Russ Cox, Kevin Gillette, golang-dev
Option 2 looks much simpler (if 1 is not part of a bigger plan). It must be
possible to export it directly from sync, if users do import _
"sync".

Making it a std-lib-internal-only API, of course, has much less
far-reaching consequences.

Russ Cox

Aug 9, 2013, 2:12:48 PM
to Dmitry Vyukov, Brad Fitzpatrick, Kevin Gillette, golang-dev
In brief: I don't believe the cache sizing can be done correctly for most uses. I don't believe that there are enough valid uses. I don't want the library to fill up with little hidden pools, because they are very hard to do both transparently and safely (see the discussion about the bufio pool for example). The only truly safe use I can think of for this is fmt, and fmt is probably fast enough already.

We have been doing a lot of higher-priority things for Go 1.2, which means we don't have time for a long discussion about sync.Pool.

Russ

mtn...@gmail.com

Aug 26, 2013, 4:43:29 PM
to golan...@googlegroups.com, Dmitry Vyukov, Brad Fitzpatrick, Kevin Gillette
As a user of go to write backend services, this is the one feature I was truly looking forward to for 1.2.

All the ad-hoc leaky buckets I'm making are nothing but boilerplate, and I'd love to be able to use something like sync.Pool.  Is premature optimization a problem? Yes, but for my services, stop-the-world GC pauses are worse, and I can't profile ahead of time for every combination of real world scenarios.  The obvious places that need a pool will get them regardless.

Please reconsider.  A naive sync.Pool helps me get products to production faster.  A GC-aware sync.Pool would be a godsend. 

Matt

Dmitry Vyukov

Sep 1, 2013, 7:50:53 AM
to mtn...@gmail.com, golang-dev, Brad Fitzpatrick, Kevin Gillette
On Tue, Aug 27, 2013 at 12:43 AM, <mtn...@gmail.com> wrote:
> As a user of go to write backend services, this is the one feature I was
> truly looking forward to for 1.2.
>
> All the ad-hoc leaky buckets I'm making are nothing but boilerplate, and
> I've love to be able to use something like sync.Pool. Is premature
> optimization a problem? Yes, but for my services, stop-the-world GC pauses
> are worse, and I can't profile ahead of time for every combination of real
> world scenarios. The obvious places that need a pool will get them
> regardless.
>
> Please reconsider. A naive sync.Pool helps me get products to production
> faster. A GC-aware sync.Pool would be a godsend.


Please copy this to the issue:
https://code.google.com/p/go/issues/detail?id=4720