
BUG #8034: pg_buffercache gets invalid memory alloc request size with very large shared memory buffers


dbe...@whitepages.com

Apr 2, 2013, 3:44:10 PM
The following bug has been logged on the website:

Bug reference: 8034
Logged by: Devin Ben-Hur
Email address: dbe...@whitepages.com
PostgreSQL version: 9.2.3
Operating system: Ubuntu Precise
Description:

When a very large shared buffer pool (~480GB) is used with PostgreSQL, the
pg_buffercache contrib module gets an allocation error trying to allocate
NBuffers worth of BufferCachePagesRec records:

https://github.com/postgres/postgres/blob/REL9_2_3/contrib/pg_buffercache/pg_buffercache_pages.c#L101-L102

The requested allocation exceeds the 1GB limit imposed by the
AllocSizeIsValid macro:
https://github.com/postgres/postgres/blob/REL9_2_3/src/include/utils/memutils.h#L40-L43
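
To put rough numbers on it (assuming BufferCachePagesRec is about 28 bytes
on a 64-bit build; the exact size depends on struct layout):

    shared_buffers = 400GB with 8kB blocks => NBuffers = 52,428,800
    palloc request  = 52,428,800 * 28 bytes ~= 1.37GB
    MaxAllocSize is just under 1GB, so the request is rejected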

Reproduce:
1) acquire server with half terabyte of memory
2) tweak OS settings to allow large shared memory
3) set postgresql.conf: shared_buffers = 400GB
4) CREATE EXTENSION pg_buffercache;
5) SELECT * FROM pg_buffercache LIMIT 1;
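
With the numbers above, step 5 fails with something like (the exact byte
count depends on the struct layout and the shared_buffers setting):

    ERROR:  invalid memory alloc request size 1468006400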




--
Sent via pgsql-bugs mailing list (pgsql...@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Mark Kirkwood

Apr 4, 2013, 5:25:01 PM
On 03/04/13 08:44, dbe...@whitepages.com wrote:
> The following bug has been logged on the website:
>
> Bug reference: 8034
> Logged by: Devin Ben-Hur
> Email address: dbe...@whitepages.com
> PostgreSQL version: 9.2.3
> Operating system: Ubuntu Precise
> Description:
>
> When a very large shared buffer pool (~480GB) is used with PostgreSQL, the
> pg_buffercache contrib module gets an allocation error trying to allocate
> NBuffers worth of BufferCachePagesRec records:
>
> https://github.com/postgres/postgres/blob/REL9_2_3/contrib/pg_buffercache/pg_buffercache_pages.c#L101-L102
>
> The requested allocation exceeds the 1GB limit imposed by the
> AllocSizeIsValid macro:
> https://github.com/postgres/postgres/blob/REL9_2_3/src/include/utils/memutils.h#L40-L43
>
> Reproduce:
> 1) acquire server with half terabyte of memory
> 2) tweak OS settings to allow large shared memory
> 3) set postgresql.conf: shared_buffers = 400GB
> 4) CREATE EXTENSION pg_buffercache;
> 5) SELECT * FROM pg_buffercache LIMIT 1;
>
>
>

Yes indeed - however I'm not sure this is likely to be encountered in
any serious configuration. The general rule for sizing shared buffers is:

shared_buffers = min(0.25 * RAM, 8G)
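
For example, that rule gives 1G on a 4G box, 4G on a 16G box, and caps at
8G for anything with 32G of RAM or more - nowhere near the 400GB in the
report.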

Now there has been some discussion about settings bigger than 8G making
sense in some cases... but I'm not aware of any suggestion that sizes in
the hundreds of GB are sensible.

However, it would be nice if pg_buffercache *could* work with bigger sizes
should they ever make sense. Someone who understands the memory allocation
system better than I do will need to comment on how that might work :-)

Cheers

Mark

Tom Lane

Apr 4, 2013, 5:37:21 PM
Mark Kirkwood <mark.k...@catalyst.net.nz> writes:
>> When a very large shared buffer pool (~480GB) is used with PostgreSQL, the
>> pg_buffercache contrib module gets an allocation error trying to allocate
>> NBuffers worth of BufferCachePagesRec records:

> Yes indeed - however I'm not sure this is likely to be encountered in
> any serious configuration.

I too am a bit skeptical of trying to make this actually work. For one
thing, pg_buffercache would be locking down the entire buffer arena for
a rather significant amount of time while it transfers gigabytes of data
into the local array. What would likely make more sense, if we ever get
to the point where this is a practical size of configuration, is to
provide a mechanism to read out data for just a portion of the arena
at a time.
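(To put numbers on that: with ~480GB of shared buffers there are roughly
63 million buffer headers to visit, and the local copy alone would be on
the order of 1.6GB, assuming a ~28-byte record per buffer.)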

> However it would be nice if pg_buffercache *could* work with bigger
> sizes if they make sense at any time. Someone who understands the
> memory allocation system better than I do will need to comment about how
> that might work :-)

There has been some discussion of inventing a "big_palloc"
infrastructure to allow allocation of arrays larger than 1GB, for use
in places like the sort code. If we ever get around to doing that,
it'd be straightforward enough to make pg_buffercache use the facility
... but I really doubt pg_buffercache itself is a sufficient reason
to do it.
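
As a sketch only of what that could look like in pg_buffercache_pages.c,
assuming a big_palloc-style allocator that is exempt from the 1GB
MaxAllocSize check (along the lines of the MemoryContextAllocHuge() that
later releases added); the call shown here is illustrative, not a
committed API:

    /*
     * Sketch: replace the plain palloc(), which is capped at 1GB by
     * AllocSizeIsValid(), with a huge-request-capable allocator.  The
     * request is still sizeof(BufferCachePagesRec) * NBuffers, i.e.
     * well over 1GB at 400GB of shared_buffers.
     */
    fctx->record = (BufferCachePagesRec *)
        MemoryContextAllocHuge(CurrentMemoryContext,
                               sizeof(BufferCachePagesRec) * NBuffers);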

regards, tom lane