
BUG #8013: Memory leak


stie...@comcast.net
Mar 30, 2013, 10:01:27 AM
The following bug has been logged on the website:

Bug reference: 8013
Logged by: Rae Stiening
Email address: stie...@comcast.net
PostgreSQL version: 9.2.3
Operating system: Suse Linux 9.3
Description:

The query:
SELECT pts_key, count(*)
FROM tm_tm_pairs
GROUP BY pts_key
HAVING count(*) != 1
ORDER BY pts_key

Which is executed as:
GroupAggregate  (cost=108680937.80..119278286.60 rows=470993280 width=4)
  Filter: (count(*) <> 1)
  ->  Sort  (cost=108680937.80..109858421.00 rows=470993280 width=4)
        Sort Key: pts_key
        ->  Seq Scan on tm_tm_pairs  (cost=0.00..8634876.80 rows=470993280 width=4)

uses all available memory (32GB). pts_key is an integer and the table
contains about 500 million rows.

PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (SUSE Linux) 4.7.2 20130108 [gcc-4_7-branch revision 195012], 64-bit

compile options:

env CFLAGS='-O3 -march=native' ./configure --with-segsize=128




--
Sent via pgsql-bugs mailing list (pgsql...@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

ajmcello
Mar 30, 2013, 11:41:14 PM
unsubscribe

Jeff Lake
Mar 30, 2013, 11:42:29 PM
memory leak with 500 million rows??
sounds like too big of a db

-----------------------------------------------------
-Jeff Lake K8JSL
MichiganWxSystem.com
AllisonHouse.com
TheWeatherCenter.net
GRLevelXStuff.com

Greg Stark
Mar 31, 2013, 11:01:21 AM
On Sat, Mar 30, 2013 at 2:01 PM, <stie...@comcast.net> wrote:
> uses all available memory (32GB). pts_key is an integer and the table
> contains about 500 million rows.

Please post the schema definition and all the log messages that occur
from this. If Postgres runs out of memory it should include a dump of
the memory usage.
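
For example, assuming the table name from your report, something like this
in psql would capture the table definition:

    \d+ tm_tm_pairs

along with whatever shows up in the server log around the time memory runs out.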


--
greg

Peter Geoghegan
Mar 31, 2013, 11:40:03 AM
On 30 March 2013 14:01, <stie...@comcast.net> wrote:
> env CFLAGS='-O3 -march=native' ./configure --with-segsize=128

Why did you build with a segment size of 128GB? Postgres binaries
built with a non-standard segment size are not widely used.
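
As a quick sanity check, assuming you can connect to the server in question,
SHOW will report what the binary was built with:

    SHOW segment_size;   -- 1GB on a standard build; should read 128GB here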


--
Regards,
Peter Geoghegan

ajmcello
Mar 31, 2013, 12:44:49 PM
unsubscribe

Tom Lane
Mar 31, 2013, 1:16:12 PM
stie...@comcast.net writes:
> The query:
> SELECT pts_key,count(*)
> FROM tm_tm_pairs GROUP BY pts_key HAVING count(*) !=1 ORDER BY
> pts_key

> Which is executed as:
> GroupAggregate (cost=108680937.80..119278286.60 rows=470993280 width=4)
> Filter: (count(*) <> 1)
> -> Sort (cost=108680937.80..109858421.00 rows=470993280 width=4)
> Sort Key: pts_key
> -> Seq Scan on tm_tm_pairs (cost=0.00..8634876.80 rows=470993280
> width=4)

> uses all available memory (32GB). pts_key is an integer and the table
> contains about 500 million rows.

That query plan doesn't look like it should produce any undue memory
consumption on the server side. How many distinct values of pts_key are
there, and what are you using to collect the query result client-side?
psql, for instance, would try to absorb the whole query result
in-memory, so there'd be a lot of memory consumed by psql if there are
a lot of pts_key values. (You can set FETCH_COUNT to alleviate that.)
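
For instance, a quick illustration in psql (the batch size is arbitrary):

    \set FETCH_COUNT 10000
    SELECT pts_key, count(*)
    FROM tm_tm_pairs
    GROUP BY pts_key
    HAVING count(*) <> 1
    ORDER BY pts_key;

With FETCH_COUNT set, psql pulls the result through a cursor that many rows
at a time instead of buffering the entire result set client-side.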

A different line of thought is that you might have set work_mem to
an unreasonably large value --- the sort step will happily try to
consume work_mem worth of memory.
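
To check and, if need be, rein that in for the session, something along
these lines would do (the 64MB figure is just an illustrative value):

    SHOW work_mem;          -- current per-sort memory budget
    SET work_mem = '64MB';  -- session-level override

The sort can use up to work_mem before spilling to disk, so an enormous
setting lets it balloon accordingly.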

regards, tom lane

Jeff Janes
Mar 31, 2013, 1:32:17 PM
On Sunday, March 31, 2013, Tom Lane wrote:

> A different line of thought is that you might have set work_mem to
> an unreasonably large value --- the sort step will happily try to
> consume work_mem worth of memory.

I don't think that can be the problem here, because memtuples can never be more than 1GB even if work_mem is much larger than that. Even if his sort is using pass-by-reference values (I don't think it would be), they should be skinny enough that that limitation should prevent it from blowing out memory.

Cheers,

Jeff

Daniel Farina
Mar 31, 2013, 2:20:18 PM
On Sat, Mar 30, 2013 at 8:41 PM, ajmcello <ajmce...@gmail.com> wrote:
> unsubscribe

That's not how you unsubscribe from this list; you'll want to do that here:

http://www.postgresql.org/community/lists/subscribe/

--
fdr

Rae Stiening
Mar 31, 2013, 3:59:39 PM
I found that after replacing the postgresql.conf file with the original one created by initdb, the query ran without a memory problem. I looked at the "bad" configuration file and couldn't see anything wrong with it. Unfortunately, because of a typing error, the bad file was accidentally deleted, and I have subsequently been unable to reproduce the bad behavior.

After editing the original file to match what I had intended for the erased file, the query still ran without a problem. Memory usage topped out at about 2.1 GB. Even setting work_mem and maintenance_work_mem to 30000MB did not change the maximum memory usage during the query.

Regards,
Rae Stiening