
How to support more than 1200 client connections


Sachin Singh

Aug 2, 2018, 4:46:27 AM
Hi All,

In order to benchmark API executions per second for our cloud product, we want to test performance with 1200+ client connections performing operations. We are using a Postgres database on a system with 32 GB of RAM. During test execution, after some time we see the database crash with an out-of-memory error.

We tried using pgbouncer to resolve the issue, but with no luck.

Is there any solution to support the above requirement on a single DB node, or any configuration setting in pgbouncer that would help us?



Thanks
Sachin

Dimitri Fontaine

Aug 2, 2018, 6:25:30 AM
Sachin Singh <nots...@gmail.com> writes:
> We tried to use pgbouncer to resolve the issue but no luck.

Which pool_mode did you try? Have you read about the transaction
pool_mode yet, and decided whether your application is compatible with it?
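As a minimal sketch of what that looks like (the database name, host, and pool sizes below are placeholder assumptions, not values from this thread), a transaction-mode pgbouncer.ini might be:

```ini
[databases]
; clients connect to "mydb" through pgbouncer; pgbouncer keeps a small
; pool of real server connections to PostgreSQL behind it
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; in transaction mode a server connection is returned to the pool at
; COMMIT/ROLLBACK, so 1200 clients can share far fewer backends
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 50
```

The catch, as noted above, is that transaction mode only works if the application avoids session-level state (session prepared statements, SET, advisory locks, etc.) across transactions.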

--
Dimitri Fontaine

George Neuner

Aug 2, 2018, 6:41:05 AM
It probably would be better to ask on one of the mailing lists:
pgsql-...@postgresql.org
pgsql-pe...@postgresql.org


You definitely are making too many simultaneous connections - you need
to limit the number to something reasonable: say a few hundred,
maximum. How many connections you can support depends heavily on your
server's configuration [and whether pgbouncer is co-resident. I'm not
a real expert at this, which is why you should ask on the lists.]

Remember that PostgreSQL creates a new process for every connection:
although the program code and DBMS cache (shared_buffers) are shared,
each connection has its own private state, and extra allocations like
work_mem and temp_buffers are query dependent.

Unfortunately, server configuration is a black art: it's easy to make
configuration choices that run well under certain loads and fail
miserably under others.
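As a rough illustration of why the per-connection overhead matters (every figure below is an assumed placeholder for a back-of-envelope estimate, not a measurement or a tuning recommendation):

```python
# Back-of-envelope worst-case memory estimate for many PostgreSQL backends.
# All numbers are illustrative assumptions, not tuned values.
ram_gb = 32
shared_buffers_gb = 8           # common starting point: ~25% of RAM
work_mem_mb = 4                 # PostgreSQL's default work_mem
per_backend_overhead_mb = 10    # rough guess at per-process private state
connections = 1200

# Optimistic: each sort/hash node in a plan may allocate up to work_mem,
# so a complex query can use several multiples of work_mem per backend.
worst_case_gb = shared_buffers_gb + connections * (work_mem_mb + per_backend_overhead_mb) / 1024
print(f"~{worst_case_gb:.1f} GB of {ram_gb} GB RAM")
```

Even with these modest assumptions the estimate approaches the machine's 32 GB; a handful of queries that each use several multiples of work_mem is enough to push it over, which matches the out-of-memory crashes described above.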

George

Sachin Singh

Aug 2, 2018, 10:42:42 AM
With session pool_mode the out-of-memory error still occurs, and transaction mode is not compatible with my application: with transaction mode the application shows some errors.

It looks like we need to go for a PostgreSQL cluster with load balancing.
Please let me know if you have other suggestions.
Sachin Singh

Sachin Singh

Aug 2, 2018, 10:51:08 AM
On Thursday, August 2, 2018 at 4:11:05 PM UTC+5:30, George Neuner wrote:
> On Thu, 2 Aug 2018 01:46:26 -0700 (PDT), Sachin Singh
> <nots...@gmail.com> wrote:
>
> It probably would be better to ask on one of mailing lists:
> pgsql-...@postgresql.org
> pgsql-pe...@postgresql.org


Thanks for providing the correct mailing lists.
>
>
> You definitely are making too many simultaneous connections - you need
> to limit the number to something reasonable: say a few hundred,
> maximum. How many connections you can support depends heavily on your
> server's configuration [and whether pgbouncer is co-resident. I'm not
> a real expert at this, which is why you should ask on the lists.]

My application runs behind a load balancer as a clustered application and can support 2k+ connections. The bottleneck is on the database side, so we are looking for a solution for the Postgres database to support this many connections without crashing.
We are planning to try PostgreSQL clustering with load balancing using pgpool-II. Please let me know about any drawbacks, or anything we should know before proceeding.
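For reference, a minimal pgpool.conf sketch of that setup (hostnames, ports, and sizes below are placeholder assumptions; this assumes a streaming-replication primary plus one replica behind pgpool-II):

```ini
; pgpool-II sketch: clients connect to pgpool on port 9999
listen_addresses = '*'
port = 9999

; the two PostgreSQL nodes behind pgpool (placeholder hostnames)
backend_hostname0 = 'pg-primary'
backend_port0 = 5432
backend_weight0 = 1
backend_hostname1 = 'pg-replica'
backend_port1 = 5432
backend_weight1 = 1

num_init_children = 300   ; max concurrent client sessions pgpool accepts
max_pool = 4              ; cached backend connections per pgpool child
load_balance_mode = on    ; spread read-only SELECTs across backends
master_slave_mode = on
master_slave_sub_mode = 'stream'
```

One thing to check before relying on this: each PostgreSQL node still needs max_connections large enough for num_init_children * max_pool, so pgpool-II spreads load but does not remove the per-connection memory cost on any single node.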

Sachin

Dimitri Fontaine

Aug 2, 2018, 11:32:01 AM
Sachin Singh <nots...@gmail.com> writes:
> With session pool_mode out of memory error comes and transaction mode not
> compliant with my application and with transaction mode my application is
> showing some errors

Fix your app, or see if the errors are limited to certain parts of it.
Then you can use different contexts in pgbouncer for transaction mode
and session mode as you need.
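Concretely, pgbouncer lets pool_mode be overridden per entry in the [databases] section, so one physical database can be exposed under two names (the names and host below are placeholders, not from this thread):

```ini
; pgbouncer.ini sketch: same PostgreSQL database, two pool modes
[databases]
; for code paths that need session state (SET, session prepared
; statements, advisory locks, LISTEN/NOTIFY)
mydb_session = host=127.0.0.1 dbname=mydb pool_mode=session

; for transaction-safe code paths; much cheaper per client
mydb_tx = host=127.0.0.1 dbname=mydb pool_mode=transaction
```

The parts of the application that error under transaction mode would connect to mydb_session, and everything else to mydb_tx.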

--
Dimitri Fontaine