
idle connection timeout ...


Greg Copeland

Oct 25, 2002, 9:48:45 AM
On Fri, 2002-10-25 at 00:52, Marc G. Fournier wrote:
> Ya, I've thought that one through ... I think what I'm more looking at is
> some way of 'limiting' persistent connections, where a server opens n
> connections during a spike, which then sit idle indefinitely since it was
> one of those 'slashdot effect' kinda spikes ...
>
> Is there any way of the 'master process' *safely/accurately* knowing,
> through the shared memory link, the # of connections currently open to a
> particular database? So that a limit could be set on a per db basis, say
> as an additional arg to pg_hba.conf?

Well, if your application is smart enough to know it needs to
dynamically add connections, it should also be smart enough to tear them
down after some idle period. I agree with Tom. I think that sounds
like application domain.

Greg

---------------------------(end of broadcast)---------------------------
TIP 4: Don't 'kill -9' the postmaster

Bruce Momjian

Oct 25, 2002, 11:20:09 AM
Tom Lane wrote:
> Bruce Momjian <pg...@candle.pha.pa.us> writes:
> >> Well, there are two different things here. I agree that if an app
> >> is going to use persistent connections, it should be the app's
> >> responsibility to manage them. But a per-database, as opposed to
> >> installation-wide, limit on number of connections seems like a
> >> reasonable idea. Note that the limit would result in new connections
> >> being rejected, not old ones being summarily cut.
>
> > But then the app is going to keep trying to connect over and over unless
> > it knows something about why it can't connect.
>
> So? If it hits the installation-wide limit, you'll have the same
> problem; and at that point the (presumably runaway) app would have
> sucked up all the connections, denying service to other apps using other
> databases. I think Marc's point here is to limit his exposure to
> misbehavior of any one client app, in a database server that is serving
> multiple clients using multiple databases.

What I am saying is that using the backend to throttle per-db
connections may not work too well because they will just keep retrying.
I realize that the total limit can be hit too, but I assume that limit
is set so it will not be hit (it's a resource tradeoff), while the
per-db limit is there to try to throttle back the persistent
connections.

Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends. This is where doing the limiting on the
persistent connection end would be a better solution.

--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073

Tom Lane

Oct 25, 2002, 11:48:12 AM
Bruce Momjian <pg...@candle.pha.pa.us> writes:
> Basically, total connections is to be set larger than you think you will
> ever need, while you expect per-db to be hit, and if something keeps
> trying to connect and failing, we may get very bad connection
> performance for other backends.

Hmm, I see your point. A per-db limit *could* be useful even if it's
set high enough that you don't expect it to be hit ... but most likely
people would try to use it in a way that it wouldn't be very efficient
compared to a client-side solution.

regards, tom lane


Michael Paesold

Oct 25, 2002, 12:04:13 PM
Tom Lane <t...@sss.pgh.pa.us> wrote:

> Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > Basically, total connections is to be set larger than you think you will
> > ever need, while you expect per-db to be hit, and if something keeps
> > trying to connect and failing, we may get very bad connection
> > performance for other backends.
>
> Hmm, I see your point. A per-db limit *could* be useful even if it's
> set high enough that you don't expect it to be hit ... but most likely
> people would try to use it in a way that it wouldn't be very efficient
> compared to a client-side solution.

What about a shared database server, where you want to have resource
limits for each database/user? A per-db limit could be useful in such a
case; even if it is not very efficient, it would be the only way. As the
DBA, you may not have control over the client apps.

Just a thought.

Regards,
Michael



Tom Lane

Oct 25, 2002, 10:00:19 AM
"Marc G. Fournier" <scr...@hub.org> writes:
> Is there any way of the 'master process' *safely/accurately* knowing,
> through the shared memory link, the # of connections currently open to a
> particular database? So that a limit could be set on a per db basis, say
> as an additional arg to pg_hba.conf?

It would be better/easier to apply the check later on, when a backend is
adding itself to the PGPROC array. It'd be easy enough to count the
number of other backends showing the same DB OID in their PGPROC
entries, and reject if too many.

regards, tom lane



Tom Lane

Oct 25, 2002, 10:40:14 AM
Greg Copeland <gr...@CopelandConsulting.Net> writes:
> On Fri, 2002-10-25 at 00:52, Marc G. Fournier wrote:
>> Ya, I've thought that one through ... I think what I'm more looking at is
>> some way of 'limiting' persistent connections, where a server opens n
>> connections during a spike, which then sit idle indefinitely since it was
>> one of those 'slashdot effect' kinda spikes ...
>>
>> Is there any way of the 'master process' *safely/accurately* knowing,
>> through the shared memory link, the # of connections currently open to a
>> particular database? So that a limit could be set on a per db basis, say
>> as an additional arg to pg_hba.conf?

> Well, if your application is smart enough to know it needs to
> dynamically add connections, it should also be smart enough to tear them
> down after some idle period. I agree with Tom. I think that sounds
> like application domain.

Well, there are two different things here. I agree that if an app
is going to use persistent connections, it should be the app's
responsibility to manage them. But a per-database, as opposed to
installation-wide, limit on number of connections seems like a
reasonable idea. Note that the limit would result in new connections
being rejected, not old ones being summarily cut.

regards, tom lane


Bruce Momjian

Oct 25, 2002, 10:47:45 AM
Tom Lane wrote:
> Greg Copeland <gr...@CopelandConsulting.Net> writes:
> > On Fri, 2002-10-25 at 00:52, Marc G. Fournier wrote:
> >> Ya, I've thought that one through ... I think what I'm more looking at is
> >> some way of 'limiting' persistent connections, where a server opens n
> >> connections during a spike, which then sit idle indefinitely since it was
> >> one of those 'slashdot effect' kinda spikes ...
> >>
> >> Is there any way of the 'master process' *safely/accurately* knowing,
> >> through the shared memory link, the # of connections currently open to a
> >> particular database? So that a limit could be set on a per db basis, say
> >> as an additional arg to pg_hba.conf?
>
> > Well, if your application is smart enough to know it needs to
> > dynamically add connections, it should also be smart enough to tear them
> > down after some idle period. I agree with Tom. I think that sounds
> > like application domain.
>
> Well, there are two different things here. I agree that if an app
> is going to use persistent connections, it should be the app's
> responsibility to manage them. But a per-database, as opposed to
> installation-wide, limit on number of connections seems like a
> reasonable idea. Note that the limit would result in new connections
> being rejected, not old ones being summarily cut.

But then the app is going to keep trying to connect over and over unless
it knows something about why it can't connect.



Marc G. Fournier

Oct 25, 2002, 1:32:06 PM
On Fri, 25 Oct 2002, Tom Lane wrote:

> Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > Basically, total connections is to be set larger than you think you will
> > ever need, while you expect per-db to be hit, and if something keeps
> > trying to connect and failing, we may get very bad connection
> > performance for other backends.
>
> Hmm, I see your point. A per-db limit *could* be useful even if it's
> set high enough that you don't expect it to be hit ... but most likely
> people would try to use it in a way that it wouldn't be very efficient
> compared to a client-side solution.

As mentioned in my response to Bruce ... in an ISP situation, a DoS attack
against the database by a single client can be very easy to accomplish in
our current situation ... all I need to do is setup a perl script that
opens all the connections I can to the database I have access to until all
are used up, and nobody else has access to *their* databases ...

Marc G. Fournier

Oct 25, 2002, 1:28:45 PM
On Fri, 25 Oct 2002, Bruce Momjian wrote:

> Tom Lane wrote:
> > Bruce Momjian <pg...@candle.pha.pa.us> writes:

> > >> Well, there are two different things here. I agree that if an app
> > >> is going to use persistent connections, it should be the app's
> > >> responsibility to manage them. But a per-database, as opposed to
> > >> installation-wide, limit on number of connections seems like a
> > >> reasonable idea. Note that the limit would result in new connections
> > >> being rejected, not old ones being summarily cut.
> >
> > > But then the app is going to keep trying to connect over and over unless
> > > it knows something about why it can't connect.
> >

> > So? If it hits the installation-wide limit, you'll have the same
> > problem; and at that point the (presumably runaway) app would have
> > sucked up all the connections, denying service to other apps using other
> > databases. I think Marc's point here is to limit his exposure to
> > misbehavior of any one client app, in a database server that is serving
> > multiple clients using multiple databases.
>
> What I am saying is that using the backend to throttle per-db
> connections may not work too well because they will just keep retrying.

Okay, but also bear in mind that a lot of the time, when I'm bringing up
stuff like this, I'm coming from the "ISP" perspective ... if I have one
client that is using up all 512 connections on the server, none of my
other clients are getting any connections ...

Yes, the client should have tested his code better, but I want to be able
to put 'limits' in place so that everyone isn't affected by one client's
mistake ...

> I realize that the total limit can be hit too, but I assume that limit
> is set so it will not be hit (it's a resource tradeoff), while the
> per-db limit is there to try to throttle back the persistent
> connections.

Nope, the per-db limit is there to try and eliminate the impact of one
client/application from essentially creating a DoS for all other
database/clients ...

> Basically, total connections is to be set larger than you think you will
> ever need, while you expect per-db to be hit, and if something keeps
> trying to connect and failing, we may get very bad connection

> performance for other backends. This is where doing the limiting on the
> persistent connection end would be a better solution.

Agreed, but unless you have control over both the client and server sides,
it's not possible ...


Marc G. Fournier

Oct 25, 2002, 1:34:10 PM
On Fri, 25 Oct 2002, Bruce Momjian wrote:

> Tom Lane wrote:
> > Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > > Basically, total connections is to be set larger than you think you will
> > > ever need, while you expect per-db to be hit, and if something keeps
> > > trying to connect and failing, we may get very bad connection
> > > performance for other backends.
> >

> > Hmm, I see your point. A per-db limit *could* be useful even if it's
> > set high enough that you don't expect it to be hit ... but most likely
> > people would try to use it in a way that it wouldn't be very efficient
> > compared to a client-side solution.
>

> The only way to do it would be, after a few hits of the limit, to start
> delaying the connection rejections so you don't get hammered. It could
> be done, but even then, I am not sure if it would be optimal.

Note that I don't believe there is an "optimal solution" for this ... but
in an environment where there are several clients connecting to several
different databases, the ability for one client to starve out the others
is actually very real ...
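Bruce's quoted idea of delaying the connection rejections after a few hits amounts to an exponential backoff applied by the server. A hypothetical sketch of how such a delay might be computed (the function, constants, and policy are all invented for illustration, not PostgreSQL code):

```c
/* Sketch of "delay the rejections": after a few rejected attempts for a
 * database, each further rejection waits longer before answering, so a
 * tight retry loop cannot hammer the postmaster. Illustrative only. */
int rejection_delay_ms(int recent_rejections)
{
    const int free_attempts = 3;   /* first few rejections are instant */
    const int base_ms = 100;
    const int cap_ms = 5000;
    int delay;

    if (recent_rejections <= free_attempts)
        return 0;
    delay = base_ms;
    for (int i = free_attempts + 1; i < recent_rejections; i++) {
        delay *= 2;                /* double on every further rejection */
        if (delay >= cap_ms)
            return cap_ms;
    }
    return delay < cap_ms ? delay : cap_ms;
}
```

The cap keeps a runaway client's rejections from tying up the answering process for arbitrarily long, while well-behaved clients that only occasionally hit the limit see no delay at all.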



Andrew Sullivan

Oct 25, 2002, 2:47:15 PM
On Fri, Oct 25, 2002 at 11:02:48AM -0400, Tom Lane wrote:
> So? If it hits the installation-wide limit, you'll have the same
> problem; and at that point the (presumably runaway) app would have
> sucked up all the connections, denying service to other apps using other
> databases. I think Marc's point here is to limit his exposure to
> misbehavior of any one client app, in a database server that is serving
> multiple clients using multiple databases.

That would indeed be a useful item. The only way to avoid such
exposure right now is to run another back end.

A

--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<and...@libertyrms.info> M2P 2A8
+1 416 646 3304 x110



Bruce Momjian

Oct 25, 2002, 1:31:54 PM

Yes, my comments related to using db/user limits to control the number
of persistent connections. From an ISP perspective, I can see value in
user/db limits.

---------------------------------------------------------------------------

Marc G. Fournier wrote:
> On Fri, 25 Oct 2002, Bruce Momjian wrote:
>
> > Tom Lane wrote:
> > > Bruce Momjian <pg...@candle.pha.pa.us> writes:

> > > >> Well, there are two different things here. I agree that if an app
> > > >> is going to use persistent connections, it should be the app's
> > > >> responsibility to manage them. But a per-database, as opposed to
> > > >> installation-wide, limit on number of connections seems like a
> > > >> reasonable idea. Note that the limit would result in new connections
> > > >> being rejected, not old ones being summarily cut.
> > >
> > > > But then the app is going to keep trying to connect over and over unless
> > > > it knows something about why it can't connect.
> > >

> > > So? If it hits the installation-wide limit, you'll have the same
> > > problem; and at that point the (presumably runaway) app would have
> > > sucked up all the connections, denying service to other apps using other
> > > databases. I think Marc's point here is to limit his exposure to
> > > misbehavior of any one client app, in a database server that is serving
> > > multiple clients using multiple databases.
> >

> > What I am saying is that using the backend to throttle per-db
> > connections may not work too well because they will just keep retrying.
>
> Okay, but also bear in mind that a lot of the time, when I'm bringing up
> stuff like this, I'm coming from the "ISP" perspective ... if I have one
> client that is using up all 512 connections on the server, none of my
> other clients are getting any connections ...
>
> Yes, the client should have tested his code better, but I want to be able
> to put 'limits' in place so that everyone isn't affected by one client's
> mistake ...
>
> > I realize that the total limit can be hit too, but I assume that limit
> > is set so it will not be hit (it's a resource tradeoff), while the
> > per-db limit is there to try to throttle back the persistent
> > connections.
>
> Nope, the per-db limit is there to try and eliminate the impact of one
> client/application from essentially creating a DoS for all other
> database/clients ...
>

> > Basically, total connections is to be set larger than you think you will
> > ever need, while you expect per-db to be hit, and if something keeps
> > trying to connect and failing, we may get very bad connection
> > performance for other backends. This is where doing the limiting on the
> > persistent connection end would be a better solution.
>
> Agreed, but unless you have control over both the client and server sides,
> it's not possible ...



Mike Benoit

Oct 25, 2002, 3:16:57 PM
On Fri, 2002-10-25 at 10:31, Bruce Momjian wrote:
>
> Yes, my comments related to using db/user limits to control the number
> of persistent connections. From an ISP perspective, I can see value in
> user/db limits.
>

Yes, this would be amazingly useful. I work for a web hosting provider,
and it happens all too often that a single customer creates a flawed
script which consumes all the DB connections, denying access
to the rest of our customers.

Being able to set this per-DB connection limit in Postgres itself,
without having to restart the backend, would also make this feature very
nice.

--
Best Regards,

Mike Benoit
NetNation Communication Inc.
Systems Engineer
Tel: 604-684-6892 or 888-983-6600
---------------------------------------

Disclaimer: Opinions expressed here are my own and not
necessarily those of my employer

Tom Lane

Oct 25, 2002, 4:18:10 PM
Mike Mascari <mas...@mascari.com> writes:
> [ extensive proposal for PROFILEs ]
> It seems like a nice project, particularly since it wouldn't
> affect anyone that doesn't want to use it.

... except in the added overhead to do the resource accounting and check
to see if there is a restriction ...

> And whenever a new
> resource limitation issue arises, such as PL/SQL recursion
> depth, a new attribute would be added to pg_profile to handle
> the limitation...

I prefer GUC variables to table entries for setting stuff like recursion
limits; they're much lighter-weight to create and access, and you don't
need an initdb to add or remove a parameter.

regards, tom lane

Bruce Momjian

Oct 25, 2002, 3:03:05 PM
Andrew Sullivan wrote:
> On Fri, Oct 25, 2002 at 11:02:48AM -0400, Tom Lane wrote:
> > So? If it hits the installation-wide limit, you'll have the same
> > problem; and at that point the (presumably runaway) app would have
> > sucked up all the connections, denying service to other apps using other
> > databases. I think Marc's point here is to limit his exposure to
> > misbehavior of any one client app, in a database server that is serving
> > multiple clients using multiple databases.
>
> That would indeed be a useful item. The only way to avoid such
> exposure right now is to run another back end.

Added to TODO:

* Allow limits on per-db/user connections



Mike Mascari

Oct 25, 2002, 3:32:56 PM
Bruce Momjian wrote:
> Andrew Sullivan wrote:
>
>>On Fri, Oct 25, 2002 at 11:02:48AM -0400, Tom Lane wrote:
>>
>>>So? If it hits the installation-wide limit, you'll have the same
>>>problem; and at that point the (presumably runaway) app would have
>>>sucked up all the connections, denying service to other apps using other
>>>databases. I think Marc's point here is to limit his exposure to
>>>misbehavior of any one client app, in a database server that is serving
>>>multiple clients using multiple databases.
>>
>>That would indeed be a useful item. The only way to avoid such
>>exposure right now is to run another back end.
>
>
> Added to TODO:
>
> * Allow limits on per-db/user connections
>

Could I suggest that such a feature falls under the category of
resource limits, and that the TODO should read something like:

Implement the equivalent of Oracle PROFILEs.

I think this would be a good project for 7.4. I'm not yet
volunteering, but if I can wrap up my current project, I might
be able to do it, depending upon the 7.4 target date. It would be:

1. A new system table:

pg_profile

2. The attributes of the profiles would be:

profname
session_per_user
cpu_per_session
cpu_per_call
connect_time
idle_time
logical_reads_per_session
logical_reads_per_call

3. A new field would be added to pg_user/pg_shadow:

profileid

4. A 'default' profile, with no resource limits, would be created when
a new database is created. CREATE/ALTER USER would be
modified to allow for the specification of the profile. If no
profile is provided, 'default' is assumed.

5. A new CREATE PROFILE/ALTER PROFILE/DROP PROFILE command set
would be implemented to add/update/remove the tuples in
pg_profile, with corresponding modifications to pg_dump for
dump/reload and to psql for an appropriate \ command.

Example:

CREATE PROFILE clerk
IDLE_TIME 30;

ALTER USER john PROFILE clerk;
ALTER USER bob PROFILE clerk;

or, for an ISP maybe:

ALTER PROFILE default
IDLE_TIME 30;

It seems like a nice project, particularly since it wouldn't
affect anyone that doesn't want to use it. And whenever a new
resource limitation issue arises, such as PL/SQL recursion
depth, a new attribute would be added to pg_profile to handle
the limitation...

Mike Mascari
mas...@mascari.com

Bruce Momjian

Oct 25, 2002, 3:54:56 PM

I need others wanting this before I can add something of this
sophistication to TODO.

---------------------------------------------------------------------------



Tom Lane

Oct 25, 2002, 4:43:52 PM
Robert Treat <xzi...@users.sourceforge.net> writes:

> On Fri, 2002-10-25 at 16:17, Tom Lane wrote:
>> I prefer GUC variables to table entries for setting stuff like recursion
>> limits; they're much lighter-weight to create and access, and you don't
>> need an initdb to add or remove a parameter.

> I don't see an adequate way to store the individual settings as GUC
> variables per user...

Have you looked at the per-database and per-user GUC facilities in 7.3?

regards, tom lane


Bruce Momjian

Oct 25, 2002, 7:04:33 PM
Tom Lane wrote:
> Robert Treat <xzi...@users.sourceforge.net> writes:
> > On Fri, 2002-10-25 at 16:17, Tom Lane wrote:
> >> I prefer GUC variables to table entries for setting stuff like recursion
> >> limits; they're much lighter-weight to create and access, and you don't
> >> need an initdb to add or remove a parameter.
>
> > I don't see an adequate way to store the individual settings as GUC
> > variables per user...
>
> Have you looked at the per-database and per-user GUC facilities in 7.3?

Nice idea. You can now have per-user/db settings that are SET when the
connection is made. You can set any GUC variable that way. We just
need a variable that can look at all sessions and determine if that user
has exceeded their connection quota.
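The variable Bruce imagines could work roughly like this. Everything below is invented for illustration (`FakeSession`, `user_over_quota`); in his scheme the quota would come from the per-user GUC settings rather than a function argument.

```c
/* Illustrative sketch: at connection time, count the user's existing
 * sessions and compare against a per-user quota. Not PostgreSQL code. */
typedef struct {
    int userId;  /* owner of the session; -1 = free slot */
} FakeSession;

/* Returns 1 if a new connection for userId must be refused. */
int user_over_quota(const FakeSession *sessions, int nsessions,
                    int userId, int quota)
{
    int count = 0;
    for (int i = 0; i < nsessions; i++)
        if (sessions[i].userId == userId)
            count++;
    return count >= quota;
}
```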



Robert Treat

Oct 26, 2002, 12:39:33 AM
On Friday 25 October 2002 07:03 pm, Bruce Momjian wrote:
> Tom Lane wrote:
> > Robert Treat <xzi...@users.sourceforge.net> writes:
> > > On Fri, 2002-10-25 at 16:17, Tom Lane wrote:
> > >> I prefer GUC variables to table entries for setting stuff like
> > >> recursion limits; they're much lighter-weight to create and access,
> > >> and you don't need an initdb to add or remove a parameter.
> > >
> > > I don't see an adequate way to store the individual settings as GUC
> > > variables per user...
> >
> > Have you looked at the per-database and per-user GUC facilities in 7.3?
>

Maybe I haven't looked at them enough ;-)

> Nice idea. You can now have per-user/db settings that are SET when the
> connection is made. You can set any GUC variable that way. We just
> need a variable that can look at all sessions and determine if that user
> has exceeded their connection quota.

I understand how you are able to set those variables per db, but I don't see
how you can get those settings to persist between database shutdowns.
Perhaps someone can point me to the relevant documentation?

Robert Treat

Bruce Momjian

Oct 26, 2002, 1:05:39 AM
Robert Treat wrote:
> > Nice idea. You can now have per-user/db settings that are SET when the
> > connection is made. You can set any GUC variable that way. We just
> > need a variable that can look at all sessions and determine if that user
> > has exceeded their connection quota.
>
> I understand how you are able to set those variables per db, but I don't see
> how you can get those settings to persist between database shutdowns.
> Perhaps someone can point me to the relevant documentation?

The per db/user stuff is stored in the pg_database/pg_shadow tables per
row, so they exist in permanent storage.



Robert Treat

Oct 26, 2002, 1:42:13 AM
On Saturday 26 October 2002 01:04 am, Bruce Momjian wrote:
> Robert Treat wrote:
> > > Nice idea. You can now have per-user/db settings that are SET when the
> > > connection is made. You can set any GUC variable that way. We just
> > > need a variable that can look at all sessions and determine if that
> > > user has exceeded their connection quota.
> >
> > I understand how you are able to set those variables per db, but I don't
> > see how you can get those settings to persist between database shutdowns.
> > Perhaps someone can point me to the relevant documentation?
>
> The per db/user stuff is stored in the pg_database/pg_shadow tables per
> row, so they exist in permanent storage.

Ah.. that was the missing link. I don't think the docs mention anything about
the information being stored in pg_database or pg_shadow; they always refer
to the "session default", which made me wonder what happened "between
sessions". Perhaps someone (me?) should work up a patch to help clarify
this....

Robert Treat


Bruce Momjian

Oct 26, 2002, 4:06:36 PM
Bruno Wolff III wrote:

> On Sat, Oct 26, 2002 at 01:04:55 -0400,
> Bruce Momjian <pg...@candle.pha.pa.us> wrote:
> >
> > The per db/user stuff is stored in the pg_database/pg_shadow tables per
> > row, so they exist in permanent storage.
>
> I have a question about this. This stuff is per user OR per db right?
> When I see per db/user I get the impression that users can have different
> settings depending on which db they connect to. But looking at alter
> database and alter user it looks like settings are per database or
> per user, but there isn't a way to (in general) set something that
> applies to a particular user when connecting to a particular database.
> If there is a way to do that, I would be interested in a hint where to
> look in the documentation.

You are right, there isn't a per db/user combination setting. I think
one is done before the other, so you could try to set things that way,
maybe in a plpgsql procedure. I think you could do SELECT
db_user_set(); and have that function do the sets for you.

Bruce Momjian

unread,
Oct 26, 2002, 11:12:36 PM10/26/02
to

Tom's idea of doing this as part of the already added per-user/per-db
settings seems like a good direction to take.

---------------------------------------------------------------------------

Marc G. Fournier wrote:
> On Fri, 25 Oct 2002, Bruce Momjian wrote:
> >
> > I need others wanting this before I can add something of this
> > sophistication to TODO.
>
> Sounds cool to me, and would satisfy what I'm looking for, since it sounds
> like the same user could have different limits depending on the database
> they are/were connecting to ...



Alvaro Herrera

Oct 27, 2002, 12:09:15 AM
On Sat, Oct 26, 2002 at 11:58:35PM -0400, Bruce Momjian wrote:
> Alvaro Herrera wrote:

> > On Sat, Oct 26, 2002 at 11:12:01PM -0400, Bruce Momjian wrote:
> > >
> > > Tom's idea of doing this as part of the already added per-user/per-db
> > > settings seems like a good direction to take.
> >
> > Yes, but how? The current implementation has a setting column in
> > pg_shadow, and another in pg_database. Is the idea to create a separate
> > pg_settings table or something like that?
>
> Are you asking how to do per-db-user combination settings, like User A
> can have only two connections to database B? Seems we should get
> per-user and per-db settings working first and see if anyone needs
> combination settings.

Well, now that I think about it... the "user local to database" (those
with the @dbname appended and all that) idea may very well cover the
ground for this. But I'm not actually asking, because I don't need it.

--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Aprender sin pensar es inutil; pensar sin aprender, peligroso" (Confucio)
