Well, if your application is smart enough to know it needs to
dynamically add connections, it should also be smart enough to tear them
down after some idle period. I agree with Tom. I think that sounds
like application domain.
Greg
What I am saying is that using the backend to throttle per-db
connections may not work too well because they will just keep retrying.
I realize that the total limit can be hit too, but I assume that limit
is set so it will not be hit (it's a resource tradeoff), while the
per-db limit is there to try to throttle back the persistent
connections.
Basically, total connections is to be set larger than you think you will
ever need, while you expect per-db to be hit, and if something keeps
trying to connect and failing, we may get very bad connection
performance for other backends. This is where doing the limiting on the
persistent connection end would be a better solution.
--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Hmm, I see your point. A per-db limit *could* be useful even if it's
set high enough that you don't expect it to be hit ... but most likely
people would try to use it in a way that it wouldn't be very efficient
compared to a client-side solution.
regards, tom lane
> Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > Basically, total connections is to be set larger than you think you will
> > ever need, while you expect per-db to be hit, and if something keeps
> > trying to connect and failing, we may get very bad connection
> > performance for other backends.
>
> Hmm, I see your point. A per-db limit *could* be useful even if it's
> set high enough that you don't expect it to be hit ... but most likely
> people would try to use it in a way that it wouldn't be very efficient
> compared to a client-side solution.
What about a shared database server, where you want to have resource
limits for each database/user?
It could be useful in such a case; even if it is not very efficient, it
would be the only way. As the DBA, you may not have control over the
client apps.
Just a thought.
Regards,
Michael
It would be better/easier to apply the check later on, when a backend is
adding itself to the PGPROC array. It'd be easy enough to count the
number of other backends showing the same DB OID in their PGPROC
entries, and reject if too many.
regards, tom lane
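Tom's check itself would live in the backend's C code where the PGPROC
array is scanned, but the tally he describes can be illustrated at the
SQL level. A minimal sketch, assuming the statistics collector is
enabled so that pg_stat_activity is populated (the per-db limit itself
is still hypothetical):
    -- count backends per database, the same count the PGPROC scan would make
    SELECT datname, count(*) AS backends
    FROM pg_stat_activity
    GROUP BY datname;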
> Well, if your application is smart enough to know it needs to
> dynamically add connections, it should also be smart enough to tear them
> down after some idle period. I agree with Tom. I think that sounds
> like application domain.
Well, there are two different things here. I agree that if an app
is going to use persistent connections, it should be the app's
responsibility to manage them. But a per-database, as opposed to
installation-wide, limit on number of connections seems like a
reasonable idea. Note that the limit would result in new connections
being rejected, not old ones being summarily cut.
regards, tom lane
But then the app is going to keep trying to connect over and over unless
it knows something about why it can't connect.
--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
> Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > Basically, total connections is to be set larger than you think you will
> > ever need, while you expect per-db to be hit, and if something keeps
> > trying to connect and failing, we may get very bad connection
> > performance for other backends.
>
> Hmm, I see your point. A per-db limit *could* be useful even if it's
> set high enough that you don't expect it to be hit ... but most likely
> people would try to use it in a way that it wouldn't be very efficient
> compared to a client-side solution.
As mentioned in my response to Bruce ... in an ISP situation, a DoS attack
against the database by a single client can be very easy to accomplish in
our current situation ... all I need to do is set up a perl script that
opens all the connections I can to the database I have access to until all
are used up, and nobody else has access to *their* databases ...
> Tom Lane wrote:
> > Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > >> Well, there are two different things here. I agree that if an app
> > >> is going to use persistent connections, it should be the app's
> > >> responsibility to manage them. But a per-database, as opposed to
> > >> installation-wide, limit on number of connections seems like a
> > >> reasonable idea. Note that the limit would result in new connections
> > >> being rejected, not old ones being summarily cut.
> >
> > > But then the app is going to keep trying to connect over and over unless
> > > it knows something about why it can't connect.
> >
> > So? If it hits the installation-wide limit, you'll have the same
> > problem; and at that point the (presumably runaway) app would have
> > sucked up all the connections, denying service to other apps using other
> > databases. I think Marc's point here is to limit his exposure to
> > misbehavior of any one client app, in a database server that is serving
> > multiple clients using multiple databases.
>
> What I am saying is that using the backend to throttle per-db
> connections may not work too well because they will just keep retrying.
Okay, but also bear in mind that a lot of the time, when I'm bringing up
stuff like this, I'm coming from the "ISP" perspective ... if I have one
client that is using up all 512 connections on the server, none of my
other clients are getting any connections ...
Yes, the client should have tested his code better, but I want to be able
to put 'limits' to make it so that everyone isn't affected by one's mistake
...
> I realize that the total limit can be hit too, but I assume that limit
> is set so it will not be hit (it's a resource tradeoff), while the
> per-db limit is there to try to throttle back the persistent
> connections.
Nope, the per-db limit is there to try and eliminate the impact of one
client/application from essentially creating a DoS for all other
database/clients ...
> Basically, total connections is to be set larger than you think you will
> ever need, while you expect per-db to be hit, and if something keeps
> trying to connect and failing, we may get very bad connection
> performance for other backends. This is where doing the limiting on the
> persistent connection end would be a better solution.
Agreed, but unless you have control over both the client and server sides,
it's not possible ...
> Tom Lane wrote:
> > Bruce Momjian <pg...@candle.pha.pa.us> writes:
> > > Basically, total connections is to be set larger than you think you will
> > > ever need, while you expect per-db to be hit, and if something keeps
> > > trying to connect and failing, we may get very bad connection
> > > performance for other backends.
> >
> > Hmm, I see your point. A per-db limit *could* be useful even if it's
> > set high enough that you don't expect it to be hit ... but most likely
> > people would try to use it in a way that it wouldn't be very efficient
> > compared to a client-side solution.
>
> The only way to do it would be, after a few hits of the limit, to start
> delaying the connection rejections so you don't get hammered. It could
> be done, but even then, I am not sure if it would be optimal.
Note that I don't believe there is an "optimal solution" for this ... but
in an environment where there are several clients connecting to several
different databases, the ability for one client to starve out the others
is actually very real ...
That would indeed be a useful item. The only way to avoid such
exposure right now is to run another backend.
A
--
----
Andrew Sullivan 204-4141 Yonge Street
Liberty RMS Toronto, Ontario Canada
<and...@libertyrms.info> M2P 2A8
+1 416 646 3304 x110
Yes, this would be amazingly useful. I work for a web hosting provider
and it happens all too often that a single customer creates a flawed
script which consumes all the DB connections, obviously denying access
to the rest of our customers.
Being able to set this per-DB connection limit in Postgres itself
without having to restart the backend would also make this feature very
nice.
--
Best Regards,
Mike Benoit
NetNation Communication Inc.
Systems Engineer
Tel: 604-684-6892 or 888-983-6600
---------------------------------------
Disclaimer: Opinions expressed here are my own and not
necessarily those of my employer
... except in the added overhead to do the resource accounting and check
to see if there is a restriction ...
> And whenever a new
> resource limitation issue arises, such as PL/SQL recursion
> depth, a new attribute would be added to pg_profile to handle
> the limitation...
I prefer GUC variables to table entries for setting stuff like recursion
limits; they're much lighter-weight to create and access, and you don't
need an initdb to add or remove a parameter.
regards, tom lane
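(As an aside, an example of the sort of lightweight knob Tom means: in
7.3 the expression-recursion limit is already a plain GUC variable,
shown here with its default value.)
    -- session-level override; no catalog change or initdb needed
    SET max_expr_depth = 10000;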
Added to TODO:
* Allow limits on per-db/user connections
--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Could I suggest that such a feature falls under the category of
resource limits, and that the TODO should read something like:
Implement the equivalent of Oracle PROFILEs.
I think this would be a good project for 7.4. I'm not yet
volunteering, but if I can wrap up my current project, I might
be able to do it, depending upon the 7.4 target date. It would be:
1. A new system table:
       pg_profile
2. The attributes of the profiles would be (see the catalog sketch
   after the examples below):
       profname
       session_per_user
       cpu_per_session
       cpu_per_call
       connect_time
       idle_time
       logical_reads_per_session
       logical_reads_per_call
3. A new field would be added to pg_user/pg_shadow:
       profileid
4. A 'default' profile with no resource limits would be created when
   a new database is created. CREATE/ALTER USER would be modified to
   allow for the specification of a profile. If no profile is
   provided, 'default' is assumed.
5. A new CREATE PROFILE/ALTER PROFILE/DROP PROFILE command set would
   be implemented to add/update/remove the tuples in pg_profile, with
   corresponding modifications to pg_dump for dump/reload and to psql
   for an appropriate backslash command.
Example:
    CREATE PROFILE clerk
        IDLE_TIME 30;
    ALTER USER john PROFILE clerk;
    ALTER USER bob PROFILE clerk;
or, for an ISP, maybe:
    ALTER PROFILE default
        IDLE_TIME 30;
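Putting points 1 and 2 together, the proposed catalog might look
something like the following. This is only a sketch; the column types
and the comments are guesses, not part of the proposal:
    CREATE TABLE pg_profile (
        profname                  name,  -- unique profile name
        session_per_user          int4,  -- max concurrent sessions per user
        cpu_per_session           int4,  -- total CPU allowed per session
        cpu_per_call              int4,  -- CPU allowed per statement
        connect_time              int4,  -- max total connection time
        idle_time                 int4,  -- max idle time before disconnect
        logical_reads_per_session int4,  -- buffer reads allowed per session
        logical_reads_per_call    int4   -- buffer reads allowed per statement
    );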
It seems like a nice project, particularly since it wouldn't
affect anyone who doesn't want to use it. And whenever a new
resource limitation issue arises, such as PL/SQL recursion
depth, a new attribute would be added to pg_profile to handle
the limitation...
Mike Mascari
mas...@mascari.com
> I don't see an adequate way to store the individual settings as GUC
> variables per user...
Have you looked at the per-database and per-user GUC facilities in 7.3?
regards, tom lane
Nice idea. You can now have per-user/db settings that are SET when the
connection is made. You can set any GUC variable that way. We just
need a variable that can look at all sessions and determine if that user
has exceeded their connection quota.
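For reference, the 7.3 facilities being described are the SET clauses
of ALTER DATABASE and ALTER USER; the stored values are applied
automatically when the connection is made. The parameter choices here
are just examples:
    ALTER DATABASE webdb SET sort_mem = 4096;
    ALTER USER john SET statement_timeout = 30000;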
--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Maybe I haven't looked at them enough ;-)
> Nice idea. You can now have per-user/db settings that are SET when the
> connection is made. You can set any GUC variable that way. We just
> need a variable that can look at all sessions and determine if that user
> has exceeded their connection quota.
I understand how you are able to set those variables per db, but I don't see
how you can get those settings to persist between database shutdowns.
Perhaps someone can point me to the relevant documentation?
Robert Treat
The per-db/user stuff is stored per row in the pg_database/pg_shadow
tables, so the settings exist in permanent storage.
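Concretely, the stored settings are visible in those catalogs; in 7.3
the columns are datconfig and useconfig, arrays of 'name=value' strings:
    SELECT datname, datconfig FROM pg_database;
    SELECT usename, useconfig FROM pg_shadow;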
--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Ah.. that was the missing link. I don't think the docs mention anything about
the information being stored in pg_database or pg_shadow; they always refer
to the "session default", which made me wonder what happened "between
sessions". Perhaps someone (me?) should work up a patch to help clarify
this....
Robert Treat
You are right, there isn't a per db/user combination setting. I think
one is done before the other, so you could try to set things that way,
maybe in a plpgsql procedure. I think you could do SELECT
db_user_set(); and have that function do the sets for you.
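A sketch of the kind of function being hypothesized here: db_user_set()
is Bruce's hypothetical name, the sort_mem values are arbitrary, and the
doubled quotes are needed because 7.3 has no dollar quoting
(current_database() and set_config() do exist in 7.3):
    CREATE FUNCTION db_user_set() RETURNS integer AS '
    BEGIN
        -- apply different session settings depending on the database
        IF current_database() = ''webdb'' THEN
            PERFORM set_config(''sort_mem'', ''8192'', false);
        ELSE
            PERFORM set_config(''sort_mem'', ''1024'', false);
        END IF;
        RETURN 1;
    END;
    ' LANGUAGE 'plpgsql';
Each client would then issue SELECT db_user_set(); right after connecting.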
---------------------------------------------------------------------------
Marc G. Fournier wrote:
> On Fri, 25 Oct 2002, Bruce Momjian wrote:
>
> >
> > I need others wanting this before I can add something of this
> > sophistication to TODO.
>
> Sounds cool to me, and would satisfy what I'm looking for, since it sounds
> like the same user could have different limits depending on the database
> they are/were connecting to ...
--
Bruce Momjian | http://candle.pha.pa.us
pg...@candle.pha.pa.us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Well, now that I think about it... the "user local to database" (those
with the @dbname appended and all that) idea may very well cover the
ground for this. But I'm not actually asking, because I don't need it.
--
Alvaro Herrera (<alvherre[a]dcc.uchile.cl>)
"Aprender sin pensar es inutil; pensar sin aprender, peligroso" (Confucio)