PyDAL pool_size and connections


Richard Behan-Howell

Sep 17, 2021, 2:35:07 AM
to py4web
Hi,
Recently I hit some Postgres errors (psycopg2.OperationalError: FATAL: remaining connection slots are reserved for non-replication superuser connections) that brought my web site down for 8 hours until I reloaded it.

The max number of connections in my Postgres DB was set to 20; I have now changed that to 100.
The pool_size was set to 1 for each of my two py4web apps and 10 for a web2py app, all using the same DB. I have now increased it to 10 for each app.
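For context, pool_size is just a keyword argument on the DAL constructor; a configuration fragment like the one below (URI and credentials are placeholders) is what is being tuned here:

```python
from pydal import DAL

# Placeholder URI/credentials. pool_size caps the number of idle
# connections kept around per worker process for reuse.
db = DAL(
    "postgres://user:password@localhost/mydb",
    pool_size=10,
)
```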

My question is: how can I tell how many connections py4web is using at any given time, and is there a way to limit it so I don't go over the Postgres limit?
Will py4web disconnect old connections?
I couldn't find an answer to this in the documentation.

I don't want this to bring my website down again.

Regards,
Richard.

Massimo DiPierro

Sep 17, 2021, 3:06:41 AM
to Richard Behan-Howell, py4web
We may need some debugging here. Is it possible you actually had more than 20 users connected concurrently, busy in long requests? pydal will put connections in a pool and reuse them. It will create a new one only if all previously created connections are busy.
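That reuse-or-create behaviour can be sketched like this (a toy illustration of the idea, not pydal's actual implementation):

```python
import queue


class FakeConn:
    """Stand-in for a real database connection."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class TinyPool:
    """Toy pool: reuse idle connections; open a new one only when all are busy."""
    def __init__(self, factory, pool_size):
        self.factory = factory        # callable that opens a new connection
        self.pool_size = pool_size    # how many idle connections to keep around
        self.idle = queue.Queue()     # connections handed back after a request
        self.created = 0              # total connections ever opened

    def acquire(self):
        try:
            return self.idle.get_nowait()  # reuse an idle connection if any
        except queue.Empty:
            self.created += 1              # all busy: open a fresh one
            return self.factory()

    def release(self, conn):
        if self.idle.qsize() < self.pool_size:
            self.idle.put(conn)            # keep it for the next request
        else:
            conn.close()                   # pool already full: really close it
```

So with pool_size=1 and strictly sequential requests only one connection is ever created; the count grows only when requests overlap.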


Richard Behan-Howell

Sep 17, 2021, 6:58:50 PM
to py4web
Massimo,
Some more context: I am using PythonAnywhere with 2 web workers per website (3 web apps).
I do have incoming requests from a Python task that pushes data into the DB, sometimes up to 7 or 8 requests in very short succession.
So I believe that was my issue: only 20 allowed connections at peak times, along with me connecting to the DB with pgAdmin to do maintenance and leaving it open.
I have changed the DB max connections to 100 to allow for peak demand. 

I have checked the Postgres connections in pg_stat_activity and at the moment there are only 8 idle connections: 2 from one py4web app, 4 from the other, and 2 from web2py (from the pool of five each), so all good.
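For anyone wanting to script that check, a rough sketch (the real query would run against the server, e.g. via psql or db.executesql() in pydal; the sample rows below are made up to mirror the counts above):

```python
from collections import Counter

# Real query against Postgres:
#   SELECT application_name, state FROM pg_stat_activity;
# Made-up sample rows standing in for that result:
rows = [
    ("app_one", "idle"), ("app_one", "idle"),
    ("app_two", "idle"), ("app_two", "idle"),
    ("app_two", "idle"), ("app_two", "idle"),
    ("web2py", "idle"), ("web2py", "idle"),
]

def idle_counts(rows):
    """Count idle connections per application_name."""
    return Counter(app for app, state in rows if state == "idle")

print(idle_counts(rows))
# Counter({'app_two': 4, 'app_one': 2, 'web2py': 2})
```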

I will leave this for now and monitor it. Is it possible to find out when pyDAL opens and closes a connection, perhaps via a log file? I'm just curious.
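On the Postgres side, connection opens and closes can be logged without touching pyDAL at all; these are standard postgresql.conf settings (they log to the server log, not the application's):

```
# postgresql.conf — log every connection open/close to the server log
log_connections = on
log_disconnections = on
```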

Massimo

Sep 19, 2021, 11:48:57 PM
to py4web
keep us posted.

zejd...@gmail.com

Apr 1, 2022, 12:05:32 PM
to py4web
Hi, not sure if my issue is related to what Richard mentioned, but I am experiencing a DB connection leak. In my setup I have 4 apps (each running on its own py4web in a separate Docker container), all connected to one Postgres DBMS running in its own container: one DB per app, each app with DB_POOL_SIZE=10. To avoid quickly hitting the Postgres max connections limit, I had to increase the max_connections setting. But after a while I get hundreds of connections left in the "idle" state after either COMMIT or ROLLBACK (checked with pg_stat_activity). The idling connections come from all my apps, and it happens even on my dev instance with no concurrent users. Please, do you have an idea how this could be solved? I tried closing connections explicitly by calling db.close(), but that leads to errors when the pool tries to re-use a connection which has already been closed.

Thank you in advance for any hints!
David

zejd...@gmail.com

Apr 10, 2022, 12:49:02 PM
to py4web
Hi, please, may I ask for some hints on how to investigate the DB connection leak problem? The number of PostgreSQL connections from all my apps keeps growing, even into the hundreds. All connections enter the idle state after a COMMIT or ROLLBACK query. It seems that the connection pool does not pick them up properly but keeps opening new ones (despite the DAL having pool_size=10). I am not sure any connections are ever closed.

Thanks in advance!
David

zejd...@gmail.com

Apr 19, 2022, 6:06:01 AM
to py4web
Hi, I am still dealing with this nasty problem.

Just tried pool_size=0, but the number of idling connections keeps growing anyway. As soon as the max_connections limit set by PostgreSQL is reached, the DB starts refusing further connections, crashing the app.

The problem disappears if I use this version of DAL, but it requires pool_size=0:

import pydal
import threadsafevariable
from py4web.core import Fixture, ICECUBE  # names py4web's own DAL fixture uses

class ClosingDAL(pydal.DAL, Fixture):

    reconnect_on_request = True

    def on_request(self, context):
        if self.reconnect_on_request:
            self._adapter.reconnect()
        threadsafevariable.ThreadSafeVariable.restore(ICECUBE)

    def on_error(self, context):
        self.rollback()
        self.close()

    def on_success(self, context):
        self.commit()
        self.close()

Since the DAL fixture in core.py performs only commit() or rollback(), which part of pydal/py4web is responsible for closing idle connections above pool_size? Do you have an idea how to make the connection pool work without leaking connections?
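As a rough mental model (a simplified sketch based on reading pydal's ConnectionPool, not its exact code), the close path is supposed to work like this: if the per-URI pool holds fewer than pool_size connections, the finished connection is pushed back for reuse; otherwise, or with pool_size=0, it is genuinely closed.

```python
POOLS = {}  # uri -> list of idle connections (module-level, like pydal's)


class FakeConn:
    """Stand-in for a real database connection."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def recycle_or_close(uri, connection, pool_size):
    """Simplified version of what should happen when a request finishes."""
    pool = POOLS.setdefault(uri, [])
    if pool_size and len(pool) < pool_size:
        pool.append(connection)     # keep it for the next request
    else:
        connection.close()          # pool full (or pooling disabled): close

# With pool_size=0 nothing is ever kept:
c = FakeConn()
recycle_or_close("postgres://db", c, pool_size=0)
print(c.closed)  # True
```

The leak described above suggests the real code path was never reaching either branch for some connections.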

Thanks!
David

zejd...@gmail.com

Apr 21, 2022, 7:57:21 AM
to py4web
Hi, after upgrading to the latest PyPI versions (py4web 1.20220412.1, pydal 20220213.2), the connection leak with DAL is still present, and with my ClosingDAL I started receiving this error:

Traceback (most recent call last):
  File "/home/py4web/.local/lib/python3.8/site-packages/py4web/core.py", line 935, in wrapper
    ret = func(*func_args, **func_kwargs)
  File "/home/py4web/.local/lib/python3.8/site-packages/py4web/core.py", line 920, in wrapper
    raise context["exception"]
  File "/home/py4web/.local/lib/python3.8/site-packages/py4web/core.py", line 916, in wrapper
    call(fixture.on_success, context)
  File "/home/py4web/.local/lib/python3.8/site-packages/py4web/core.py", line 882, in call
    return f(context)
  File "/home/py4web/apps/.../modules/....py", line 205, in on_success
    self.close()
  File "/home/py4web/.local/lib/python3.8/site-packages/pydal/base.py", line 830, in close
    if self._db_uid in THREAD_LOCAL._pydal_db_instances_:
AttributeError: '_thread._local' object has no attribute '_pydal_db_instances_'

So now I have to ignore the error (another workaround):

import pydal
import threadsafevariable
from py4web.core import Fixture, ICECUBE  # names py4web's own DAL fixture uses

class ClosingDAL(pydal.DAL, Fixture):

    reconnect_on_request = True

    def on_request(self, context):
        if self.reconnect_on_request:
            self._adapter.reconnect()
        threadsafevariable.ThreadSafeVariable.restore(ICECUBE)

    def __close_silently(self):
        try:
            self.close()
        except Exception as e:
            print(f"[DAL] closing error {e}")

    def on_error(self, context):
        self.rollback()
        self.__close_silently()

    def on_success(self, context):
        self.commit()
        self.__close_silently()

David

Massimo

May 1, 2022, 2:38:22 PM
to py4web
Sorry for the delay. I will prioritize investigating this. Please open a ticket if you have not done so already.

Massimo

Jul 25, 2022, 6:29:28 AM
to py4web
Hello Richard,

you were absolutely right. There was a major bug in py4web related to connection pools, and it is now fixed in version 1.20220725.1. I also made a new pydal release (also versioned 20220725.1) with some convenience functions for pooling.

It is all deployed so please give it a try.

The pydal logic related to this is more complicated than it needs to be because it is designed to support web2py, multiple databases, and distributed transactions. At this time py4web does not have logic to support distributed transactions, but it does support multiple databases.

Thanks for bringing this up.

Massimo

Massimo

Jul 25, 2022, 6:31:12 AM
to py4web
Also thanks to David, and Luca about this.