How to handle SQLA autoflush failures during Request commit?


Jens Troeger

Jun 21, 2019, 1:39:21 PM6/21/19
to pylons-discuss
Hello,

I’m using gunicorn in front of a Pyramid server, and I changed the configuration from 1 to 4 workers. Now I occasionally see the following exception:

InvalidRequestError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely)
(pymysql.err.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
[SQL: UPDATE … SET foo=%(bar)s WHERE …]
[parameters: {…}]
(Background on this error at: http://sqlalche.me/e/e3q8)

The server follows the SQLAlchemy cookiecutter template and runs with a SQLAlchemy Session factory whose autoflush is left at its default of True. Curiously, only one particular endpoint keeps triggering this problem, and the failing UPDATE attempts to flip a boolean flag.

However, the exception gives rise to two questions:
  1. SQLAlchemy perspective. The reason for using autoflush here is to ensure that new ORM objects whose primary key is generated like so:
        id = Column(UUID(), default=uuid.uuid4, primary_key=True)
    have a valid id right after they’ve been created. Would a manual dbsession.flush() with autoflush disabled be preferable? Are there better recommended ways of handling this problem? (See this related question.)
  2. Pyramid perspective. Every Request object has its own Session object associated with it, which commits when request handling is done. However, the above exception in a sense fails the request handling. If I were to follow the suggestion to “begin a new transaction with this Session,” how would I do that? Does Pyramid provide existing support for handling such issues, or does the exception indicate a problem elsewhere that requires attention? What is best practice here?
In general, I think I’d like to understand the finer details of SQLAlchemy’s autoflush and how it interplays with Pyramid’s requests and their Sessions and transactions. I wonder, for example, whether the above indicates that a client has sent the same request more than once (Android’s httplib seems to do that), and with more than one worker enabled this manifests as a race condition.
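To illustrate what I mean by a manual flush, here is a minimal self-contained sketch (a hypothetical Widget model, with SQLite and a String(36) column standing in for our UUID() type):

```python
import uuid

from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Widget(Base):  # hypothetical stand-in for the real model
    __tablename__ = "widget"
    # String(36) stands in for the UUID() column type above
    id = Column(String(36), default=lambda: str(uuid.uuid4()), primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine, autoflush=False)()

w = Widget()
session.add(w)
assert w.id is None   # Python-side default not applied yet
session.flush()       # explicit flush assigns the generated id
assert w.id is not None
```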

Much thanks!
Jens

Theron Luhn

Jun 21, 2019, 3:03:24 PM6/21/19
to 'andi' via pylons-discuss
I’m also running the SQLAlchemy cookiecutter app behind Gunicorn.

1) I have autoflush disabled.  Autoflush doesn’t reduce the mental load (as your example shows, you still need to understand when and why to flush), and premature flushes are a headache to work around.
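For example (hypothetical model, SQLite for illustration): with autoflush on, a query issued while an incomplete object is pending tries to flush it and fails; a session.no_autoflush block is the usual workaround:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):  # hypothetical model for illustration
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()  # autoflush=True by default

session.add(Item(id=1))  # incomplete: name is still missing
with session.no_autoflush:
    # Without no_autoflush, this query would flush the pending Item
    # prematurely and fail on the NOT NULL constraint.
    count = session.query(Item).count()
assert count == 0
session.rollback()
```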

2) After a query fails (in this case a deadlock), the session is unusable for the remainder of the request.  This is expected behavior.  Usually when I’ve encountered an exception like this, it’s because either my exception view or a tween somewhere is trying to run a query.  I’m not sure how to overcome this limitation, but I haven’t found it too bad to work around.
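To make that concrete, here is a minimal reproduction (hypothetical model, SQLite standing in for MySQL, an integrity error standing in for the deadlock). Once a flush fails, every further use of the session raises until you issue rollback():

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.exc import IntegrityError, InvalidRequestError
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Thing(Base):  # hypothetical model for illustration
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Thing(id=1))
session.commit()

session.add(Thing(id=1))       # duplicate primary key
try:
    session.flush()            # fails, and deactivates the transaction
except IntegrityError:
    pass

raised = False
try:
    session.query(Thing).count()   # any further use of the session...
except InvalidRequestError:        # ...raises until rollback() is issued
    raised = True

session.rollback()             # “first issue Session.rollback()”
assert raised
assert session.query(Thing).count() == 1
```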

— Theron




Jonathan Vanasco

Jun 24, 2019, 4:53:53 PM6/24/19
to pylons-discuss
This generally happens because of an issue in your exception handling or because you are trying to share a database connection across process boundaries. 

The first thing to do is check whether you are connecting to the database before gunicorn forks.  If so, this will likely go away if you call `dispose()` on the SQLAlchemy engine.  `dispose()` essentially resets the connection pool, ensuring each worker gets its own, dedicated pool.
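For example, a sketch of a gunicorn `post_fork` hook that does this (the `create_engine` call here is a placeholder for however your app actually builds its engine):

```python
# gunicorn.conf.py (sketch; "sqlite://" is a placeholder for your
# real database URL / engine setup)
from sqlalchemy import create_engine

engine = create_engine("sqlite://")

def post_fork(server, worker):
    # Drop any connections inherited from the master process so this
    # worker builds its own, dedicated connection pool.
    engine.dispose()
```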

There could also be an issue where you are sharing a Session across requests or process boundaries.  That can be harder to pinpoint.
