Interpreting a SA traceback


Dan Stromberg

May 24, 2023, 1:33:51 PM
to 'Dan Stromberg' via sqlalchemy
I know, Python 2 is long dead. We’re almost ready for Python 3, but not quite.

Anyway, here’s a traceback that I’m not sure what to make of. Is it saying that a transaction got so big that it couldn’t be fully flushed within the timeout window? I’ve elided a sensitive part from the very end of the traceback. I realize this error is coming from MySQL, not SA.

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/app/app/common/routing.py", line 117, in __call__
    return Wrapper(*args, **kwargs)
  File "/app/app/common/auth/manager.py", line 340, in AuthDecorator
    return callback(self, *args, **kwargs)
  File "/app/app/common/routing.py", line 101, in Wrapper
    return self.callback(context, *a, **k)
  File "/app/app/api/service/v3/publish/controllers.py", line 99, in publish_new_build
    return publish_helper(context, manifestData, pipeline, keyring, token=token)
  File "/app/app/common/publishing/helpers.py", line 181, in publish_helper
    build.save(commit=False)
  File "/app/app/db/mixins.py", line 16, in save
    _session.flush()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/scoping.py", line 153, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2254, in flush
    self._flush(objects)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2380, in _flush
    transaction.rollback(_capture_exception=True)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 2344, in _flush
    flush_context.execute()
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 391, in execute
    rec.execute(self)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 556, in execute
    uow
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
    mapper, table, insert)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 866, in _emit_insert_statements
    execute(statement, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 948, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
    exc_info
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 507, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 165, in execute
    result = self._query(query)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 321, in _query
    conn.query(q)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 860, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1061, in _read_query_result
    result.read()
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1349, in read
    first_packet = self.connection._read_packet()
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1018, in _read_packet
    packet.check_error()
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 384, in check_error
    err.raise_mysql_exception(self._data)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 107, in raise_mysql_exception
    raise errorclass(errno, errval)
InternalError: (pymysql.err.InternalError) (1205, u'Lock wait timeout exceeded; try restarting transaction') [SQL: u'INSERT INTO …
(Background on this error at: http://sqlalche.me/e/2j85)

We’re running:
tact@tact_pub_api:/app$ python2 -m pip list -v | grep -i sqlalchemy
Flask-SQLAlchemy  2.5.1  /usr/local/lib/python2.7/dist-packages  pip
SQLAlchemy        1.2.5  /usr/local/lib/python2.7/dist-packages  pip

And:
bash-4.2$ rpm --query --all | grep -i mysql
mysql-shell-8.0.33-1.el7.x86_64
mysql-community-server-minimal-5.7.42-1.el7.x86_64

Any suggestions? Might it help to increase a timeout somewhere? Or maybe we need to use smaller transactions?

Thanks for the cool software.

Philip Semanchuk

May 24, 2023, 1:48:43 PM
to sqlal...@googlegroups.com


> On May 24, 2023, at 1:33 PM, Dan Stromberg <stro...@gmail.com> wrote:
>
> I know, python2 is long dead. We’re almost ready for Python3, but not quite.
>
> Anyway, here’s a traceback that I’m not sure what to make of. Is it
> saying that a transaction got so big that it couldn’t be fully flushed
> within the timeout window? I’ve elided a sensitive part from the very
> end of the traceback. I realize this error is coming from mysql, not
> SA.


Hi Dan,
I think you hit the nail on the head when you said "I realize this error is coming from mysql, not SA." It looks to me like your Python code attempted a commit, but some other transaction running on the server was locking the table that your Python code was attempting to INSERT into. Eventually MySQL noticed that your INSERT transaction had been in a lock wait state for too long and killed the INSERT transaction. SQLAlchemy is just reporting what happened.

The remedy might be to increase the lock wait timeout. I use Postgres, not MySQL, so rather than misdirecting you on how to do that, I'll let you look it up. Increasing the lock wait timeout might be a reasonable strategy if for some reason it's currently set to a very low value, but if it's not (and I expect the default is not low), then increasing the timeout addresses the symptom rather than the disease. It's good practice not to have long-running transactions. The first avenue I would explore is to figure out what other transaction was open for so long that your INSERT was locked out, and then to figure out why it was open for so long.
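If raising the timeout does turn out to be appropriate, here is a minimal sketch of doing it per connection from the SQLAlchemy side rather than in the server config; innodb_lock_wait_timeout is the relevant MySQL variable (its default is 50 seconds), and the 120 below is purely illustrative:

    from sqlalchemy import create_engine, event

    engine = create_engine("mysql+pymysql://user:password@host/dbname")  # placeholder URL

    @event.listens_for(engine, "connect")
    def set_lock_wait_timeout(dbapi_conn, connection_record):
        # Runs for every new DBAPI connection SQLAlchemy opens for its pool.
        cursor = dbapi_conn.cursor()
        cursor.execute("SET SESSION innodb_lock_wait_timeout = 120")
        cursor.close()

That only buys the waiting INSERT more time, though; finding and shortening the long-running transaction that holds the lock is the more durable fix.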


Hope this helps
Philip