You should definitely create the engine and metadata on a per-process
basis. When using SQLite, the engine automatically selects the
"SingletonThreadPool" connection pool, which maintains a single
SQLite connection per application thread and never moves a connection
across threads (unless you do so explicitly). The error you're
getting would only occur if you're sharing the connection returned by
the engine across threads, which can also happen if a single Session
bound to a connection is used across threads. When using the
scoped_session() manager, this shouldn't occur either - some
description of this lifecycle is at
http://www.sqlalchemy.org/docs/05/session.html#unitofwork_contextual_lifespan
The scoped_session approach is widely used in conjunction with SQLite
in many web frameworks, including Pylons, TurboGears and Zope, so you'd
want to ensure that your specific approach isn't sharing a single
Connection or Session between threads (see the sketch below).
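
As a rough sketch of the per-thread usage pattern that avoids the problem
(continuing from the example above - worker() and the inserted rows are
made up for illustration):

    import threading

    def worker(name):
        # Session() returns the session belonging to the calling thread,
        # so nothing is shared between threads here
        session = Session()
        session.execute(users.insert().values(name=name))
        session.commit()
        # hand this thread's session (and its connection) back to the registry
        Session.remove()

    threads = [threading.Thread(target=worker, args=("user%d" % i,))
               for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Each thread gets its own Session and, through the pool, its own SQLite
connection, so pysqlite's thread-affinity error shouldn't come up.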