Thanks for the answers.
I've been using gunicorn with sync workers and it seems clear to me.
If I understand correctly, everything that runs at import time (before
the workers are forked) lives in this "shared" memory space, and those
objects receive identical ids in Python across the processes. Client
libraries that cannot safely be shared across a fork need to be
regenerated / disposed of / otherwise taken care of, e.g. with
engine.dispose().
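For what it's worth, I believe the usual way to do this with gunicorn is its post_fork server hook. A sketch of a gunicorn.conf.py, assuming the engine is importable from a hypothetical myapp.db module:

```python
# gunicorn.conf.py -- hypothetical sketch; assumes the engine is
# created at import time in a module of your app (here myapp.db).
from myapp.db import engine

def post_fork(server, worker):
    # Discard the connection pool inherited from the parent process so
    # this worker opens fresh connections. close=False leaves the
    # parent's actual connections untouched.
    engine.dispose(close=False)
```

Each worker then builds its own pool lazily on first use, which would explain the different engine-internal connection ids per worker.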
The fact that database engines get different ids in each worker means
to me that I'm totally safe under gunicorn sync with SQLAlchemy.
Waitress, on the other hand, surprises me. If waitress is
multi-threaded, how come it doesn't require every single module and
sub-dependency to be thread-safe? I've only been using waitress for
local development via pserve, but I see it's a popular choice for a
production server as well. How can it work in multi-threaded mode if
you can't vouch for every dependency of every library you use? Or do
you actually audit all your libraries?
Now about SQL: why do I need any kind of transaction implementation?
Isn't

    with engine.connect() as conn:
        conn.execute(stmt)

already a transaction? I'd just need to explicitly pass conn to every
function that needs DB access, but I don't see why it is important to
do anything more than this, even with write queries. (I don't know how
I would get DB access in view derivers with this pattern, though.)
Zsolt