Session query timeout

Chris

Jul 17, 2008, 9:24:21 AM
to sqlalchemy
Hi all, I'm using SQLAlchemy to access a large table (~280 million
rows), and I'm running into timeout issues: the query aborts after 30
seconds. Rather than getting all tables, past and future, indexed
differently, I was wondering if there is a way, using session.query()
(*not* select()), to change the default timeout of 30 seconds to
something more suitable for such large tables. All I have found on
the web points to QueuePool and the pool_timeout parameter, but it is
unclear how that fits in with a mapper and session.query(). Here is a
snippet of the code I am using:

from sqlalchemy import create_engine, MetaData, Table, Column, Integer
from sqlalchemy.orm import mapper, sessionmaker

session = None
table1 = None

class Object(object):
    pass

def createSession():
    global session, table1
    engine = create_engine('mssql://<database>', echo=False)
    metadata = MetaData()
    metadata.bind = engine
    # Reflect the large table, declaring LocationId as the primary key.
    table1 = Table('<Large Table>', metadata,
                   Column('LocationId', Integer, primary_key=True),
                   autoload=True)
    mapper(Object, table1)
    Session = sessionmaker(bind=engine, autoflush=True,
                           transactional=True)
    session = Session()

def getData(lmplocation_id):
    global session, table1
    if not session:
        createSession()
    # Filter to the requested location rather than selecting every row.
    result = session.query(Object).filter_by(LocationId=lmplocation_id)
    return result

Thanks,
Chris

Michael Bayer

Jul 17, 2008, 10:20:17 AM
to sqlal...@googlegroups.com

What database is this? The "timeout" of the connection itself, or
the maximum time allowed for a single execution, is typically a
setting specific to your client library. If it's an argument to the
DBAPI's connect() method, it can be passed through using the
"connect_args" kwarg to create_engine() - though I usually see
settings like these in the configuration files for the database
itself.
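
For example, a minimal sketch assuming pymssql as the DBAPI and that
its connect() accepts a per-query "timeout" in seconds (the argument
name varies per driver, so check your DBAPI's documentation):

from sqlalchemy import create_engine

# "timeout" here is an assumed pymssql connect() keyword; SQLAlchemy
# just passes connect_args through to the DBAPI's connect() untouched.
engine = create_engine('mssql://<database>',
                       connect_args={'timeout': 300},
                       echo=False)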

SQLA itself does not set any timeout parameters; we only have
"pool_recycle" which is turned off by default, and only takes effect
upon pool checkout for an otherwise unused connection.
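
For completeness, pool_recycle is just another create_engine()
argument:

from sqlalchemy import create_engine

# Connections idle in the pool longer than 3600 seconds are closed and
# replaced at checkout; this does not limit statement execution time.
engine = create_engine('mssql://<database>', pool_recycle=3600)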
