Chris
Jul 17, 2008, 9:24:21 AM
to sqlalchemy
Hi all, I'm using SQLAlchemy to access a large table (~280 million
rows), and I'm running into timeout issues: after 30 seconds the query
is cut off. Rather than getting every table, past and future, indexed
differently, I was wondering whether there is a way, using
session.query (*not* select()), to change the default 30-second timeout
to something more suitable for such a large table. Everything I have
found on the web points to QueuePool and its pool_timeout parameter,
but it is unclear how that fits in with a mapper and session.query (my
best guess is sketched below, after the snippet). Here is a snippet of
the code I am using:
from sqlalchemy import create_engine, MetaData, Table, Column, Integer
from sqlalchemy.orm import mapper, sessionmaker

session = None
table1 = None

def createSession():
    global session, table1
    engine = create_engine('mssql://<database>', echo=False)
    metadata = MetaData()
    metadata.bind = engine
    # Reflect the large table, overriding only the primary key column
    table1 = Table('<Large Table>', metadata,
                   Column('LocationId', Integer, primary_key=True),
                   autoload=True)
    # Object is the mapped class, defined elsewhere
    mymapper1 = mapper(Object, table1)
    Session = sessionmaker(bind=engine, autoflush=True,
                           transactional=True)
    session = Session()

def getData(lmplocation_id):
    global session, table1
    if not session:
        createSession()
    # query only the rows for the requested location id
    result = session.query(Object).filter_by(LocationId=lmplocation_id)
    return result
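For reference, here is my best guess, untested. As far as I can tell,
pool_timeout only controls how long a checkout waits for a free
connection from the QueuePool, so I suspect the 30 seconds is really
the DBAPI's query timeout rather than anything SQLAlchemy itself
enforces. If the mssql:// dialect is sitting on pymssql, its connect()
takes a timeout argument (in seconds) that create_engine() should be
able to forward via connect_args:

from sqlalchemy import create_engine

# Untested sketch, assuming the pymssql DBAPI under the mssql dialect.
# connect_args is handed straight through to the DBAPI's connect()
# call; pymssql's `timeout` is the per-query timeout in seconds.
engine = create_engine(
    'mssql://<database>',
    echo=False,
    connect_args={'timeout': 300},  # allow queries up to 5 minutes
    pool_timeout=60,  # unrelated: wait for a pooled connection
)

If that is right, this engine would just replace the create_engine()
line in createSession() above, and the mapper/session setup would stay
the same. Does that sound like the correct approach?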
Thanks,
Chris