Foreign key reflection error?


thatsanice...@mac.com

unread,
Nov 3, 2011, 3:12:48 PM
to sqlal...@googlegroups.com
Hi,

I'm getting the following error with SQLAlchemy 0.7.3:

sqlalchemy.exc.ArgumentError: Could not determine join condition between parent/child tables on relationship Survey.bossSpectrumHeaders. Specify a 'primaryjoin' expression. If 'secondary' is present, 'secondaryjoin' is needed as well.

I am relating two tables named "Survey" and "BOSSSpectrumHeader". The former is in a schema called "platedb" and the latter in another schema called "boss". The latter table has numerous foreign keys referencing tables in the platedb schema, and each one fails in turn if I comment out the previous one. The search path is "platedb, shared, boss, photofield, twomass, public". The Python code is:

class BOSSSpectrumHeader(Base):
    __tablename__ = 'spectrum_header'
    __table_args__ = {'autoload' : True, 'schema' : 'boss', 'extend_existing' : True}

class Survey(Base):
    __tablename__ = 'survey'
    __table_args__ = {'autoload' : True, 'schema' : 'platedb'}

Survey.bossSpectrumHeaders = relationship(BOSSSpectrumHeader, backref="survey")

Finally, the SQL definitions of the tables are pasted below. Is there something I am missing? Why is the foreign key not being retrieved via reflection? Virtually everything else (including cross-schema relationships) is working fine.

Thanks for any help!

Cheers,
Demitri

---

CREATE TABLE boss.spectrum_header
(
    pk integer NOT NULL DEFAULT nextval('spectrum_header_pk_seq'::regclass),
    ...
    survey_pk integer NOT NULL,
    CONSTRAINT boss_spectrum_header_pk PRIMARY KEY (pk),
    CONSTRAINT survey_fk FOREIGN KEY (survey_pk)
        REFERENCES survey (pk) MATCH SIMPLE
        ON UPDATE CASCADE ON DELETE RESTRICT
)

CREATE TABLE survey
(
    pk serial NOT NULL,
    label text,
    CONSTRAINT survey_pk PRIMARY KEY (pk),
    CONSTRAINT survey_label_uniq UNIQUE (label)
)

Michael Bayer

unread,
Nov 3, 2011, 6:25:08 PM
to sqlal...@googlegroups.com

On Nov 3, 2011, at 3:12 PM, thatsanice...@mac.com wrote:

> Hi,
>
> I'm getting the following error with SQLAlchemy 0.7.3:
>
> sqlalchemy.exc.ArgumentError: Could not determine join condition between parent/child tables on relationship Survey.bossSpectrumHeaders. Specify a 'primaryjoin' expression. If 'secondary' is present, 'secondaryjoin' is needed as well.
>
> I am relating two tables named "Survey" and "BOSSSpectrumHeader". The former is in a schema called "platedb" and the latter in another schema called "boss". The latter table has numerous foreign keys referencing tables in the platedb schema, and each one fails in turn if I comment out the previous one. The search path is "platedb, shared, boss, photofield, twomass, public". The Python code is:

Two things I notice: first, why are you using "extend_existing"? That suggests there's more going on here. Also, are you certain the foreign key from boss.spectrum_header points to the "platedb.survey" table and not another "survey" table elsewhere? Yet another thing: when you reflect the foreign key from spectrum_header, it may not be coming back with "platedb" as the schema, since you appear to be referring to the remote table via the implicit search path. SQLAlchemy may not be matching that up the way you expect. There was an issue regarding this, another user relying on a long search path, which was fixed in 0.7.3. Do you get different results using 0.7.2? Can you try defining your foreign key constraints in PG consistently with how you're using schemas in your model? (i.e. either point the FK to platedb.survey in PG, or remove the "platedb" schema from Survey.)


thatsanice...@mac.com

unread,
Nov 4, 2011, 5:16:59 PM
to sqlal...@googlegroups.com
Hello,

Thanks, Mike, for the comments. Before I answer the questions you asked, I want to note that I found a workaround without making any changes to the database: I just reversed the tables in the definition. At first I was using:

Survey.bossSpectrumHeaders = relationship(BOSSSpectrumHeader, backref="survey")

Changing this to the following worked:

BOSSSpectrumHeader.survey = relationship(Survey, backref="bossSpectrumHeaders")

I'm not really sure how to interpret that.


On Nov 3, 2011, at 6:25 PM, Michael Bayer wrote:

> Two things I notice: first, why are you using "extend_existing"? That suggests there's more going on here.

I'm not wholly sure why that is in place; my colleague wrote that part.

> Also, are you certain the foreign key from boss.spectrum_header points to the "platedb.survey" table and not another "survey" table elsewhere?

Yes, that table name is unique across all schemas.

> Yet another thing: when you reflect the foreign key from spectrum_header, it may not be coming back with "platedb" as the schema, since you appear to be referring to the remote table via the implicit search path. SQLAlchemy may not be matching that up the way you expect. There was an issue regarding this, another user relying on a long search path, which was fixed in 0.7.3.

I do have a long search path.

> Do you get different results using 0.7.2?

No, the first thing I did when I got this error was upgrade to 0.7.3.

> Can you try defining your foreign key constraints in PG consistently with how you're using schemas in your model? (i.e. either point the FK to platedb.survey in PG, or remove the "platedb" schema from Survey.)

I was reading the descriptions from PGAdmin3 - apparently it drops the schema in the display when the table is on the search path. There's nothing more I can do to tie the table to its schema, as it were.

I hope that knowing that reversing the order works helps to point to the problem...?

Cheers,
Demitri

Michael Bayer

unread,
Nov 4, 2011, 5:25:24 PM
to Demitri Muna, sqlal...@googlegroups.com


I think the important thing here is that the table definition on the Python side needs to represent the table in the same way that the foreign key def will represent it from PG.

It's based on this fact:

> I was reading the descriptions from PGAdmin3 - apparently it drops the schema in the display when the table is on the search path. There's nothing more I can do to tie the table to its schema, as it were.


So SQLA does this:

1. reflect boss.spectrum_header
2. see that boss.spectrum_header has a foreign key - to a table called "survey". No schema is given for this FK def. So SQLAlchemy creates a new table called "survey" in the metadata collection. The schema is "None".
3. SQLAlchemy then reflects "survey" and all of its columns. PG's liberal search path allows this to work without issue.
4. The application then goes to reflect what is, from SQLAlchemy's perspective, an entirely different table called platedb.survey, and populates that table too.
5. The MetaData now has three tables: boss.spectrum_header, platedb.survey, survey. Errors ensue since boss.spectrum_header points to "survey" and not "platedb.survey".
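
A minimal sketch of that sequence (the engine URL here is hypothetical; this is the same reflection the declarative classes above trigger):

from sqlalchemy import create_engine, MetaData, Table

engine = create_engine("postgresql://user:pass@localhost/mydb")  # hypothetical URL
metadata = MetaData()

# Steps 1-2: reflecting the child table also creates a schema-less
# "survey" Table, because the reflected FK carries no schema name.
spectrum_header = Table("spectrum_header", metadata, schema="boss",
                        autoload=True, autoload_with=engine)

# Step 4: to SQLAlchemy this is a different table from the schema-less
# "survey" created above.
survey = Table("survey", metadata, schema="platedb",
               autoload=True, autoload_with=engine)

# Step 5: three tables, with the FK pointing at the schema-less one.
print(sorted(metadata.tables))
# ['boss.spectrum_header', 'platedb.survey', 'survey']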

Solutions:

1. set the search path to be only "public" for the application's connection. Use explicit schema names for all constructs outside of "public". A connection event that emits "SET search_path TO public" on each new connection will achieve this without affecting the database outside of the app.

2. Leave the liberal search path in place. Remove the usage of "schema" within the SQLAlchemy application and let PG's liberal search path find things.
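
A minimal sketch of option 1 (the listener name is illustrative; the commit keeps a later ROLLBACK from resetting the path, a detail that comes up again at the end of this thread):

from sqlalchemy.event import listen
from sqlalchemy.pool import Pool

def set_public_search_path(dbapi_con, connection_record):
    # Runs once for each new DBAPI connection in this application only.
    cursor = dbapi_con.cursor()
    cursor.execute("SET search_path TO public")
    cursor.close()
    dbapi_con.commit()

listen(Pool, 'connect', set_public_search_path)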


Demitri Muna

unread,
May 17, 2012, 12:02:39 PM
to sqlal...@googlegroups.com
Hi,

I'd like to revive this thread from a little while back as I'm still having problems. Thanks again to Michael for the help.

In short, I'm having problems with SQLAlchemy determining the foreign key relationship between two tables in two different schemas. For example, this

VisitSpectrum.catalogObject = relation(CatalogObject, backref="visitSpectra")

gives me the error:

ArgumentError: Could not determine join condition between parent/child tables on relationship VisitSpectrum.fitsHeaderValues. Specify a 'primaryjoin' expression. If 'secondary' is present, 'secondaryjoin' is needed as well.

while this

CatalogObject.visitSpectra = relation(VisitSpectrum, backref="catalogObject")

works. I am using SQLAlchemy 0.7.7 and Python 2.7.

I would have thought that these two lines should be functionally identical. SA is able to determine the relationship in one direction, so the information is certainly there. The problem has to do, as Michael noted, with the schema search path, but even so I don't see where the ambiguity lies.

I've removed the "public" schema from my database, and in my Python model classes I always explicitly set the "schema" value in __table_args__ to avoid any ambiguity. The tables are each in a different schema:

apogeedb.VisitSpectrum
catalogdb.CatalogObject

When the search path is this:

catalogdb, apogeedb

this works:

VisitSpectrum.catalogObject = relation(CatalogObject, backref="visitSpectra")

When the search path is

apogeedb, catalogdb

this works:

CatalogObject.visitSpectra = relation(VisitSpectrum, backref="catalogObject")

Flipping either of these gives the error above. No other schema (there are two more) contains tables with these names. This still strikes me as a bug somewhere. Any thoughts?

Michael's second suggestion (below) of not explicitly specifying the schema won't work in my case as I have a few schemas that do have overlapping names (hence, the schema separation).

Cheers,
Demitri


Demitri Muna

unread,
May 17, 2012, 12:14:50 PM
to sqlal...@googlegroups.com
Hi,

As a quick follow-up, the inability to cross schemas in one direction means that join tables won't work regardless of search path order. For example, given

apogeedb.Calibration
platedb.Exposure

neither of these will work since, I'm guessing, the join must be made in both directions:

Calibration.exposures = relation(Exposure, backref="calibrations")
Exposure.calibrations = relation(Calibration, backref="exposures")

Again, the schema arg is explicitly set. This does work:

Calibration.exposures = relation(Exposure, secondary=ExposureToCalibration.__table__, backref="calibrations")


Cheers,
Demitri

Michael Bayer

unread,
May 17, 2012, 12:21:58 PM
to sqlal...@googlegroups.com

"secondary" is never implied in relationship(), you must always specify this table explicitly.

Also I'd advise caution when mapping to a "secondary" table - SQLAlchemy may perform persistence operations twice against it, unless the relationship is set as viewonly=True.
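
Applied to the example from the previous message, that caution might look like this sketch (class names as in Demitri's mail; viewonly=True on both directions so the association rows are written only through the mapped ExposureToCalibration class):

from sqlalchemy.orm import relationship, backref

Calibration.exposures = relationship(Exposure,
                                     secondary=ExposureToCalibration.__table__,
                                     backref=backref("calibrations", viewonly=True),
                                     viewonly=True)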

Michael Bayer

unread,
May 17, 2012, 12:28:07 PM
to sqlal...@googlegroups.com

On May 17, 2012, at 12:02 PM, Demitri Muna wrote:

> Hi,
>
> I'd like to revive this thread from a little while back as I'm still having problems. Thanks again to Michael for the help.
>
> In short, I'm having problems with SQLAlchemy determining the foreign key relationship between two tables in two different schemas. For example, this
>
> VisitSpectrum.catalogObject = relation(CatalogObject, backref="visitSpectra")
>
> gives me the error:
>
> ArgumentError: Could not determine join condition between parent/child tables on relationship VisitSpectrum.fitsHeaderValues. Specify a 'primaryjoin' expression. If 'secondary' is present, 'secondaryjoin' is needed as well.
>
> while this
>
> CatalogObject.visitSpectra = relation(VisitSpectrum, backref="catalogObject")
>
> works. I am using SQLAlchemy 0.7.7 and Python 2.7.

They are equivalent, so I'd consider whether, in the second case, you are actually testing the mapping at all. The error is only raised once mappers go through the deferred "configure" step, which is the first time you use the mappings.
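
That deferred step can be forced up front with a one-liner:

from sqlalchemy.orm import configure_mappers

# Raises the ArgumentError immediately if a join condition can't be
# determined, rather than on first use of the mappings.
# (compile_mappers() is the older spelling of the same call.)
configure_mappers()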

A bug report for SQLA would look like this:

t1 = Table("table1", metadata, schema="schema1", autoload=True, autoload_with=some_engine)
t2 = Table("table2", metadata, schema="schema2", autoload=True, autoload_with=some_engine)

assert t1.c.some_column.references(t2.c.some_other_column)

that is, "some_column" is expected to have a ForeignKey to "some_other_column" based on the reflection.

Please provide that including the raw "CREATE TABLE" statements, removing any extraneous columns and data not needed to reproduce the issue.

Demitri Muna

unread,
May 17, 2012, 6:18:38 PM
to sqlal...@googlegroups.com
Hi,

On May 17, 2012, at 12:21 PM, Michael Bayer wrote:

> "secondary" is never implied in relationship(), you must always specify this table explicitly.

Sorry, yes, it's been a while since I've written a new model classes file.

> they are equivalent and I'd consider here that perhaps in the second case you aren't actually testing the mapping ? The error is only raised once mappers go through the deferred "configure" step, which is the first time you use the mappings.

I always call "compile_mappers()" after defining all my tables and relationships to test them - let me know if I'm misunderstanding its use.

> A bug report for SQLA would look like this:
>
> t1 = Table("table1", metadata, schema="schema1", autoload=True, autoload_with=some_engine)
> t2 = Table("table2", metadata, schema="schema2", autoload=True, autoload_with=some_engine)
>
> assert t1.c.some_column.references(t2.c.some_other_column)
>
> that is, "some_column" is expected to have a ForeignKey to "some_other_column" based on the reflection.
>
> Please provide that including the raw "CREATE TABLE" statements, removing any extraneous columns and data not needed to reproduce the issue.

See attached. I've created a very simple toy model to demonstrate this. The full database I created is provided as an SQL dump. I'm using PostgreSQL 9.0.5. No data is required to make the assertions I made fail. The code provided is in three Python files - one that defines the database connection, another that defines the model classes (both designed for use by many scripts), and a third script that is the one that should be executed. The database user is "schema_test". The code of interest appears in lines 37-57 of ModelClasses.py. Finally, you'll have to update the database connection information for your setup at the top of "multiple_schema_test.py".

> Also I'd advise caution when mapping to a "secondary" table - SQLAlchemy may perform persistence operations twice against it, unless the relationship is set as viewonly=True.

I'm not completely sure what you mean here. I assume I need to create a Python class for join tables, as for any other table, to use for setting up the relations. I don't otherwise use the class in any way. Please let me know if I'm doing something wrong in my code example; I've never had problems before.

And I just noticed in the docs that "relation" is now called "relationship"...

Cheers,
Demitri



SQLAlchemy_schema_test.zip

Michael Bayer

unread,
May 17, 2012, 6:33:00 PM
to sqlal...@googlegroups.com

On May 17, 2012, at 6:18 PM, Demitri Muna wrote:

>
>> A bug report for SQLA would look like this:
>>
>> t1 = Table("table1", metadata, schema="schema1", autoload=True, autoload_with=some_engine)
>> t2 = Table("table2", metadata, schema="schema2", autoload=True, autoload_with=some_engine)
>>
>> assert t1.c.some_column.references(t2.c.some_other_column)
>>
>> that is, "some_column" is expected to have a ForeignKey to "some_other_column" based on the reflection.
>>
>> Please provide that including the raw "CREATE TABLE" statements, removing any extraneous columns and data not needed to reproduce the issue.
>
> See attached. I've created a very simple toy model to demonstrate this. The full database I created is provided as an SQL dump. I'm using PostgreSQL 9.0.5. No data is required to make the assertions I made fail. The code provided is in three Python files - one that defines the database connection, another that defines the model classes (both designed for use by many scripts), and a third script that is the one that should be executed. The database user is "schema_test". The code of interest appears in lines 37-57 of ModelClasses.py. Finally, you'll have to update the database connection information for your setup at the top of "multiple_schema_test.py".

OK this is way too much verbiage. If the error is as you describe, a bug involving reflection of two tables across two schemas, I'd need a test which illustrates this alone, no ORM or full database dumps, with a simple failure to identify a foreign key constraint between them. I've attached a sample of exactly what I'd like to see.

If the issue is not as simple as this, then we will have to dig into multiple tables, mappings, etc. but you should be able to reproduce the specific case here with just two tables.

test.py

Demitri Muna

unread,
May 17, 2012, 7:05:50 PM
to sqlal...@googlegroups.com

On May 17, 2012, at 6:33 PM, Michael Bayer wrote:

> OK this is way too much verbiage. If the error is as you describe, a bug involving reflection of two tables across two schemas, I'd need a test which illustrates this alone, no ORM or full database dumps, with a simple failure to identify a foreign key constraint between them. I've attached a sample of exactly what I'd like to see.
>
> If the issue is not as simple as this, then we will have to dig into multiple tables, mappings, etc. but you should be able to reproduce the specific case here with just two tables.

I've refactored the example to the simplest case. We can see if the fix addresses many-to-many relationships as well.

Demitri

schema_test.py

Michael Bayer

unread,
May 18, 2012, 9:19:06 AM
to sqlal...@googlegroups.com
Very nice! Though I'm sorry I put you through this, as I just remembered a key caveat with Postgresql - in fact, it's even documented here:

http://docs.sqlalchemy.org/en/rel_0_7/dialects/postgresql.html#remote-cross-schema-table-introspection


When using cross-schema reflection, you have the option of either using only "public" in your schema search path, or *not* schema-qualifying the tables. This is because when you have the alternate schemas in your search path, Postgresql does not tell SQLAlchemy about the schema name when it returns foreign key information - it returns just the table name and column name, but not the schema. Therefore, when your Table objects have schema names in them, SQLAlchemy can't match them up, and instead makes another Table - one with no schema name, which you aren't seeing - the target of each cross-schema foreign key.

That's likely what's happening here, evidenced by the fact that your test case works for me (I leave the search path set at its default of "public").




Demitri Muna

unread,
May 18, 2012, 3:38:12 PM
to sqlal...@googlegroups.com
Hi Michael,

On May 18, 2012, at 9:19 AM, Michael Bayer wrote:

> When using cross-schema reflection, you have the option of either using only "public" in your schema search path, or *not* schema-qualifying the tables. This is because when you have the alternate schemas in your search path, Postgresql does not tell SQLAlchemy about the schema name when it returns foreign key information - it returns just the table name and column name, but not the schema. Therefore, when your Table objects have schema names in them, SQLAlchemy can't match them up, and instead makes another Table - one with no schema name, which you aren't seeing - the target of each cross-schema foreign key.

Thanks for the explanation. This unfortunately makes things difficult for me, as I have at least two schemas that contain nearly the same tables, the schema model providing a clean separation of the data. Removing those schemas from search_path introduces ambiguities.

Where would be the best place to try to find a solution? Is it psycopg2 that's not returning the schema information? The information is of course in the database, so it sounds like maybe the SQL query to get the foreign keys could be updated to explicitly include the schema? I imagine it's not at the database level since, again, the information is there.

Also, given the Python classes in my example code, how do I print out the foreign keys from a table object?

Cheers,
Demitri

Michael Bayer

unread,
May 18, 2012, 5:10:12 PM5/18/12
to sqlal...@googlegroups.com

On May 18, 2012, at 3:38 PM, Demitri Muna wrote:

> Hi Michael,
>
> On May 18, 2012, at 9:19 AM, Michael Bayer wrote:
>
>> When using cross-schema reflection, you have the option of either using only "public" in your schema search path, or *not* schema-qualifying the tables. This is because when you have the alternate schemas in your search path, Postgresql does not tell SQLAlchemy about the schema name when it returns foreign key information - it returns just the table name and column name, but not the schema. Therefore, when your Table objects have schema names in them, SQLAlchemy can't match them up, and instead makes another Table - one with no schema name, which you aren't seeing - the target of each cross-schema foreign key.
>
> Thanks for the explanation. This unfortunately makes things difficult for me, as I have at least two schemas that contain nearly the same tables, the schema model providing a clean separation of the data. Removing those schemas from search_path introduces ambiguities.

If you've created all your Table objects with an explicit "schema" definition, you shouldn't need anything in your search path - technically not even "public". If the issue is that your database connection defaults to a certain search path, change the search path just within your SQLAlchemy application to not include defaults. A "connect" event listener can achieve this.

> Where would be the best place to try to find a solution? Is it psycopg2 that's not returning the schema information?

It's Postgresql's pg_catalog.pg_get_constraintdef() function.
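
You can see what it returns with a quick sketch (raw psycopg2; the DSN is hypothetical, and the constraint name comes from the CREATE TABLE at the top of this thread):

import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # hypothetical DSN
cur = conn.cursor()
cur.execute("SELECT pg_catalog.pg_get_constraintdef(oid) "
            "FROM pg_catalog.pg_constraint WHERE conname = %s",
            ('survey_fk',))
print(cur.fetchone()[0])
# With the referenced schema on the search path, this prints something like
#   FOREIGN KEY (survey_pk) REFERENCES survey(pk) ...
# i.e. no schema qualifier on "survey" - which is all SQLAlchemy gets.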


> The information is of course in the database, so it sounds like maybe the SQL query to get the foreign keys could be updated to explicitly include the schema?

Consider that an application which *does* want to use a custom search path, and does *not* specify "schema" inside of each Table, would want SQLAlchemy to *disregard* the schema information from the foreign key. The behavior of PG here, omitting the schema information if that schema is in fact already in the search path, is quite natural. The choice here is simple - either your tablenames are unique across schemas, in which case you can use a composite search path to place them all into one namespace and you don't use "schema" inside of each Table, or they are not, in which case you should be schema-qualifying things and not have those schemas in your search path. The two approaches can't reasonably be mixed without complicating things.
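
Concretely, the two consistent configurations look like this sketch (engine URL and table hypothetical):

from sqlalchemy import create_engine, MetaData, Table

engine = create_engine("postgresql://user:pass@localhost/mydb")  # hypothetical
metadata = MetaData()

# A: table names unique across schemas; use the composite search path
#    as one namespace and no "schema" argument anywhere:
survey = Table("survey", metadata, autoload=True, autoload_with=engine)

# B: (instead) schema-qualify everything and keep those schemas out of
#    the search path:
# survey = Table("survey", metadata, schema="platedb",
#                autoload=True, autoload_with=engine)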


> Also, given the Python classes in my example code, how do I print out the foreign keys from a table object?

you'd see this in the table.constraints collection.
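
For example (a sketch; "table" stands in for any reflected Table object):

from sqlalchemy.schema import ForeignKeyConstraint

for constraint in table.constraints:
    if isinstance(constraint, ForeignKeyConstraint):
        for fk in constraint.elements:
            print("%s -> %s" % (fk.parent, fk.target_fullname))

# or, equivalently, the flat set of ForeignKey objects:
for fk in table.foreign_keys:
    print("%s -> %s" % (fk.parent, fk.target_fullname))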

Demitri Muna

unread,
May 21, 2012, 8:05:38 PM
to sqlal...@googlegroups.com
Hi Michael,

As a very quick test to see if I could make things work, I created a new postgres user and set that user's search_path to just '"$user"' (since it can't be empty, but as there is no schema with the user's name, that's effectively what it is). My toy example worked.

On May 18, 2012, at 5:10 PM, Michael Bayer wrote:

> If you've created all your Table objects with an explicit "schema" definition, you shouldn't need anything in your search path - technically not even "public". If the issue is that your database connection defaults to a certain search path, change the search path just within your SQLAlchemy application to not include defaults. A "connect" event listener can achieve this.

I'm trying to avoid creating a separate user just for scripts that use SA and it sounds like your suggestion above should address this, but I'm a little stuck on how that will work. This is what I have:

def my_on_connect(dbapi_con, connection_record):
    print "New DBAPI connection:", dbapi_con
    # ??? execute query 'SET search_path TO "$user"'

I call

listen(Pool, 'connect', my_on_connect)

very early in my code, and the callback does seem to be called. Can you help me fill in the missing line? If I understand things correctly, this hook will set the search path for the lifetime of any connection made to the database, regardless of what the user has set otherwise.

I also tried something like this:

http://www.mail-archive.com/sqlal...@googlegroups.com/msg09859.html

but that didn't work. I first make the database connection and then later define my table classes through autoload, and it seemed to have a problem with that order (I think I got a "table not found" error upon metadata.reflect().)

Thanks again for all your help - I'm finally seeing the light at the end of this tunnel!

Cheers,
Demitri

Michael Bayer

unread,
May 21, 2012, 9:37:20 PM
to sqlal...@googlegroups.com

On May 21, 2012, at 8:05 PM, Demitri Muna wrote:

> Hi Michael,
>
> As a very quick test to see if I could make things work, I created a new postgres user and set that user's search_path to just '"$user"' (since it can't be empty, but as there is no schema with the user's name, that's effectively what it is). My toy example worked.
>
> On May 18, 2012, at 5:10 PM, Michael Bayer wrote:
>
>> If you've created all your Table objects with an explicit "schema" definition, you shouldn't need anything in your search path - technically not even "public". If the issue is that your database connection defaults to a certain search path, change the search path just within your SQLAlchemy application to not include defaults. A "connect" event listener can achieve this.
>
> I'm trying to avoid creating a separate user just for scripts that use SA and it sounds like your suggestion above should address this, but I'm a little stuck on how that will work. This is what I have:
>
> def my_on_connect(dbapi_con, connection_record):
>     print "New DBAPI connection:", dbapi_con
>     # ??? execute query 'SET search_path TO "$user"'

should be able to execute with a cursor:

cursor = dbapi_con.cursor()
cursor.execute('SET search_path TO "$user"')

>
> http://www.mail-archive.com/sqlal...@googlegroups.com/msg09859.html
>
> but that didn't work. I first make the database connection and then later define my table classes through autoload, and it seemed to have a problem with that order (I think I got a "table not found" error upon metadata.reflect().)

OK, "table not found" would be affected by the search path and whether or not you have "schema" on your table.

Demitri Muna

unread,
May 22, 2012, 4:31:59 PM
to sqlal...@googlegroups.com
Hi,

Progress! But still not working 100%. I have three assert statements: one that tests a "to one" relationship across schemas, one that tests a "to one" relationship within the same schema, and two that test a "many to many" relationship across schemas (one assert for each direction).

When I set the search_path to what I'd like it to be (e.g. "things, people") and use the "connect" event listener as detailed earlier in this thread, the first two asserts (which broke before) now work. The first of the "many to many" asserts works, but the reverse does not. However, the relationship statements (in either direction) *do* seem to work. I can't explain that.

Going for it, I put the event listener into my large project and crossed my fingers, but it doesn't work. I'm still running into these errors across schemas:

ArgumentError: Could not determine join condition between parent/child tables on relationship VisitSpectrum.observation. Specify a 'primaryjoin' expression. If 'secondary' is present, 'secondaryjoin' is needed as well.

It works when I set the search path to "$user", i.e. pointing at no existing schema, and different things break depending on the search path order. As far as I can tell the listener is setting the search path properly, but I have no idea how persistent it is, or whether another connection is somehow being made where it isn't set. So I'm getting frustrated.

Attached is my toy model where the relationship assertion breaks.

Demitri


schema_test.py

Demitri Muna

unread,
May 22, 2012, 4:34:57 PM
to sqlal...@googlegroups.com

Where in my last email "three statements" = "five statements"...

Sigh.

Michael Bayer

unread,
May 22, 2012, 5:03:36 PM
to sqlal...@googlegroups.com
Did a quick experiment, and it appears that a ROLLBACK resets the search path. Try calling dbapi_con.commit() in your event handler; that appears to cause the search path to remain persistent for the life of that connection.

Demitri Muna

unread,
May 22, 2012, 5:30:44 PM
to sqlal...@googlegroups.com
Hi,

On May 22, 2012, at 5:03 PM, Michael Bayer wrote:

> Did a quick experiment, and it appears that a ROLLBACK resets the search path. Try calling dbapi_con.commit() in your event handler, that appears to cause the search path to remain persistent for the life of that connection.

Success!! That was it. Thank you *very much* for your help, Michael.

For posterity, the sum of my fix is pasted below. I placed this code before the database connection was made.

Cheers,
Demitri

---

from sqlalchemy.event import listen
from sqlalchemy.pool import Pool

def my_on_connect(dbapi_con, connection_record):
    '''
    Callback function to be called at every connection.
    dbapi_con - type: psycopg2._psycopg.connection
    connection_record - type: sqlalchemy.pool._ConnectionRecord
    '''
    #print "New DBAPI connection:", dbapi_con
    cursor = dbapi_con.cursor()
    # No schema exists with the same name as the user, so this is
    # effectively an empty search path (postgres does not allow it
    # to actually be empty).
    cursor.execute('SET search_path TO "$user"')
    # Commit so that a subsequent ROLLBACK does not reset the search path.
    dbapi_con.commit()

listen(Pool, 'connect', my_on_connect)