Autogenerate with Multi-Tenant


Brian Hill

Mar 19, 2020, 5:30:04 PM
to sqlalchemy-alembic
Are there known issues with using autogenerate with multi-tenant (schema_translate_map)?

My metadata doesn't have a schema, and my schema_translate_map is {None: 'my_schema'}.

This works for running migrations, but when I use autogenerate the generated revision file contains the full schema rather than just the diff.

It's detecting the alembic version table in the tenant schema, and if I run autogenerate a second time with the new revision file in place, it fails saying "Target database is not up to date".
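
Roughly, the migration part of my env.py looks like this (simplified sketch; DB_SCHEMA stands in for the tenant schema name):

    with connectable.connect() as connection:
        # route the schema-less metadata into the tenant schema
        connection = connection.execution_options(
            schema_translate_map={None: DB_SCHEMA}
        )

        context.configure(
            connection=connection,
            target_metadata=metadata,
            version_table_schema=DB_SCHEMA,
        )

        with context.begin_transaction():
            context.run_migrations()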

Thanks,

Brian

Mike Bayer

Mar 19, 2020, 5:37:38 PM
to sqlalchem...@googlegroups.com


On Thu, Mar 19, 2020, at 5:30 PM, Brian Hill wrote:
Are there known issues with using autogenerate with multi-tenant (schema_translate_map)?

it's complicated and not one-size-fits-all, if you consider that to be an issue



My metadata doesn't have a schema, and my schema_translate_map is {None: 'my_schema'}.

This works for running migrations, but when I use autogenerate the generated revision file contains the full schema rather than just the diff.

It's detecting the alembic version table in the tenant schema, and if I run autogenerate a second time with the new revision file in place, it fails saying "Target database is not up to date".

are you setting include_schemas=True in your environment? I'd probably not do this. Can't help much more without a complete and concise working example of your env.py; please remove all extraneous details.



Thanks,

Brian



Brian Hill

Mar 19, 2020, 7:09:38 PM
to sqlalchemy-alembic
Here's my env.py (attached). Thanks for the help.
Brian



Mike Bayer

Mar 19, 2020, 7:19:08 PM
to sqlalchem...@googlegroups.com
so let me get this straight:

1. you have many schemas

2. you want to run autogenerate only once

3. you want your migrations generated with None for the schema

4. *HOWEVER*, when you run autogenerate, you are picking *one* (random? arbitrary?) schema to use as the target, is that right?

As it stands, your autogenerate is not going to look in any schema except "public", assuming default PG search path, because you did not set "include_schemas=True".

If you truly want autogenerate to look at one schema and only that schema, and generate everything as non-schema qualified, then I would simply set the default schema in PG to that schema name using search_path:

connection.execute("SET search_path TO my_schema")

that way everything Alembic reflects will be from "my_schema", it will see the schema as blank, and it should generate as such. You might need to disable the schema translate map when autogenerate runs, but try it without doing that first.
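
If that turns out to be needed, one way to make the translate map conditional is to key it off an -x argument (a sketch; "autogen" is just an arbitrary flag name, not an Alembic built-in):

    # invoked as:  alembic -x autogen=true revision --autogenerate
    x_args = context.get_x_argument(as_dictionary=True)
    running_autogen = x_args.get("autogen", "false").lower() == "true"

    if not running_autogen:
        # only translate schemas for normal upgrade runs
        connection = connection.execution_options(
            schema_translate_map={None: DB_SCHEMA}
        )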

Brian Hill

Mar 19, 2020, 7:41:54 PM
to sqlalchemy-alembic


On Thursday, March 19, 2020 at 7:19:08 PM UTC-4, Mike Bayer wrote:
so let me get this straight:

1. you have many schemas


yes
 
2. you want to run autogenerate only once

yes
 
3. you want your migrations generated with None for the schema

yes
 
4. *HOWEVER*, when you run autogenerate, you are picking *one* (random?  arbitrary?) schema to use as the target, is that right?

Yes, just one, but it won't be arbitrary; it will be a development schema generated from the latest version (prior to the change we want to autogenerate).
 

As it stands, your autogenerate is not going to look in any schema except "public", assuming default PG search path, because you did not set "include_schemas=True".


I had it set to True and it behaved the same. I just took it out based on your previous suggestion, but it still behaves the same.

If you truly want autogenerate to look at one schema and only that schema, and generate everything as non-schema qualified, then I would simply set the default schema in PG to that schema name using search_path:

connection.execute("SET search_path TO my_schema")

that way everything Alembic reflects will be from "my_schema", it will see the schema as blank, and it should generate as such. You might need to disable the schema translate map when autogenerate runs, but try it without doing that first.


It's still generating the entire schema in the revision file.

Here's the updated section; it behaves the same with or without the schema translate map:

with connectable.connect() as connection:

    # create DB_SCHEMA if it doesn't exist
    connection.execute(f'create schema if not exists {DB_SCHEMA}')

    # map metadata schema (None) to DB_SCHEMA
    # connection = connection.execution_options(schema_translate_map={None: DB_SCHEMA})

    connection.execute(f"set search_path to {DB_SCHEMA}")

    # set alembic version table location in DB_SCHEMA
    context.configure(
        connection=connection,
        include_schemas=True,
        target_metadata=metadata,
        version_table_schema=DB_SCHEMA,
    )

    with context.begin_transaction():
        context.run_migrations()

 



Mike Bayer

Mar 19, 2020, 8:20:06 PM
to sqlalchem...@googlegroups.com
no, don't use "include_schemas". You want to run Alembic in a mode where it has no idea there are other schemas to use. I think version_table_schema is OK here, but don't use include_schemas. Also, turn on INFO or DEBUG logging for the sqlalchemy logger to see exactly which tables it is reflecting.
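
For example, a minimal way to turn that on from env.py, assuming logging handlers are already configured via fileConfig (a sketch, not the only way):

    import logging

    # log the reflection queries SQLAlchemy runs during autogenerate
    logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)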




Brian Hill

Mar 19, 2020, 8:56:36 PM
to sqlalchemy-alembic
I think I just figured out that what I'm trying to do is unreasonable and unnecessary. Alembic shouldn't have to know how to reverse-map the schemas, and I don't need to do it that way. I can create my dev schema in public and autogenerate against that, which keeps the schema the same in the metadata and the DB.

thanks,

brian

Brian Hill

Mar 19, 2020, 9:15:06 PM
to sqlalchemy-alembic
Same behavior: the entire schema is generated. I'll look at using INFO and DEBUG logging tomorrow to get a better idea of what's going on, but using public as my dev schema to autogenerate against works. I just have to exclude the alembic tables.
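
One way to do that exclusion is an include_object hook passed to context.configure (a sketch, assuming the default version table name "alembic_version"):

    def include_object(object, name, type_, reflected, compare_to):
        # keep the alembic version table out of the autogenerate comparison
        if type_ == "table" and name == "alembic_version":
            return False
        return True

    context.configure(
        connection=connection,
        target_metadata=metadata,
        include_object=include_object,
    )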

Again, thanks for your help in understanding what I was trying to do.

brian
 

Mike Bayer

Mar 20, 2020, 9:49:52 AM
to sqlalchem...@googlegroups.com
I just realized that you really don't need to use schema_translate_map at all here. If you use a completely straight env.py and simply set search_path=SCHEMA, you should be able to run Alembic in an entirely "schemaless" mode; don't include schemas anywhere, nor would you need to set it for the alembic version table. PostgreSQL will also CREATE/ALTER in that schema as well. As long as your environment runs completely in just one schema at a time, this can be set up entirely at the connection level.

Brian Hill

Mar 20, 2020, 10:48:34 AM
to sqlalchemy-alembic
This is what I tried, but it generated the whole schema:
with connectable.connect() as connection:

    connection.execute(f'create schema if not exists {DB_SCHEMA}')
    connection.execute(f'set search_path to {DB_SCHEMA}')

    context.configure(
        connection=connection,
        target_metadata=metadata,
    )

    with context.begin_transaction():
        context.run_migrations()

It works when my schema is 'public', i.e. when it matches the metadata. My solution will be to set my schema to 'public' and use this simple env.py.

thanks,

brian

Mike Bayer

Mar 20, 2020, 12:11:21 PM
to sqlalchem...@googlegroups.com
OK, one more addition to the recipe; please do this:

    DB_SCHEMA = "my_foo_schema"

    with connectable.connect() as connection:
        connection.execute(f'create schema if not exists {DB_SCHEMA}')
        connection.execute(f'set search_path to {DB_SCHEMA}')

        connection.dialect.default_schema_name = DB_SCHEMA
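        # (then context.configure(connection=connection, target_metadata=metadata)
        # and the begin_transaction()/run_migrations() block as in your previous env.py)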


and that will work completely.

Alternatively, you can use an on-connect event on the engine to do the same thing.
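
That would look something like the following (a sketch based on the documented SQLAlchemy search_path-on-connect recipe; it assumes connectable is the Engine and the listener is registered before any connections are made):

    from sqlalchemy import event

    @event.listens_for(connectable, "connect", insert=True)
    def set_search_path(dbapi_connection, connection_record):
        # set the search path on every new DBAPI connection
        existing_autocommit = dbapi_connection.autocommit
        dbapi_connection.autocommit = True
        cursor = dbapi_connection.cursor()
        cursor.execute(f"SET search_path TO {DB_SCHEMA}")
        cursor.close()
        dbapi_connection.autocommit = existing_autocommit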

Brian Hill

Mar 20, 2020, 2:24:57 PM
to sqlalchemy-alembic
That works!

Thank you.

Brian