OK, I had wondered if you would actually want that. But that means if schemas have been altered manually like that, the next migration is going to show those changes being reversed; make sure you're OK with that.
What we do now is start using techniques from the "multidb" template
that's included with Alembic.
Assume you added this to alembic.ini:
schema_names = test_schema, test_schema_2
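For reference, config.get_main_option() reads that value from the [alembic] section, so in context the file would look something like this (the URL here is just a placeholder):

[alembic]
script_location = alembic
# placeholder URL for illustration
sqlalchemy.url = postgresql://scott:tiger@localhost/test
schema_names = test_schema, test_schema_2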
Then you can generate migrations for each schema individually, in env.py, like this:
import re

schema_names = re.split(r',\s*', config.get_main_option('schema_names'))

def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:
        for schema_name in schema_names:
            # point the connection at just this one schema, so that
            # autogenerate compares target_metadata against it alone
            connection.execute("SET search_path TO %s" % schema_name)
            connection.dialect.default_schema_name = schema_name

            context.configure(
                connection=connection,
                target_metadata=target_metadata,
                # route autogenerated directives into per-schema
                # sections of the revision template
                upgrade_token="%s_upgrades" % schema_name,
                downgrade_token="%s_downgrades" % schema_name,
            )

            with context.begin_transaction():
                context.run_migrations(schema_name=schema_name)
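For context, this assumes the usual module-level target_metadata in env.py; a minimal sketch of the model side, matching the t1 table in the output below (names here are illustrative), would be:

import sqlalchemy as sa

# one plain MetaData; tables carry no explicit schema=, so each
# comparison happens against whatever schema the search_path points at
target_metadata = sa.MetaData()

t1 = sa.Table(
    't1', target_metadata,
    sa.Column('id', sa.Integer, primary_key=True),
)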
You also need to modify script.py.mako to render these sections; note that the schema_name keyword passed to context.run_migrations() above is what gets forwarded to the script-level upgrade()/downgrade() functions:
def upgrade(schema_name):
    globals()["upgrade_%s" % schema_name]()

def downgrade(schema_name):
    globals()["downgrade_%s" % schema_name]()

<%
import re
schema_names = re.split(r',\s*', config.get_main_option('schema_names'))
%>

% for schema_name in schema_names:

def upgrade_${schema_name}():
    ${context.get("%s_upgrades" % schema_name, "pass")}

def downgrade_${schema_name}():
    ${context.get("%s_downgrades" % schema_name, "pass")}

% endfor
When you run an autogenerate, you will get this:
def upgrade(schema_name):
    globals()["upgrade_%s" % schema_name]()

def downgrade(schema_name):
    globals()["downgrade_%s" % schema_name]()

def upgrade_test_schema():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('t1',
    sa.Column('id', sa.Integer(), nullable=False),
    sa.PrimaryKeyConstraint('id')
    )
    # ### end Alembic commands ###

def downgrade_test_schema():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('t1')
    # ### end Alembic commands ###

def upgrade_test_schema_2():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('t1',
    sa.Column('id', sa.Integer(), nullable=False),
    sa.PrimaryKeyConstraint('id')
    )
    # ### end Alembic commands ###

def downgrade_test_schema_2():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('t1')
    # ### end Alembic commands ###
Note that "schema=<name>" is still not present in those directives, but it's not needed, since your env.py sets the search path to just that single schema; the "system" schema is not included (and is not needed). When you run migrations (e.g. "alembic upgrade head"), that same env.py logic applies, so each set of DDL will run within the scope of that search_path. I have tested this and can confirm it works.
Now if you really *want* the "schema=<name>" anyway, you can add it in
with a rewriter, like this:
with connectable.connect() as connection:

    def process_revision_directives(context, revision, directives):
        # stamp every autogenerated op (and its sub-ops) with the
        # schema name stashed in the context opts below
        script = directives[0]
        schema_name = context.opts['x_schema_name']

        upgrade_ops = script.upgrade_ops_list[-1]
        downgrade_ops = script.downgrade_ops_list[-1]
        for op in upgrade_ops.ops + downgrade_ops.ops:
            op.schema = schema_name
            if hasattr(op, "ops"):
                for sub_op in op.ops:
                    sub_op.schema = schema_name

    for schema_name in ['test_schema', 'test_schema_2']:
        conn = connection
        conn.execute("SET search_path TO %s" % schema_name)
        conn.dialect.default_schema_name = schema_name

        context.configure(
            connection=conn,
            target_metadata=target_metadata,
            upgrade_token="%s_upgrades" % schema_name,
            downgrade_token="%s_downgrades" % schema_name,
            process_revision_directives=process_revision_directives,
            # extra kwarg lands in context.opts for the hook above
            x_schema_name=schema_name
        )

        with context.begin_transaction():
            context.run_migrations(schema_name=schema_name)
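With the rewriter in place, autogenerate should then render the schema explicitly on each directive, along these lines (same hypothetical t1 table as above):

def upgrade_test_schema():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('t1',
    sa.Column('id', sa.Integer(), nullable=False),
    sa.PrimaryKeyConstraint('id'),
    schema='test_schema'
    )
    # ### end Alembic commands ###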
I've added an extra param to the context for each schema so the above
revision processing function can get at it.
I was working on this as a new cookbook section, which is WIP at https://gerrit.sqlalchemy.org/#/c/zzzeek/alembic/+/864/. However, I'm still skeptical that the use case of the same model for multiple schemas that are *not* identical is going to be that useful, so I may amend this recipe to work like the "multidb" template and use individual MetaData objects per schema as the main example; it can be modified to use a single MetaData if that's what someone wants.