autogenerate detects changes but creates empty migration


Colleen Ross

May 29, 2013, 6:56:54 PM
to sqlalchem...@googlegroups.com
I'm trying to get Alembic to auto-produce migrations with the --autogenerate flag.

When I run alembic -n mydbname revision --autogenerate -m "my message" I get something like:

INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.autogenerate] Detected removed table u'some_table'
INFO  [alembic.autogenerate] Detected NULL on column 'table_a.column_a'
INFO  [alembic.autogenerate] Detected added column 'table_b.column_b'
.......

but when I look at the generated migration file, upgrade() and downgrade() both simply say pass.

What's going on?!

Michael Bayer

May 29, 2013, 8:14:15 PM
to sqlalchem...@googlegroups.com
shrugs, what does the rest of the output say?

Colleen Ross

May 30, 2013, 12:24:21 AM
to sqlalchem...@googlegroups.com
Just "creating migration file <blah> at <path>"
I don't have the exact output on me at the moment, but there were no errors.

(will edit when back in the office with the exact output)

Michael Bayer

May 30, 2013, 1:09:38 AM
to sqlalchem...@googlegroups.com
I'd need a lot more detail in order to be of any assistance.


Colleen Ross

May 30, 2013, 12:48:10 PM
to sqlalchem...@googlegroups.com
I really appreciate you looking into this. Here is a lot more detail, and you can tell me if I'm missing anything else.

1. a description of our setup and my install:

We have 3 databases (which I'll refer to here as db1, db2, and db3), and up until now we have managed them with sqlalchemy-migrate. They're all on the same machine locally, and right now I'm just trying to get this working locally. I:
- installed alembic via pip
- created an "alembic-migrations" directory
- ran alembic init --template multidb db2
- copied the db2 directory produced and made similar db1 and db3 directories
- edited the .ini file to have a section for each of db1, db2, and db3, and added a script_location and a sqlalchemy.url to each section
- modified db_names in db2's env.py (the one I've been doing all my testing on) to simply say db_names = 'db2'
- modified target_metadata to point to <db2's models file>.Base.metadata
- ran the command below

2. the full output (I added some warn level log statements to env.py)

[cross@cross alembic-migrations]$ alembic -n db2 revision --autogenerate -m "added column a to table a"
WARNI [env_py] db2
WARNI [env_py] db2
WARNI [env_py] MetaData(bind=None)
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.autogenerate] Detected removed table u'migrate_version'
INFO  [alembic.autogenerate] Detected NULL on column 'table_b.column_b'
INFO  [alembic.autogenerate] Detected added column 'table_a.column_a'
INFO  [alembic.autogenerate] Detected NOT NULL on column 'table_a.column_c'
INFO  [alembic.autogenerate] Detected removed column 'table_c.column_d'
INFO  [alembic.autogenerate] Detected removed column 'table_c.column_e'
INFO  [alembic.autogenerate] Detected removed column 'table_c.column_f'
INFO  [alembic.autogenerate] Detected removed column 'table_c.column_g'
WARNI [env_py] I ran them migrations
WARNI [env_py] I committed
  Generating /home/cross/reponame/alembic-migrations/db2/versions/5153fd93e1e2_added_column_a_.py...done

3. The full contents of the generated file

[cross@cross alembic-migrations]$ cat db2/versions/5153fd93e1e2_added_column_a_.py 
"""added column_a to table_a

Revision ID: 5153fd93e1e2
Revises: None
Create Date: 2013-05-29 23:43:55.139871

"""

# revision identifiers, used by Alembic.
revision = '5153fd93e1e2'
down_revision = None

from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import mysql

def upgrade(engine_name):
    eval("upgrade_%s" % engine_name)()


def downgrade(engine_name):
    eval("downgrade_%s" % engine_name)()





def upgrade_db2():
    pass


def downgrade_db2():
    pass

4. the contents of alembic.ini

# a multi-database configuration.

[alembic]

# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

[db1]
sqlalchemy.url = mysql://username:password@localhost/db1?charset=utf8&use_unicode=0
# path to migration scripts
script_location = db1
databases = db1

[db2]
sqlalchemy.url = mysql://username:password@localhost/db2?charset=utf8&use_unicode=0
# path to migration scripts
script_location = db2
databases = db2

[db3]
sqlalchemy.url = mysql://username:password@localhost/db3?charset=utf8&use_unicode=0
# path to migration scripts
script_location = db3
databases = db3


# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S


5. the contents of db2/env.py

from __future__ import with_statement
from alembic import context
from sqlalchemy import engine_from_config, pool
from logging.config import fileConfig
import logging
import re

USE_TWOPHASE = False

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
logger = logging.getLogger(__name__)

# gather section names referring to different
# databases.  These are named "engine1", "engine2"
# in the sample .ini file.
db_names = 'db2'

# add your model's MetaData objects here
# for 'autogenerate' support.  These must be set
# up to hold just those tables targeting a
# particular database. table.tometadata() may be
# helpful here in case a "copy" of
# a MetaData is needed.
from reponame.db2 import models

target_metadata = {
      'db2': models.Base.metadata,
}
#target_metadata = {}

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.

def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well.  By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    # for the --sql use case, run migrations for each URL into
    # individual files.

    engines = {}
    for name in re.split(r',\s*', db_names):
        engines[name] = rec = {}
        rec['url'] = context.config.get_section_option(name,
                                            "sqlalchemy.url")

    for name, rec in engines.items():
        logger.info("Migrating database %s" % name)
        file_ = "%s.sql" % name
        logger.info("Writing output to %s" % file_)
        context.configure(
                    url=rec['url'],
                    output_buffer=open(file_, 'w')
                )
        with context.begin_transaction():
            context.run_migrations(engine_name=name)

def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """

    # for the direct-to-DB use case, start a transaction on all
    # engines, then run all migrations, then commit all transactions.


    engines = {}
    for name in re.split(r',\s*', db_names):
        logger.warn(name)
        engines[name] = rec = {}
        rec['engine'] = engine_from_config(
                                    context.config.get_section(name),
                                    prefix='sqlalchemy.',
                                    poolclass=pool.NullPool)

    for name, rec in engines.items():
        engine = rec['engine']
        rec['connection'] = conn = engine.connect()

        if USE_TWOPHASE:
            rec['transaction'] = conn.begin_twophase()
        else:
            rec['transaction'] = conn.begin()

    try:
        for name, rec in engines.items():
            logger.warn(name)
            logger.warn(target_metadata.get(name))
            logger.info("Migrating database %s" % name)
            context.configure(
                        connection=rec['connection'],
                        upgrade_token="%s_upgrades",
                        downgrade_token="%s_downgrades",
                        target_metadata=target_metadata.get(name)
                    )
            context.run_migrations(engine_name=name)
            logger.warn("I ran them migrations")

        if USE_TWOPHASE:
            for rec in engines.values():
                rec['transaction'].prepare()

        for rec in engines.values():
            rec['transaction'].commit()
            logger.warn("I committed")
    except:
        logger.error("hey, an error!")
        for rec in engines.values():
            rec['transaction'].rollback()
        raise
    finally:
        for rec in engines.values():
            rec['connection'].close()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()



Hopefully that's enough information, but again, if not, please let me know what else I can provide!

Michael Bayer

May 30, 2013, 1:17:20 PM
to sqlalchem...@googlegroups.com
1. has this setup worked before with autogenerate and is it suddenly failing for no reason, and

2. what's in your script.py.mako?

Colleen Ross

May 30, 2013, 1:20:11 PM
to sqlalchem...@googlegroups.com
1. No, this is our *very first* test run with alembic.

2. standard:

<%!
import re

%>"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}


"""

# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}


from alembic import op
import sqlalchemy as sa
${imports if imports else ""}


def upgrade(engine_name):
    eval("upgrade_%s" % engine_name)()


def downgrade(engine_name):
    eval("downgrade_%s" % engine_name)()

<%
    db_names = config.get_main_option("databases")
%>

## generate an "upgrade_<xyz>() / downgrade_<xyz>()" function
## for each database name in the ini file.

% for db_name in re.split(r',\s*', db_names):

def upgrade_${db_name}():
    ${context.get("%s_upgrades" % db_name, "pass")}


def downgrade_${db_name}():
    ${context.get("%s_downgrades" % db_name, "pass")}

% endfor

Michael Bayer

May 30, 2013, 1:45:12 PM
to sqlalchem...@googlegroups.com
the "multidb" setup is not one that people use very often so it might have issues.

In particular, those upgrade/downgrade tokens look weird. I will have to test on this end, but can you try it like this in env.py (note the "% name" added to each of those tokens):

            context.configure(
                        connection=rec['connection'],
                        upgrade_token="%s_upgrades" % name,
                        downgrade_token="%s_downgrades" % name,
                        target_metadata=target_metadata.get(name)
                    )


If that's the case, then this would be a bugfix on our end to that template.
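
As a minimal sketch of why that matters, assuming the engine name is "db2": the script.py.mako you pasted picks up the rendered operations with context.get("%s_upgrades" % db_name, "pass"), i.e. under the key "db2_upgrades", so the token configured in env.py has to produce that same key:

    name = "db2"

    # literal token: autogenerate renders its operations under the key
    # "%s_upgrades"; the template then looks for "db2_upgrades", finds
    # nothing, and falls back to "pass"
    upgrade_token = "%s_upgrades"

    # interpolated token: the operations land under "db2_upgrades",
    # which is exactly the key the template looks up
    upgrade_token = "%s_upgrades" % name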

Colleen Ross

May 30, 2013, 3:20:40 PM
to sqlalchem...@googlegroups.com
It seems very likely that's what the problem was.

Unfortunately, due to the difficulty of setup, the lack of documentation, and the difficulty of using SQLAlchemy 0.7 with Alembic, we'll be postponing our switch.

Michael Bayer

May 30, 2013, 3:27:37 PM
to sqlalchem...@googlegroups.com
You're doing multi-DB with migrate, huh? The irony is that Alembic was developed because migrate made such things almost impossible without extensive monkeypatching.

Colleen Ross

May 30, 2013, 3:32:44 PM
to sqlalchem...@googlegroups.com
Oh, I believe that. At this point, though, we've hacked it to the point that the multidb-ness is not a problem; it's that there are a lot of unwritten rules about which operations you can't combine (altering multiple tables, updating multiple tables, altering and then updating the same table, etc.). I was looking at Alembic because those undocumented rules turned 2 migrations into 10 (with an upgrade and a downgrade for each, ugh!).

I'm sure we'll switch over after we upgrade to SQLAlchemy 0.8.

Michael Bayer

May 30, 2013, 3:42:31 PM
to sqlalchem...@googlegroups.com
Yeah, that's what I thought: you had to read migrate's source and hack the crap out of it, so it's not like Alembic has failed where Migrate has succeeded. Alembic was written to improve upon Migrate, and most people see the exposing of individual ALTER operations as a good thing, since you don't need to duplicate your entire Table metadata in each migration script.
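
To make that concrete, an op-based migration expresses each schema change as a standalone operation, without restating the table's full definition; a minimal, illustrative sketch (the table and column names here are made up):

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # add one column; the rest of table_a's metadata never needs to be repeated
        op.add_column('table_a', sa.Column('column_a', sa.String(50), nullable=True))

    def downgrade():
        op.drop_column('table_a', 'column_a')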