I want to use alembic in a Trac-based project.
Currently I have three main issues:
1. Is the current DB up to date?
Trac will query all plugins to check whether they need a DB upgrade. I'd like
to ask alembic whether the DB conforms to 'head' or not. Currently I create a
temporary, fake alembic file and call context._migrations_fn manually. However,
that seems a bit brittle.
I'd like to see some API call where I hand in the 'script location', a plain
DB connection, and possibly dialect information. It's enough if just True/False
is returned.
2. Run the actual upgrade to head
I have a plain DB connection and I want to run alembic's upgrade.
3. Configure alembic's version table name. Currently it is hard-coded, but in
my situation there can be multiple plugins using alembic, so I'd like to
configure a specific table name for each of them.
Ideally the API does not touch any thread-unsafe global variables.
Is this within alembic's scope? Any hints on how an implementation should
look, or design considerations?
fs
It sounds like you might be on version 0.1; 0.2 has moved to a model that doesn't rely on any kind of globals to operate. An overview of this architecture is here: http://alembic.readthedocs.org/en/latest/api.html
Version table name would be a fairly simple TODO.
Running the upgrade in the simple case is usually done via the command module. No config file is necessary, but since you are working with a script location, I assume env.py is present. You would need to alter env.py so it can receive a hand-placed Connection. But I guess this counts in your mind as a "global"; with SQLAlchemy there's usually a Session registry that's a thread-local global in any case, so this is usually already there anyway.
So first that approach:
from alembic.config import Config
from alembic import command
alembic_cfg = Config()
alembic_cfg.set_main_option("script_location", some_location)
Session.configure(bind=mybind) # your env.py needs to use this
command.upgrade(alembic_cfg, "head")
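The Session registry referred to here is typically SQLAlchemy's scoped session; its role can be modeled with just the standard library. This is a sketch of the pattern, not SQLAlchemy code, and all names are invented:

```python
import threading

class Registry:
    """Minimal thread-local bind registry, sketching the role a
    SQLAlchemy Session registry plays for env.py."""

    def __init__(self):
        self._local = threading.local()

    def configure(self, bind):
        # called by the application before running migrations
        self._local.bind = bind

    def get_bind(self):
        # called from env.py-style code to obtain the hand-placed connection
        return getattr(self._local, "bind", None)

Session = Registry()

# application side:
Session.configure(bind="my-connection")   # a real Connection in practice
# env.py side:
print(Session.get_bind())  # my-connection
```

Because the bind lives in a threading.local, each thread sees only the connection it configured itself, which is what makes the "global" tolerable.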
If skipping the commands, you'd use MigrationContext directly. I'd avoid dealing with _migrations_fn, as it relies upon env.py and you're trying to go straight into the API without indirection. There's a clear path to making these functions available as cleanly as they are in the alembic.command module.
from alembic.config import Config
from alembic.script import ScriptDirectory
from alembic.migration import MigrationContext
alembic_cfg = Config()
alembic_cfg.set_main_option("script_location", some_location)
conn = myengine.connect()
script = ScriptDirectory.from_config(alembic_cfg)
ctx = MigrationContext.configure(conn, opts={"script": script})
From that point on, ctx and script are set to go. The interfaces of MigrationContext and ScriptDirectory can be enhanced (meaning, once we work out the use cases, we'll add these to Alembic as public API) to expose selected functions publicly, such as a nicer version of ctx._current_rev(), which you'd compare to the result of script._current_head() in order to check your head revision.
The upgrade/downgrade commands receive a list of versions from a call like script.upgrade_from(script._current_head(), ctx._current_rev(), ctx). It's easy enough to split MigrationContext.run_migrations into two methods, one of which receives the version list, then add convenience methods like run_upgrades(start, end) and run_downgrades(start, end) that combine the script call with running the upgrades, skipping the usage of self._migrations_fn.
We should be able to add a series of public methods to MigrationContext such as get_current_revision(), inspect_revision("head"), run_upgrades(start, end), run_downgrades(start, end) to make these possible.
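The mechanics of a run_upgrades(start, end) call can be sketched in isolation. In this model, migrations are plain functions in a linear chain keyed by revision id, and the version table row is advanced after each step; all names and the table layout are hypothetical, not alembic's internals:

```python
import sqlite3

# a linear chain of (revision, upgrade function) pairs, oldest first
REVISIONS = [
    ("a1", lambda conn: conn.execute(
        "CREATE TABLE project (project_name TEXT PRIMARY KEY)")),
    ("b2", lambda conn: conn.execute(
        "ALTER TABLE project ADD COLUMN owner TEXT")),
]

def current_rev(conn):
    row = conn.execute("SELECT version_num FROM alembic_version").fetchone()
    return row[0] if row else None

def run_upgrades(conn, start, end):
    """Apply every revision after `start` up to and including `end`,
    updating the version table row as each one completes."""
    ids = [rev for rev, _ in REVISIONS]
    begin = 0 if start is None else ids.index(start) + 1
    for rev, fn in REVISIONS[begin:ids.index(end) + 1]:
        fn(conn)
        conn.execute("DELETE FROM alembic_version")
        conn.execute("INSERT INTO alembic_version VALUES (?)", (rev,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alembic_version (version_num TEXT)")
run_upgrades(conn, None, "b2")
print(current_rev(conn))  # b2
```

Keeping the version bookkeeping inside run_upgrades is exactly what removes the need to call something like _update_current_rev by hand.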
I used your second example, using the MigrationContext directly:
...
upgrades = script.upgrade_from(current_head, current_revision, context)
for upgrade, down_revision, revision in upgrades:
    upgrade()
However, it looks like I have to do something more, as I get this traceback:
...
  File "…/versions/12ca2606697f_create_project_table.py", line 22, in upgrade
    Column('project_name', Unicode, primary_key=True),
  File "<string>", line 3, in create_table
NameError: global name '_proxy' is not defined
fs
I assume the problem is that I use
from alembic import op
in my migration script, and 'op' is always context-dependent. Can you point me
towards a method by which I could inject my op in a non-global way when running
the 'upgrade'?
Btw: Just for completeness, I had to update the alembic version manually:
context._update_current_rev(previous_revision, revision)
fs
On 24.02.2012 at 22:18, Michael Bayer wrote:
> But then env.py still runs assuming a global configuration is
> present. (...)
I removed the need for env.py in my system by monkey-patching. However, I may
try injecting the op into locals(), or alternatively add a thread-local which
the scripts can use for imports.
I figured that within a closely defined environment (a Trac plugin), I don't
need an env.py, as all the connection setup is well known.
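The thread-local idea mentioned above could look roughly like this. This is an illustrative sketch only, not the code in the repository, and FakeOperations merely stands in for the real operations object:

```python
import threading

_proxy = threading.local()

class OpProxy:
    """Stand-in for 'op' that forwards attribute access to whatever
    operations object the current thread has installed."""

    def __getattr__(self, name):
        return getattr(_proxy.target, name)

op = OpProxy()

class FakeOperations:
    # models an operations object for this sketch
    def create_table(self, name, *cols):
        return ("created", name)

# before running a migration, install the per-thread target:
_proxy.target = FakeOperations()
# migration scripts then do `from myplugin import op` and call:
print(op.create_table("project"))  # ('created', 'project')
```

A thread that never installed a target gets an AttributeError instead of silently using another thread's operations object, which is the non-global behavior I'm after.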
Btw: If you're curious how all of this plays together, here's my hg:
https://www.schwarz.eu/opensource/hg/tracalchemy/
It's currently alpha and only supports online migration, but that's just fine for my purposes.
> but I don't quite see the use case for running multiple
> migrations in multiple threads simultaneously.
This might happen for Trac if Trac itself runs with multiple threads. Each
thread serves one request, potentially with a different Trac environment, and
each env has a different DB. Therefore, theoretically, two requests might
cause DB upgrades at the same time, though that's pretty unlikely.
Thanks again for your help. I think getting this to a 'works 80%' state
should be pretty trivial now.
fs
When does Trac run database migrations within the web application? Trac upgrades are done via the trac-admin upgrade utility.
There's a hacky plugin, though I just noticed it's 'up for adoption':
http://trac-hacks.org/wiki/AutoUpgradePlugin
fs
Even then, the upgrade should be across the board and should be non-concurrent.
Though I guess the topic of concurrent schema upgrades, say if you had 500 servers all needing the same migrations, is maybe something we'd want to think about.
On 27.02.2012 at 22:26, Michael Bayer wrote:
> yeah I'm skeptical as to how appropriate a plugin like that is, unless Trac itself is auto-upgrading, which to my knowledge it isn't (right?)
Right.
Honestly, I'm not too concerned about concurrent upgrades. I just want to
avoid global stuff when it's easy (and I think for my Trac usage I can achieve
that more or less), but otherwise I won't bother too much.
Thanks, by the way, for alembic; it really works great for me here. Your
support here is appreciated too!
fs