Greetings, all! This is my first django-dev post, so please be gentle. :-) I hack on support.mozilla.com, a fairly large Django site with about 1000 tests. Those tests make heavy use of fixtures and, as a result, used to take over 5 minutes to run. So, I spent a few days seeing if I could cut the amount of DB I/O needed. Ultimately, I got the run down to just over 1 minute, and almost all of those gains are translatable to any Django site running against a transactional DB. No changes to the apps themselves are needed. I'd love to push some of this work upstream, if there's interest (or even lack of opposition ;-)).
The speedups came from 3 main optimizations:
1. Class-level fixture setup
Given a transactional DB, there's no reason to reload fixtures via dozens of SQL statements before every test. I made use of setup_class() and teardown_class() (yay, unittest2!) to change the flow for TestCase-using tests to this:
a. Load the fixtures at the top of the class, and commit.
b. Run a test.
c. Roll back, returning to pristine fixtures. Go back to step b.
d. At class teardown, figure out which tables the fixtures loaded into, and expressly clear out what was added.
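The commit-then-rollback mechanics behind steps a–c can be sketched like this; a minimal illustration using stdlib sqlite3 rather than Django's ORM (the table and fixture contents are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE forum_post (id INTEGER PRIMARY KEY, title TEXT)")

# (a) Load fixtures once, at class setup, and commit them.
conn.execute("INSERT INTO forum_post (title) VALUES ('pristine fixture row')")
conn.commit()

def run_test(connection):
    # (b) A test may freely mutate fixture data...
    connection.execute("UPDATE forum_post SET title = 'mutated by test'")
    connection.execute("INSERT INTO forum_post (title) VALUES ('extra row')")

for _ in range(3):  # three tests in the "class"
    run_test(conn)
    # (c) ...because rolling back restores the committed fixtures.
    conn.rollback()

rows = conn.execute("SELECT title FROM forum_post").fetchall()
assert rows == [("pristine fixture row",)]

# (d) At class teardown, clear out only the tables the fixtures touched.
conn.execute("DELETE FROM forum_post")
conn.commit()
```

The point is that the fixtures are written to disk once per class; each test pays only for a rollback, which the engine makes cheap.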
Before this optimization: 302s to run the suite
After: 97s.
Before: 37,583 queries
After: 4,116
On top of that, an additional 4s was saved by reusing a single connection rather than opening and closing them all the time, bringing the final number down to 93s. (We can get away with this because we're committing any on-cursor-initialization setup, whereas the old TestCase rolled it back.)
Here's the code: https://github.com/erikrose/test-utils/blob/master/test_utils/__init__.py#L121. I'd love to generalize it a bit (to fall back to the old behavior with non-transactional backends, for example) and offer it as a patch to Django proper, replacing TestCase. Thoughts?
(If you notice that copy-and-paste of loaddata sitting off to the side in another module, don't fret; in the patch, that would turn into a refactoring of loaddata to make the computation of the fixture-referenced tables separately reusable.)
2. Fixture grouping
I next observed that many test classes reused the same sets of fixtures, often via subclassing. After the previous optimization, our tests still loaded fixtures 114 times, even though there were only 11 distinct sets of them. So, I thought: why not write a custom test runner that buckets the classes by fixture set and advises the classes that, unless they're the first or last in a bucket, they shouldn't bother tearing down or setting up the fixtures, respectively? This took the form of a custom nose plugin (we use nose for all our Django stuff), and it took another quarter off the test run:
Before: 97s
After: 74s
Of course, test independence is still preserved. We're just factoring out pointlessly repeated setup.
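The bucketing idea can be sketched in a few lines of plain Python; the class and fixture names here are invented for illustration:

```python
from collections import defaultdict

# Group test classes by their (order-insensitive) fixture set, so that only
# the first class in each bucket sets fixtures up and only the last tears
# them down.
class QuestionTests:
    fixtures = ["users.json", "questions.json"]

class AnswerTests:
    fixtures = ["users.json", "questions.json"]

class ProfileTests:
    fixtures = ["users.json"]

buckets = defaultdict(list)
for cls in (QuestionTests, AnswerTests, ProfileTests):
    buckets[frozenset(cls.fixtures)].append(cls)

for bucket in buckets.values():
    for cls in bucket:
        # Advise each class whether it may skip fixture setup/teardown.
        cls.skip_fixture_setup = cls is not bucket[0]
        cls.skip_fixture_teardown = cls is not bucket[-1]
```

Here AnswerTests lands in QuestionTests' bucket, so it inherits that class's already-loaded fixtures instead of reloading them.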
I don't really have plans to upstream this unless someone calls for it, but I'll be making it available soon, likely as part of django-nose.
3. Startup optimizations
At this point, it was bothering me that, just to run a single test, I had to wait through 15s of DB initialization (mostly auth_permissions and django_content_type population)—stuff which was already perfectly valid from the previous test run. So, building on some work we had already done in this direction, I decided to skip the teardown of the test DB and, symmetrically, the setup on future runs. If you make schema changes, just set an env var, and it wipes and remakes the DB like usual. I could see pushing this into django-nose as well, but it's got the hackiest implementation and can potentially confuse users. I mention it for completeness.
Before: startup time 15s
After: 3s (There's quite a wide variance due to I/O caching luck.)
Code: https://github.com/erikrose/test-utils/commit/b95a1b7
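The keep-the-test-DB-between-runs idea boils down to something like this sketch (stdlib sqlite3 standing in for the real test database; the path and table are invented, and FORCE_DB is the env-var name test-utils uses):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.gettempdir(), "reuse_demo.sqlite")
if os.path.exists(db_path):
    os.remove(db_path)  # start the demo from scratch

def connect_test_db():
    # Rebuild only if the DB is missing or FORCE_DB says the schema changed.
    rebuild = bool(os.environ.get("FORCE_DB")) or not os.path.exists(db_path)
    if rebuild and os.path.exists(db_path):
        os.remove(db_path)  # wipe and remake, as on a schema change
    conn = sqlite3.connect(db_path)
    if rebuild:
        # The expensive one-time setup (syncdb, permissions, content types...).
        conn.execute("CREATE TABLE django_content_type (id INTEGER PRIMARY KEY)")
        conn.commit()
    return conn

connect_test_db().close()  # first "run" pays the full setup cost
conn = connect_test_db()   # later runs reuse the existing database
```

Skipping the teardown at the end of one run is what makes the setup skippable at the start of the next.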
If you read this far, you get a cookie! I welcome your feedback on merging optimization #1 into core, as well as any accusations of insanity re: #2 and #3. FWIW, everything works great without touching any of the tests on 3 of our Django sites, totaling over 2000 tests.
Best regards and wishes for a happy weekend,
Erik Rose
support.mozilla.com
tl;dr: I've written an alternative TestCase base class which makes fixture-using tests much more I/O efficient on transactional DBs, and I'd like to upstream it.
--
You received this message because you are subscribed to the Google Groups "Django developers" group.
So, in the best case, no database setup is required at all to run tests
-- which encourages test driven development :-)
I believe there's no generalized way of creating databases in Django
now, so that would have to be added.
I'd love to hack on that :-)
Jonas
s/creating/copying/
>> 1. Class-level fixture setup
>
> This is the one I'm most interested in. I did a patch a number of months ago to do the fixture parsing, but not DB insertion, on a per-class basis. I didn't find that to be a big win. However, I'm going to be working on a patch to do bulk inserts (that is, a single execute/executemany call for all objects to be inserted), which could be a big win for fixture loading, so I'd kind of like to do that first, to see how big a win this is after that. This is obviously more specialized and invasive (IMO), so if we can get most of the win without it, that might be good enough.
Could you explain what you mean by "to do the fixture parsing"? Did you try to speed up or cache the JSON parsing or something?
It's the per-class setup that yielded the biggest win for 2 of our largest sites: support.mozilla.com and addons.mozilla.com. We have (unsurprisingly) several tests in each class, and this avoids redoing all the I/O for each test. (CPU is practically free in an I/O-using situation like this, so I went straight for the disk writes.)
As for bulk inserts, that would be great to have as well! I'd be surprised if they were a huge win, since, in an MVCC DB, the writes typically happen on commit, and there wouldn't be any fewer of those. On the other hand, it's another way to cut traffic, and engines like MyISAM should benefit more, since they commit immediately. I'd love to see numbers on it. Have you had a chance to bench it?
> Speeding up tests is defintely of interest to me, so thanks for the great work!
You're welcome! I scratched a personal itch, but I hope others can benefit as well.
Toward that, should I work up a Django patch, or would the core team rather I release my work as a pluggable package? I realistically have the time to do only one.
Cheers,
Erik
Patch, please! Fast is good :)
Jacob
So that would be 11 separate DBs for our tests, and you'd just switch between them? Interesting idea. Or are you proposing caching the results of queries for each test class, essentially mocking out the DB?
Perhaps some numbers would illuminate: I clock the total setup and teardown time for support.mozilla.com's 1064 tests at 2.59 seconds after my optimizations. So I'm pretty happy with that. :-) CPU use for the test run is 76% according to the `time` command, so there's a little more I/O to kill, but not much.
Cheers,
Erik
> Also, one thing I'm quickly noticing (I'm a bit confused why it's
> setup_class and not setUpClass as well),
I was writing to nose's hooks; didn't realize Django used unittest2 now!
> but this wont work with
> postgres without changing the DELETE code to work like the test
> runner's TRUNCATE foo, bar; (due to foreign key constraints).
Absolutely. I assume this is what you fix below....
> You can do something like this to handle the
> flushing:
>
>     sql_list = connection.ops.sql_flush(no_style(), tables,
>         connection.introspection.sequence_list())
>     for sql in sql_list:
>         cursor.execute(sql)
Brilliant! Thanks! Say, can you think of any backends in which you actually have to reset the sequences after truncating? That seems like an interesting decoupling to me. MySQL, anyway, does the reset implicitly; perhaps we can optimize its sql_flush routine.
> Unfortunately, you're still reliant that nothing was created with
> signals that uses constraints. For us this is very common, and I can't
> imagine we're an edge case there
Can you tell me of these signals? Which ones? I don't think we use them, but I don't want to overlook them.
Erik
Regarding the signals, basically we have a bunch of post_save type
things, which tend to store aggregate data for certain conditions.
These get populated (in some cases) in our tests, but don't correspond
to a fixture or a model in the same app.
--
David Cramer
http://justcramer.com
Ah, gotcha. So, a couple of solutions off the top of my head. One: just punt and truncate everything (except auth_permission and django_content_type). That would be a shame, since it takes a while. OTOH, if you use fixture grouping (optimization #2), that's a lot more tolerable. Or maybe we could somehow monitor which tables are getting insertions. I'll give it some thought.
I would be very happy to test this against an Oracle database to see how
much the patch improves speed, since running tests against Oracle has
previously been a real pain; all the DB re-creation in particular took a
long, long time.
--
Jani Tiainen
The complications are: 1) in any interesting test suite, there isn't a
single outcome of a+b, because different tests will have different
fixtures and perhaps even different models, so a number of copies will
have to be captured. 2) As with any caching scheme, invalidation is
important and tricky. In the normal course of development, how will
these cached copies of the database be invalidated and recreated?
Perhaps this isn't so bad, it's roughly analogous to writing migrations,
which we know how to deal with.
I don't have any code to do this, but I envision a set of test
databases, with a modified test runner that knows how to cycle among
them by manipulating settings.DATABASES to use the proper one for each
test class. I'd be glad to help build such a thing, and may be working
toward it myself.
--Ned.
Erik: yep
> I'd been thinking recently about this as well: when you consider all the
> test runs, they're very repetitive. Every time the tests are run, they
> go through the same set of steps: a) create database, b) install
> fixtures, c) run tests. Steps a, b, and c take too long. Step c is what
> we're really interested in, and almost always, steps a and b have the
> same outcome as the last time we ran them. We all know what to do if an
> operation takes too long and usually is the same as last time: cache its
> outcome. The outcome in this case is the state of the database. Caching
> it could be as simple as making a copy of the database after the
> fixtures are installed, then using that copy to run tests.
>
> The complications are: 1) in any interesting test suite, there isn't a
> single outcome of a+b, because different tests will have different
> fixtures and perhaps even different models, so a number of copies will
> have to be captured. 2) As with any caching scheme, invalidation is
> important and tricky. In the normal course of development, how will
> these cached copies of the database be invalidated and recreated?
> Perhaps this isn't so bad, it's roughly analogous to writing migrations,
> which we know how to deal with.
Invalidation is what I'm unsure about too -- multiple ideas came to my
mind, all involving some sort of Great Hash(tm):
1) Use file modification timestamps for all model and test related files.
Advantages: simple, works.
Disadvantages: Triggers cache invalidation for changes not related to
models or tests
2) #1 but do hash the model definitions (at Python level)
Advantages: no cache invalidation on non-model changes.
Disadvantages: tricky, triggers cache invalidation for changes not
related to tests
3) Hash the SQL generated for setup/fixtures. (step in right before the
SQL is sent to the database)
Advantages: No false-positives, simple
Disadvantages: Does not eliminate the need for SQL generation and
fixture parsing + model creation, so this might not be the "highest of
highs" ;-)
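Option #1 is simple enough to sketch directly; the "Great Hash" is just the file paths and mtimes folded into one digest (the usage line is hypothetical):

```python
import hashlib
import os

def great_hash(paths):
    """Fold the modification times of all model- and test-related files
    into one digest; if it differs from the value saved alongside the
    cached test DB, the cache is stale and the DB must be rebuilt."""
    h = hashlib.sha1()
    for path in sorted(paths):  # sort so ordering can't change the hash
        h.update(path.encode())
        h.update(str(os.path.getmtime(path)).encode())
    return h.hexdigest()

# Hypothetical usage:
#   rebuild_needed = great_hash(model_and_test_files) != saved_hash
```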
Jonas
Even within a single test command run, the same DB setup and same
fixture loads are done many times (for a sizable suite). Invalidating
too often is better than invalidating too little.
> 1) Use file modification timestamps for all model and test related files.
> Advantages: simple, works.
> Disadvantages: Triggers cache invalidation for changes not related to models
> or tests
I think this is a pretty big win, even though it's not theoretically optimal.
Actually in the Oracle backend, resetting the sequence for an empty
table is currently a no-op for transactional reasons.
--Ned.
The difference is, migrations can be merged. Database cache is local
state. No?
--Ned.
Only for "does-it-still-work" sort of tests. Not for test-driven
development, because your models and tests change all the time.
Well, we're debating various ways to improve, and I'm saying, let's do
the simplest thing that will work to raise the chance that it'll
actually get done. :-)
I declare myself bike-shedding. Given the 3 options, I'm:
+1 on #1
+0 on #2
-0 on #3
Heh, I was just going to quietly sit here and do that while everybody else kept mailing. :-)
Great! I'll post again to this thread when the patch is ready. Or, if you'd like to try it now, you can download https://github.com/jbalogh/test-utils and make your test classes subclass FastFixtureTestCase rather than TestCase.
> 3) Hash the SQL generated for setup/fixtures. (step in right before the SQL is sent to the database)
> Advantages: No false-positives, simple
> Disadvantages: Does not eliminate the need for SQL generation and fixture parsing + model creation, so this might not be the "highest of highs" ;-)
FWIW, I tried this a while back while trying to get rid of the necessity of the FORCE_DB option in test-utils (https://github.com/jbalogh/test-utils/blob/master/test_utils). It added 2 seconds to our test startup time, so I decided against it. It wasn't clear from the profiles that string manipulation (SQL generation) was taking much of that time, so I decided to forgo the model-metadata deep-comparison code I had thought about writing.
SQL 2003 introduced CREATE TABLE x LIKE y for cloning the schema of a table. It's supported in MySQL at least. You could then do a bunch of INSERT INTO ... SELECTs if you deferred foreign key checks first.
Wait, why do you want to?
I suspect he was thinking of PostgreSQL's support for template
databases. It skips parsing overhead, so that creating a copy of a
template is roughly disk-bound.
Ah yes. I've been away from my old friend Postgres for a few years. :-)
Sometimes, in order to rescue data from an overfull table (because the
cleanup-job had died and a DELETE would take too long) I've done the
following:
- start transaction
- rename bad table
- recreate the table (CREATE TABLE x LIKE would work)
- INSERT INTO ... SELECT good data into the recreated table from the
renamed table
- drop renamed table
- end transaction
This works even when the system is up and running, on production servers.
HM
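That rescue recipe can be sketched with stdlib sqlite3 (which lacks CREATE TABLE ... LIKE, so the schema is repeated explicitly; the table name and the "good data" condition are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, level TEXT)")
conn.executemany("INSERT INTO log (level) VALUES (?)",
                 [("DEBUG",)] * 5 + [("ERROR",)] * 2)

with conn:  # one transaction wraps the whole swap
    conn.execute("ALTER TABLE log RENAME TO log_bad")   # rename bad table
    conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, level TEXT)")
    conn.execute("INSERT INTO log SELECT * FROM log_bad "
                 "WHERE level = 'ERROR'")               # keep the good data
    conn.execute("DROP TABLE log_bad")                  # drop renamed table
```

Readers during the transaction still see the old table, which is why this is tolerable on a live server.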
I tried adding a PostgreSQL "disable constraints" statement here:
https://github.com/jbalogh/test-utils/blob/master/test_utils/__init__.py#L109
cursor.execute('SET CONSTRAINT ALL DEFERRED')
It didn't help.
Thanks,
--Ned.
> When I try this on a PostgreSQL database, I have problems relating to violated uniqueness constraints, sometimes from tests themselves, sometimes from setUpClass, sometimes from tearDownClass. In the latter two cases, it's the sites table involved. Is this something others have dealt with, or am I on my own? :)
Do you perchance have anything getting inserted into tables other than the ones explicitly mentioned in your fixtures, perhaps by a post_save signal? The code I have up at the moment doesn't deal with that. A cheap way to make it work would be to truncate *all* tables on teardown_fixtures (slow, ick).
I will be sure to give it a swing with Postgres; I hope to find time to work on the Django patch sometime this week.
Erik
I only wanted to point out that optimization of fixture loading in tests
is the topic of ticket #9449 in the Django bug tracker.
The two contributors who have participated in the discussion there so far
took the fixture-caching path, but I think the ticket can be considered
to cover the general problem and so is open to other solution strategies.
--
Ramiro Morales
This might be related to ticket #11665, a known issue in the TestCase
handling of constraints with pg. The suggestion there is to use
SET CONSTRAINTS ALL IMMEDIATE
before the rollback.
HTH
--
Ramiro Morales
--Ned.