Tests got ~10x slower after upgrading to Django 1.3

Cesar Canassa

Jun 21, 2011, 1:31:26 PM
to django...@googlegroups.com
I am working on a large Django project that currently uses version 1.2.3. The project test suite has around 500 tests, which take ~50 seconds to run on my machine.

I recently upgraded to Django 1.3. The upgrade has been smooth so far; most problems came from the SQL Server driver, but that isn't used in the test suite, since the test database is SQLite. The only thing I had to change in the project itself was removing the jogging app, since 1.3 has built-in logging facilities.

The problem is that the test suite now takes ~500 seconds to run; that's about 10 times longer than it did with version 1.2.3.

Has anyone had a similar issue? I have a feeling that this is related to running Django under Windows.


Regards,
Cesar Canassa

Russell Keith-Magee

Jun 21, 2011, 8:48:31 PM
to django...@googlegroups.com

I can't say I've heard any such reports -- if anything, 1.3 testing
should be faster, due to a number of optimizations in the test startup
process.

If you identify the source of the slowdown, I'd be interested in
hearing about it in case it is a serious regression on Django's part.

Yours,
Russ Magee %-)

Cesar Canassa

Jun 23, 2011, 11:54:30 PM
to django...@googlegroups.com
Hi Russ!

I did some more testing and found out that each test is taking an extra ~1s to run, no matter the size of the test. Take this TestCase, for example; it takes 4.067s to run:

from django.test import TestCase

class JobTest(TestCase):
    def test_1(self):
        pass

    def test_2(self):
        pass

    def test_3(self):
        pass

    def test_4(self):
        pass

I noticed that if I replace django.test.TestCase with django.utils.unittest.TestCase, the time drops to 0.009s. I think the problem lies in the TransactionTestCase._fixture_setup method:

    def _fixture_setup(self):
        # If the test case has a multi_db=True flag, flush all databases.
        # Otherwise, just flush default.
        if getattr(self, 'multi_db', False):
            databases = connections
        else:
            databases = [DEFAULT_DB_ALIAS]
        for db in databases:
            call_command('flush', verbosity=0, interactive=False, database=db)

            if hasattr(self, 'fixtures'):
                # We have to use this slightly awkward syntax due to the fact
                # that we're using *args and **kwargs together.
                call_command('loaddata', *self.fixtures, **{'verbosity': 0, 'database': db})

The flush is what takes ~1s to run. If I comment out that line, the time drops to almost zero.
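
If anyone wants to reproduce this, timing the flush call directly is enough; a minimal sketch (run from inside a test, so the test database already exists) would be:

import time

from django.core.management import call_command

# Sketch: time a single flush of the default test database. This mirrors
# the call made in TransactionTestCase._fixture_setup above.
start = time.time()
call_command('flush', verbosity=0, interactive=False, database='default')
print('flush took %.3f seconds' % (time.time() - start))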

Does this make any sense? Did Django 1.3 change this behavior?

I am downloading the Django 1.2.5 source to do more investigation.


Best regards,
Cesar Canassa

Russell Keith-Magee

Jun 24, 2011, 12:39:40 AM
to django...@googlegroups.com

Thanks for that analysis, Cesar.

Flush has always been a bit of a performance hole -- especially on
MySQL MyISAM. However, I'm not aware of any change we made in 1.3 that
should make it any worse than it was in 1.2. The biggest factor affecting
the speed of a flush (that I'm aware of) is the number of tables to be
flushed; as Django's test suite has gotten bigger, the suite as a
whole has gotten exponentially slower, as more tests means more test
models to flush. However, this factor shouldn't affect an A/B test
between 1.2 and 1.3 of the same test suite.
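
If you want a feel for how much work a flush involves on your project, you can print the SQL it would issue for the default connection; a rough sketch along these lines (roughly what "manage.py sqlflush" shows) would be:

from django.core.management.color import no_style
from django.db import connection

# Sketch: list the tables on the default connection and the SQL that a
# flush would issue for them.
tables = connection.introspection.table_names()
sequences = connection.introspection.sequence_list()
statements = connection.ops.sql_flush(no_style(), tables, sequences)
print('%d tables, %d flush statements' % (len(tables), len(statements)))
for sql in statements:
    print(sql)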

Good luck narrowing this down some more :-)

Yours,
Russ Magee %-)

Cesar Canassa

Jun 24, 2011, 2:08:20 AM
to django...@googlegroups.com
I think I solved this! The reason the test cases were taking so long is that Django was not enabling transaction support, so the tests were falling back to the non-transactional, flush-based TestCase behavior.

My project has two databases in its test settings, but the 'log' one is not used anymore, so I didn't care whether it was created or not:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    },

    'log': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }
}

When the tests run, Django calls the following method:

def connections_support_transactions():
    """
    Returns True if all connections support transactions.  This is messy
    because 2.4 doesn't support any or all.
    """
    return all(conn.features.supports_transactions
        for conn in connections.all())

In Django 1.2.3 this method returns all([True, True]), but in Django 1.3 it returns all([True, None]), which is False.
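
You can see the per-connection values with a quick check like this (just a sketch; run it from inside a test, since the flag is only filled in while the test databases are being set up):

from django.db import connections

# Sketch: print what each configured connection reports. The 'log'
# connection shows None because its test database was never initialized.
for alias in connections:
    features = connections[alias].features
    print('%s: %r' % (alias, features.supports_transactions))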

The problem is that in my test settings I used the same database name, ':memory:', for both databases. When this happens, Django 1.2.3 initializes both databases, but Django 1.3 doesn't initialize the second one. I just had to change the names, and now the tests are running at their usual speed again!
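
Something like this is what the change amounts to (the 'log_memory' name below is just an illustrative value; it only has to differ from ':memory:'):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    },

    'log': {
        'ENGINE': 'django.db.backends.sqlite3',
        # Any name that differs from ':memory:' is enough for the test
        # setup to treat this as a separate database.
        'NAME': 'log_memory',
    }
}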

This was very hard to debug. Maybe Django should emit a warning in case someone else makes this mistake?


Thanks!
Cesar Canassa

Russell Keith-Magee

Jun 24, 2011, 2:24:01 AM
to django...@googlegroups.com

Ah - you didn't say you had two databases. There *were* some very
important changes made to the test setup mechanism for 1.3 when
multiple databases are involved, and they certainly explain the
behavior you're seeing.

Under 1.2.X, if you had multiple databases and multiple entries
pointing to the same database name, the test database would be
initialized twice. If you were using an on-disk database (rather than
:memory:), this would lead to the creation of *two* test databases,
instead of one test database with two connections. To make matters
worse, under certain conditions, the teardown process would cause the
*production* database to be dropped.

In 1.3, we use a combination of database name, host, port and engine
(and, in the case of Oracle, the username as well) to identify the
'unique' databases, and ensure that each of these test databases is
initialized only once; the connections are then patched to make sure
you have multiple connections to a single database.

So - it sounds like you might have found a bug, not just something
that needs to be documented. :memory: databases don't behave quite
like on-disk databases, but we don't have any special handling in
place to account for that fact.

If you could open a ticket for this, I'd be much obliged.

It sounds like you've found a temporary workaround; for the record,
another workaround would be to put *anything* into the HOST or PORT
settings for your database. That would be enough to mark the two
databases as distinct from the perspective of the test setup
mechanism.
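
In other words, something along these lines should also do the trick; SQLite never uses HOST, so the value is only there to make the two entries look different:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    },

    'log': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
        # SQLite never connects to a host; this dummy value only serves
        # to make the test setup treat 'log' as a distinct database.
        'HOST': 'log',
    }
}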

Yours,
Russ Magee %-)

Cesar Canassa

Jun 24, 2011, 3:03:14 AM
to django...@googlegroups.com
For the record, here is the ticket:


Best regards,
Cesar Canassa
