Hi all,
So, I've been working on a Django branch [1] to implement the approach
to pluggable user models that was decided upon earlier this year [2].
[1] https://github.com/freakboy3742/django/tree/t3011
[2] https://code.djangoproject.com/wiki/ContribAuthImprovements
The user-swapping code itself is coming together well. However, while
writing tests I've hit a snag that may require a little yak shaving.
The problem is this: with pluggable auth models, you lose certainty
about exactly which User model is present, which makes it much harder
to write some tests, especially ones that need to keep working in an
unpredictable end-user testing environment.
With regards to pluggable Users, there are three types of tests that can be run:
1) Contrib.auth tests that validate that the internals of
contrib.auth work as expected
2) Contrib.auth tests that validate that the internals of
contrib.auth work with a custom User model
3) Contrib.auth tests that validate that the currently specified user
model meets the requirements of the User contract
The problem is that, because of the way syncdb works during testing,
some of these tests are effectively mutually exclusive. The test
framework runs syncdb at the start of the test run, which sets up the
models that will be available for the duration of testing -- which
then constrains the tests that can actually be run.
This doesn't affect the Django core tests so much; Django's tests will
synchronise auth.User by default, which allows tests of type 1 to run.
The test suite can also provide a custom User model and use @override_settings to
swap in that model as required for tests of type 2. Tests of type 3
are effectively integration tests which will pass with *any* interface
compliant User model.
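
To make type 2 concrete, here's a rough sketch of what such a test
could look like. This is not code from the branch: the AUTH_USER_MODEL
setting name, the get_user_model() accessor, and the CustomUser model
are placeholders for whatever the patch ends up exposing.

    # Sketch only. Assumes the active model is exposed via a setting
    # (called AUTH_USER_MODEL here) and an accessor (called
    # get_user_model() here), and that a CustomUser model exists in the
    # auth test app.
    from django.contrib.auth import get_user_model
    from django.test import TestCase
    from django.test.utils import override_settings

    @override_settings(AUTH_USER_MODEL='auth.CustomUser')
    class CustomUserInternalsTests(TestCase):
        """A "type 2" test: contrib.auth internals against a custom model."""

        def test_user_model_is_swapped(self):
            # With the setting overridden, the accessor should hand back
            # the custom model rather than the default auth.User.
            self.assertEqual(get_user_model()._meta.object_name, 'CustomUser')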
However, if I have my own project that includes contrib.auth in
INSTALLED_APPS, ./manage.py test will attempt to run *all* the tests
from contrib.auth. If I have a custom User model in play, that means
that the tests of type 1 *can't* pass, because auth.User won't be
synchronised to the database. I can't even use @override_settings to
force auth.User into use -- the opportunity for syncdb to pick up
auth.User has passed.
We *could* just mark the affected tests that require auth.User as
"skipUnless(user model == auth.User)", but that would mean some
projects would run the tests, and some wouldn't. That seems like an
odd inconsistency to me -- the tests either should be run, or they
shouldn't.
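
For concreteness, a guard along those lines might look roughly like
this; the setting name is again a placeholder, and the test body simply
assumes auth.User was synchronised.

    # Sketch of the skip-based approach. settings.AUTH_USER_MODEL stands
    # in for however the active model ends up being named; 'auth.User'
    # is the default.
    from unittest import skipUnless

    from django.conf import settings
    from django.test import TestCase

    @skipUnless(getattr(settings, 'AUTH_USER_MODEL', 'auth.User') == 'auth.User',
                "auth.User is not the active User model in this project")
    class DefaultUserInternalsTests(TestCase):
        """A "type 1" test that only makes sense against the stock auth.User."""

        def test_default_user_table_is_present(self):
            from django.contrib.auth.models import User
            # If auth.User was synchronised, a simple query should succeed.
            self.assertEqual(User.objects.count(), 0)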
In thinking about this problem, it occurred to me that what is needed
is for us to finally solve an old problem with Django's testing -- the
fact that there is a difference between different types of tests.
There are tests in contrib.auth that Django's core team needs to run
before we cut a release, and there are integration tests that validate
that when contrib.auth is deployed in your own project, it will
operate as designed. The internal tests need to run against a clean,
known environment; the integration tests must run against your
project's native environment.
Thinking more broadly, there may be other categories -- "smoke tests"
for a quick sanity check that a system is working; "interaction tests"
that run live browser tests; and so on.
Python's unittest library contains the concept of Suites, which seems
to me like a reasonable analog of what I'm talking about here. What is
missing is a syntax for executing those suites, and maybe some helpers
to make it easier to build those suites in the first place.
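
As a rough illustration (with placeholder class and function names),
stock unittest already lets you compose named suites like this; the
missing piece is a blessed way to ask ./manage.py test for one of them.

    # Placeholder test classes standing in for the two categories.
    import unittest

    class InternalAuthTests(unittest.TestCase):
        """Stands in for tests that need a clean, known environment."""
        def test_internals(self):
            self.assertTrue(True)

    class AuthContractTests(unittest.TestCase):
        """Stands in for integration tests that must pass with any
        interface-compliant User model."""
        def test_contract(self):
            self.assertTrue(True)

    def release_suite():
        # The suite the core team would run before cutting a release.
        return unittest.defaultTestLoader.loadTestsFromTestCase(InternalAuthTests)

    def integration_suite():
        # The suite an end-user project's test run would exercise.
        return unittest.defaultTestLoader.loadTestsFromTestCase(AuthContractTests)

    if __name__ == '__main__':
        unittest.TextTestRunner(verbosity=2).run(integration_suite())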
I don't have a concrete proposal at this point (beyond the high level
idea that suites seem like the way to handle this). This is an attempt
to feel out community opinion about the problem as a whole. I know
there are efforts underway to modify Django's test discovery mechanism
(#17365), and there might be some overlap here. There are also a range
of tickets relating to controlling the test execution process (#9156,
#11593), so there has been plenty of thought put into this general
problem in the past. If anyone has any opinions or alternate
proposals, I'd like to hear them.
Yours,
Russ Magee %-)