Brand new projects should pass their tests (the django.contrib.auth thing from #7611)


Simon Willison

Oct 6, 2009, 4:43:38 AM
to Django developers
One of the things that has been established at DjangoCon is that, as a
community, we don't have a strong enough culture of testing. This is
despite Django shipping with some good testing tools (TestClient and
the like). Anything we can do to make testing more attractive is a
Good Thing.

In my opinion, one of the biggest psychological barriers to testing is
this:

$ cd /tmp
$ django-admin.py startproject foo
$ cd foo
$ echo "DATABASE_ENGINE='sqlite3';DATABASE_NAME='data.db'" >> settings.py
$ python manage.py test
Creating test database...
Creating table auth_permission
...
EE..E...EEEEEEE..................
======================================================================
ERROR: test_password_change_fails_with_invalid_old_password
(django.contrib.auth.tests.views.ChangePasswordTest)
...
TemplateDoesNotExist: registration/password_change_form.html

On a brand new project, the tests fail. This has been discussed
before:

http://groups.google.com/group/django-developers/browse_thread/thread/ac888ea8567ba466
http://code.djangoproject.com/ticket/7611

The decision was to wontfix it, since the test failures are correct -
if you don't have those templates set up, the auth application won't
work. That certainly makes sense, but the larger issue remains: a
brand new project doesn't pass its tests out of the box.

A number of potential solutions come to mind:

1. Remove django.contrib.auth from the default set of INSTALLED_APPS
2. Add django.contrib.admin to the default set of INSTALLED_APPS so
that the templates are available
3. Ship the registration/ family of templates in the contrib.auth app
4. Modify the contrib.auth tests to first alter TEMPLATE_DIRS to
ensure the templates are available
5. Remove the views from contrib.auth and move them to
contrib.authviews - which isn't installed by default
6. Something else entirely

I quite like option 1 - I don't think including contrib.auth in brand
new projects is particularly useful. We can change the documentation
to note that if you want to add contrib.admin you'll need to include
contrib.auth as well.

I don't like option 2.

Option 3 is a fair bit more complicated than it sounds - we would need
to ensure that those templates can still be used by the admin but with
the admin's outer design applied (probably by using {% extends base %}
and having base be a variable that is independently twiddled with
somehow).

Option 4 would break the idea that the auth tests should tell you if
the templates are misconfigured.

Option 5 would add yet another dependency to the admin.

Option 6 would be welcome if anyone has any ideas.

I think this issue is well worth solving. If we DO solve it, we could
even think about adding some stuff about running "./manage.py test" to
the tutorial.

Cheers,

Simon

Richard Boulton

Oct 6, 2009, 4:58:59 AM
to django-d...@googlegroups.com
2009/10/6 Simon Willison <si...@simonwillison.net>

I think this issue is well worth solving. If we DO solve it, we could
even think about adding some stuff about running "./manage.py test" to
the tutorial.

I think this is probably the biggest thing you could do to make django testing more prevalent - I've read various tutorials and not noticed any mention of "./manage.py test" before: I've only just learnt about it thanks to your post here. :)

A tutorial section which went through best practices for setting up and running tests (ie, how do I make some useful test data and get it in my DB?, etc) would be very helpful.  But just mentioning that Django had test support would be 90% of the win.

Admittedly, I hadn't looked hard, and had I done so I'd have seen the section on testing on the front page of the documentation, but since I'd not had Django's test support pointed out, I'd just been using my standard unittest framework for testing.
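To make that concrete: the sort of bare-bones tests.py such a tutorial section could start from might look like the sketch below. It uses plain unittest so the snippet stands alone; django.test.TestCase is a subclass of unittest.TestCase, so the assertion style is identical.

```python
# tests.py - the kind of file "./manage.py test" picks up for an app.
import unittest


class SmokeTest(unittest.TestCase):
    """A first test: prove the test runner itself is wired up."""

    def test_the_truth(self):
        self.assertEqual(1 + 1, 2)
```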

--
Richard

Russell Keith-Magee

Oct 6, 2009, 8:25:44 AM
to django-d...@googlegroups.com
On Tue, Oct 6, 2009 at 4:43 PM, Simon Willison <si...@simonwillison.net> wrote:
>
> One of the things that has been established at DjangoCon is that, as a
> community, we don't have a strong enough culture of testing. This is
> despite Django shipping with some good testing tools (TestClient and
> the like). Anything we can do to make testing more attractive is a
> Good Thing.

Completely agreed on this point.

> In my opinion, one of the biggest psychological barriers to testing is
> this:

...

> The decision was to wontfix it, since the test failures are correct -
> if you don't have those templates set up, the auth application won't
> work. That certainly makes sense, but the larger issue remains: a
> brand new project doesn't pass its tests out of the box.

Again, completely agreed.

> A number of potential solutions come to mind:

...
> 6. Something else entirely

I think we need something else. In particular, I think we need to
address the problem at a slightly deeper level - we need to
acknowledge that we don't differentiate between application tests and
integration tests within Django's test framework.

To clarify my terms - an app test is a test that validates that the
application logic is correct. This means validating that the foo()
view does what the foo() view should do when it has a correctly
configured environment. An app test is entirely self contained - it
should require nothing from the containing project in order to pass.

On the other hand, an integration test validates that the app has been
correctly deployed into the containing project. This includes ensuring
that all required templates are available, required views are deployed
and required settings are correctly defined. Integration tests don't
validate the internal logic of the app - they only validate that the
environment is correctly configured.

The failing contrib.auth tests are strange beasts in this regard. On
the one hand, they are app tests that validate that the change
password view works as expected. However, they aren't self contained.
They require the existence of project-level template definitions to
work, so they also perform an integration testing role.

My original wontfix from #7611 was driven by a desire for the
contrib.auth tests to act as integration tests. However, in
retrospect, that isn't what they do. A deployment of contrib.auth
isn't _required_ to deploy the password change views, and if you don't
deploy those views, you don't need the templates either. To make
matters worse, the default empty project falls into this category of
'broken' configurations.

So - here's my suggestion for option 6, in two parts:

1. Reverse the decision of #7611, and make the current contrib.auth
test suite a genuine collection of app tests.

2. Add extra tests to validate the integration case. These tests
would be conditional, depending on exactly what has been deployed. For
example: one test would look to see if
contrib.auth.views.password_change has been deployed in the project.
If it has, then the test would try to GET the page. The test passes if
the page renders without throwing a missing template warning. However,
if the view isn't deployed, the test is either not run, or is passed
as a no-op. Essentially, the integration tests should be trying to
catch every way that you could misconfigure your deployment of an
application in a project.
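A conditional test of that shape might be sketched as follows. Note that reverse() and NoReverseMatch are stubbed out here so the snippet stands alone; a real integration test would import them from Django and GET the page with the test client instead of the final assertion.

```python
import unittest

# Stand-ins for django.core.urlresolvers.reverse / NoReverseMatch so the
# sketch runs on its own; a real test would use the Django versions.
class NoReverseMatch(Exception):
    pass

DEPLOYED_VIEWS = {"password_change": "/accounts/password_change/"}

def reverse(view_name):
    try:
        return DEPLOYED_VIEWS[view_name]
    except KeyError:
        raise NoReverseMatch(view_name)


class PasswordChangeIntegration(unittest.TestCase):
    def test_password_change_renders(self):
        try:
            url = reverse("password_change")
        except NoReverseMatch:
            return  # view not deployed in this project: pass as a no-op
        # View is deployed: a real test would GET the url and fail on a
        # missing template. Here we just check that the url resolved.
        self.assertTrue(url.startswith("/"))
```

The test passes either way; what changes is whether it actually exercises the deployment or degrades to a no-op.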

Once we have made these changes for contrib.auth, we should probably
revisit the rest of contrib to make sure the rest of the test suite
behaves as expected.

Making this happen will require two pieces of infrastructure:

* Firstly, we need a way to make app tests completely independent of
the project environment. We started down this path when we added
TestCase.urls, and the patch on #7611 adds another little piece of the
puzzle. What we really need is a way to address this consistently for
_all_ application settings - including those provided by the
application itself. However, it isn't as simple as creating a new
settings object, because some settings - such as database settings -
need to be inherited from the test environment. Doing this in a
consistent fashion may mean deprecating TestCase.urls, but I'm OK with
that.

* Secondly, we need to make sure that we can easily establish if
integration conditions need to be tested. reverse() already does much
of the job here, but some helpers to make it easy wouldn't go astray.
Consideration needs to be given to namespaces - should the
contrib.auth tests validate that admin correctly deploys the password
change view, or should the namespace boundary be considered the edge
of responsibility for contrib.auth integration tests?

Looking longer term, we could also look at marking individual test as
being 'app' or 'integration' - then, we could modify the test runner
so you can run the entire integration suite, or run the app suite for
contrib.auth. The benefit here is that most users have no reason to
run the contrib.auth app test suite - it should always pass as
shipped. However, Django's test runner would run these tests in order
to validate that the app works correctly.

This could possibly be achieved using TestSuite instances, combined
with the test execution capabilities of #11627. The contrib apps would
all define a suite() method that includes the integration tests,
plus an app_suite() method that returns all the app tests. ./manage.py
test auth would invoke the integration suite; ./manage.py test
auth.app_suite would invoke the app suite.
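In plain unittest terms, that split could be sketched like this - suite() and app_suite() are the hypothetical hooks from the proposal, not an existing Django API, and the test bodies are placeholders:

```python
import unittest


class AppTests(unittest.TestCase):
    """Self-contained tests of the app's own logic."""

    def test_view_logic(self):
        self.assertEqual(2 + 2, 4)


class IntegrationTests(unittest.TestCase):
    """Tests that the app is correctly wired into the project."""

    def test_templates_available(self):
        self.assertTrue(True)  # placeholder for a real template check


def app_suite():
    # What "./manage.py test auth.app_suite" would run.
    return unittest.defaultTestLoader.loadTestsFromTestCase(AppTests)


def suite():
    # What "./manage.py test auth" would run.
    return unittest.defaultTestLoader.loadTestsFromTestCase(IntegrationTests)
```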

> I think this issue is well worth solving. If we DO solve it, we could
> even think about adding some stuff about running "./manage.py test" to
> the tutorial.

This is orthogonal IMHO. There are many things that should be added to
the tutorial, including testing. The failing test issue is annoying,
but I don't think it should stop us from adding testing to the
tutorial list. In fact, this bug is a good argument in favour of
adding testing to the tutorial, as it gives us an obvious place to
address the issue for new users.

Yours,
Russ Magee %-)

Adam V.

Oct 6, 2009, 4:42:02 PM
to Django developers
In a Django project, I have a bash script that does:

APPS=`python -c "import settings; print settings.only_our_apps()"`
./manage.py test $APPS

In settings.py I define _OUR_APPS as a list, and then define
INSTALLED_APPS as the django built-ins plus _OUR_APPS:

_OUR_APPS = (
    'something',
    'anotherthing',
)

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.admin',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
) + _OUR_APPS

At the bottom of settings.py is a dinky function:
def only_our_apps():
    return ' '.join(_OUR_APPS)


This is all simply so I don't run the built-in tests every time I test.

On Oct 6, 5:25 am, Russell Keith-Magee <freakboy3...@gmail.com> wrote:
> On Tue, Oct 6, 2009 at 4:43 PM, Simon Willison <si...@simonwillison.net> wrote:
>
> > One of the things that has been established at DjangoCon is that, as a
> > community, we don't have a strong enough culture of testing. This is
> > despite Django shipping with some good testing tools (TestClient and
> > the like). Anything we can do to make testing more attractive is a
> > Good Thing.
>
> Completely agreed on this point.
>
>
>
> > In my opinion, one of the biggest psychological barriers to testing is
> > this:
> ..
> > On a brand new project, the tests fail. This has been discussed
> > before:
>
> >http://groups.google.com/group/django-developers/browse_thread/thread...

Kevin Teague

Oct 6, 2009, 6:50:53 PM
to Django developers


On Oct 6, 1:43 am, Simon Willison <si...@simonwillison.net> wrote:
>
> Option 6 would be welcome if anyone has any ideas.
>

Do what Grok does:

$ grokproject newapp
$ cd newapp
$ ./bin/test
Running tests at level 1
Total: 0 tests, 0 failures, 0 errors in 0.000 seconds.

That is, if it's a fresh project, and no code has been written, why
are there already tests?

Another way to make testing more popular in Django-based apps would be
to expand the intro tutorial to briefly show how a test is created and
run - that would certainly raise the awareness of the testing
facilities available. Or additionally inject some boilerplate test
suites into a new project when one runs 'django-admin.py
startproject' (e.g. the Grok and BFG project creation tools create a
bare-bones tests.py file in the root of the project directory).

(Incidentally, 'django-admin.py startproject' is probably overdue for
an overhaul ... why is there no setup.py file? Why is library code
being intermingled with scripts - manage.py should not be alongside an
__init__.py!)

Russell Keith-Magee

Oct 6, 2009, 7:37:06 PM
to django-d...@googlegroups.com
On Wed, Oct 7, 2009 at 6:50 AM, Kevin Teague <ke...@bud.ca> wrote:
>
> On Oct 6, 1:43 am, Simon Willison <si...@simonwillison.net> wrote:
>>
>> Option 6 would be welcome if anyone has any ideas.
>
> Do what Grok does:
>
> $ grokproject newapp
> $ cd newapp
> $ ./bin/test
> Running tests at level 1
> Total: 0 tests, 0 failures, 0 errors in 0.000 seconds.
>
> That is, if it's a fresh project, and no code has been written, why
> are there already tests?

As I indicated in my message, there are integration tests - we can
validate that the default collection of applications has been
installed correctly. This isn't an issue straight after startproject
is run, but any configuration by the end user could potentially break
something - for example, removing the app-based template loader would
effectively break the admin.

> Another way to make testing more popular in Django-based apps would be
> to expand the intro tutorial to briefly show how a test is created and
> run - that would certainly raise the awareness of the testing
> facilities available.

This is certainly on the TODO list. Volunteers welcome.

> Or additionally inject some boilerplate test
> suites into a new project when one runs 'django-admin.py
> startproject' (e.g. the Grok and BFG project creation tools create a
> bare-bones tests.py file in the root of the project directory).

As of v1.1, we do this, providing an example of a unit test and a doctest.

Yours,
Russ Magee %-)

Simon Willison

Oct 8, 2009, 5:52:41 AM
to Django developers
On Oct 6, 1:25 pm, Russell Keith-Magee <freakboy3...@gmail.com> wrote:
> I think we need something else. In particular, I think we need to
> address the problem at a slightly deeper level - we need to
> acknowledge that we don't differentiate between application tests and
> integration tests within Django's test framework.

That's a really interesting idea - I've been thinking about it over
the past couple of days and it does seem to make sense. I'm having
trouble imagining exactly how it would work, but I can see that tests
which identify problems with the user's configuration are useful, but
should absolutely be kept separate from those tests that verify that
the application works as a standalone chunk of code.

Are we talking about something like this?

./manage.py test # run application tests
./manage.py test --integration # run integration tests

>  1. Reverse the decision of #7611, and make the current contrib.auth
> test suite a genuine collection of app tests.
>
>  2. Add extra tests to validate the integration case. These tests
> would be conditional, depending on exactly what has been deployed. For
> example: one test would look to see if
> contrib.auth.views.password_change has been deployed in the project.
> If it has, then the test would try to GET the page. The test passes if
> the page renders without throwing a missing template warning. However,
> if the view isn't deployed, the test is either not run, or is passed
> as a no-op. Essentially, the integration tests should be trying to
> catch every way that you could misconfigure your deployment of an
> application in a project.

Conditional tests sound a little bit brittle to me - I think I prefer
the model where a user explicitly requests the integration tests to be
run.

> Making this happen will require two pieces of infrastructure:
>
>  * Firstly, we need a way to make app tests completely independent of
> the project environment. We started down this path when we added
> TestCase.urls, and the patch on #7611 adds another little piece of the
> puzzle. What we really need is a way to address this consistently for
> _all_ application settings - including those provided by the
> application itself. However, it isn't as simple as creating a new
> settings object, because some settings - such as database settings -
> need to be inherited from the test environment.

This is really interesting, because it fits in with my personal
vendetta against settings.py as a whole - the single reason I dislike
settings.py is that I often find myself wanting to use different
settings for different parts of the same application. Running tests
feels like an extension of that desire (and is in many ways a more
obvious use case).

>  * Secondly, we need to make sure that we can easily establish if
> integration conditions need to be tested. reverse() already does much
> of the job here, but some helpers to make it easy wouldn't go astray.

As someone who likes to experiment with alternative ways of
constructing an application (see djng) I'm still not a big fan of
reverse, which tightly couples me to defining all of my URLs in
urls.py with references to real solid functions (rather than re-
dispatching through whatever crazy mechanism takes my fancy on that
particular day). Again, it might turn out that there's a nicer
alternative to integration tests that attempt to run conditionally on
configuration. I'm very unclear on how any of this would work though.

It sounds to me like splitting tests into application vs.
integration tests is going to be a lot of work - work that's worth
doing, but it's not going to be a quick fix. I'd like to find a quick
temporary fix for the auth-tests-fail-by-default problem that can at
least resolve this particular issue rather than holding out for a
large scale refactoring of the test framework. Shipping default
registration templates with the contrib.auth app feels like the easiest
option here, especially since default templates would be a useful
feature of that part of the framework anyway.

If we were to ship default templates, simply using {% extends base %}
and providing an argument to the generic views that allows the user to
specify their own base template would be enough to ensure easy
integration with a custom design (and hence work around what I assume
is the reason we didn't ship default templates in the first place).
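For illustration, such a shipped password-change template might be little more than the following sketch (the `base` context variable and the idea of the view supplying it are hypothetical, per the proposal above):

```
{% extends base %}

{% block content %}
  <form method="post" action="">
    {{ form.as_p }}
    <input type="submit" value="Change my password">
  </form>
{% endblock %}
```

with the view accepting something like a base_template argument (defaulting to "base.html") and passing it into the context as base.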

Does that sound reasonable?

Simon

Paul McLanahan

Oct 8, 2009, 8:53:36 AM
to django-d...@googlegroups.com
On Thu, Oct 8, 2009 at 5:52 AM, Simon Willison <si...@simonwillison.net> wrote:
> If we were to ship default templates, simply using {% extends base %}
> and providing an argument to the generic views that allows the user to
> specify their own base template would be enough to ensure easy
> integration with a custom design (and hence work around what I assume
> is the reason we didn't ship default templates in the first place).

I like default templates. Even if they're only ever directly rendered
by the tests, they're a great way to provide an example of how to use
the app's context, and a nice jumping off point for people to copy
into their project's templates dir and modify. I certainly don't want
to see Django core start coming with a default design or set of
templates, but I do like the idea of a minimal set for the contrib
apps where appropriate and required for testing.

IIRC, the reason I've most often heard for not including default
templates is the fear (I believe well founded) that there would be a
flood of tickets from people who will want them to "just work" with
their design, and will beg for all sorts of bizarre options. Perhaps
we could assuage this by making it clear that these are purely for
testing, but may be copied as a jumping-off point or viewed as an
example. This could be accomplished by putting them in a "test" folder
in the app's templates directory, and passing that as the custom
template argument in the tests. That way there still wouldn't be an
"official" default template that people would expect to "just work",
but the tests would still pass.

Thanks,

Paul

Russell Keith-Magee

Oct 8, 2009, 10:58:50 AM
to django-d...@googlegroups.com
On Thu, Oct 8, 2009 at 5:52 PM, Simon Willison <si...@simonwillison.net> wrote:
>
> On Oct 6, 1:25 pm, Russell Keith-Magee <freakboy3...@gmail.com> wrote:
>> I think we need something else. In particular, I think we need to
>> address the problem at a slightly deeper level - we need to
>> acknowledge that we don't differentiate between application tests and
>> integration tests within Django's test framework.
>
> That's a really interesting idea - I've been thinking about it over
> the past couple of days and it does seem to make sense. I'm having
> trouble imagining exactly how it would work, but I can see that tests
> which identify problems with the user's configuration are useful, but
> should absolutely be kept separate from those tests that verify that
> the application works as a standalone chunk of code.
>
> Are we talking about something like this?
>
> ./manage.py test # run application tests
> ./manage.py test --integration # run integration tests

Possibly. At the simplest level, it just means that we acknowledge the
role played by various tests - i.e., when we build a test, we
acknowledge that it _is_ an app test, that it should run independent of
the project environment, and any failure to do so is a bug in the
test.

We certainly _could_ follow this up by providing a way to invoke
app/integration tests separately, but I can think of a philosophical
reason and a technical reason why we shouldn't.

Philosophically - tests are meant to be run. Providing tools to _not_
run tests strikes me as counterproductive.

Technically - In order to have two subsets of tests, you need to be
able to mark a test as being either "integration" or "app" - either
by direct test annotation, or providing a list of tests that fit into
each category. As far as I can make out, it's impossible to implement
this sort of functionality without either:
1) Duplicating parts of Nose (et al) inside Django. For example,
writing a "this is an integration test" decorator duplicates
functionality provided by other test frameworks.
2) Utilizing techniques that are specific to unittest, and would be
problematic with any non-unittest test runner. For example, we could
use TestSuites to define an app suite and an integration suite that
unittest could use, but AFAIK, Nose can't run TestSuites.

>>  1. Reverse the decision of #7611, and make the current contrib.auth
>> test suite a genuine collection of app tests.
>>
>>  2. Add extra tests to validate the integration case. These tests
>> would be conditional, depending on exactly what has been deployed. For
>> example: one test would look to see if
>> contrib.auth.views.password_change has been deployed in the project.
>> If it has, then the test would try to GET the page. The test passes if
>> the page renders without throwing a missing template warning. However,
>> if the view isn't deployed, the test is either not run, or is passed
>> as a no-op. Essentially, the integration tests should be trying to
>> catch every way that you could misconfigure your deployment of an
>> application in a project.
>
> Conditional tests sound a little bit brittle to me - I think I prefer
> the model where a user explicitly requests the integration tests to be
> run.

That's not what the condition is for. Here's the use case:

Project 1: You have a project that uses contrib.auth so that you have
access to the User model. You also deploy the password reset views so
users can easily reset their password.

Project 2: You're still using the User model, but you're using your
own mechanism for performing password resets. Therefore, your project
won't deploy the builtin password reset views.

The integration test for Project 1 needs to validate that the password
reset templates are available. The integration test for Project 2
doesn't need to perform this check.

The "conditional test" would work something like this:

if view is deployed:
    run integration test for view
else:
    pass test as a no-op

So, you'd always _run_ the test, but it would be a no-op if the
integration requirement that the test validates wasn't required.
Essentially, it's the test itself that is changing, depending on the
environment in which the app is deployed.

The other way to do this would be to play tricks on the test suite
generator to hide the integration test when it isn't required, but
this puts us back into the game of trying to build test tools and
duplicate the capabilities of Nose et al.

>> Making this happen will require two pieces of infrastructure:
>>
>>  * Firstly, we need a way to make app tests completely independent of
>> the project environment. We started down this path when we added
>> TestCase.urls, and the patch on #7611 adds another little piece of the
>> puzzle. What we really need is a way to address this consistently for
>> _all_ application settings - including those provided by the
>> application itself. However, it isn't as simple as creating a new
>> settings object, because some settings - such as database settings -
>> need to be inherited from the test environment.
>
> This is really interesting, because it fits in with my personal
> vendetta against settings.py as a whole - the single reason I dislike
> settings.py is that I often find myself wanting to use different
> settings for different parts of the same application. Running tests
> feels like an extension of that desire (and is in many ways a more
> obvious use case).

Vendettas are fine and all, but they don't really help solve the
problem. We need code to do that :-)

Most of the problem here comes down to a clean way of representing the
requirements for the test. We need to be able to:
* Mark some settings as inherited from the environment
* Reset some settings to global defaults
* Provide local values for some settings

Here's a suggestion for syntax:

class MyTest(TestCase):
    preserved_settings = [
        'DATABASE_NAME',
        'DATABASE_BACKEND',
    ]
    test_settings = {
        'TEMPLATE_DIRS': (...),
    }

This creates a fresh Settings object, using the existing system values
for the keys named in 'preserved_settings', the specific values
provided in 'test_settings', and global defaults otherwise. This
settings object is installed for the duration of the test, then gets
swapped out on cleanup.

I'm not even certain that preserved_settings is required - this is
likely to be a fairly constant list (e.g., just the database
settings), so we might be able to get away with just a settings
dictionary to define local overrides.
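The mechanics of that swap can be sketched in plain Python - the class and dict here stand in for Django's real settings object, and all names are hypothetical:

```python
# Sketch of the proposed per-test settings swap: start from global
# defaults, inherit only the named settings from the live environment,
# apply the test's local overrides, and restore everything on exit.
GLOBAL_DEFAULTS = {
    "DATABASE_ENGINE": "",
    "DATABASE_NAME": "",
    "TEMPLATE_DIRS": (),
    "DEBUG": False,
}


class SwappedSettings:
    def __init__(self, live, preserved_settings, test_settings):
        self.live = live                     # the active settings dict
        self.preserved = preserved_settings  # keys inherited from the env
        self.overrides = test_settings       # local values for this test

    def __enter__(self):
        self.saved = dict(self.live)
        fresh = dict(GLOBAL_DEFAULTS)
        for key in self.preserved:
            fresh[key] = self.saved[key]
        fresh.update(self.overrides)
        self.live.clear()
        self.live.update(fresh)
        return self.live

    def __exit__(self, *exc_info):
        self.live.clear()
        self.live.update(self.saved)
```

Inside the with block a test sees global defaults plus its own overrides, with only the database settings carried over; on exit the project's settings come back untouched.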

> It sounds to me like splitting tests in to application v.s.
> integration tests is going to be a lot of work - work that's worth
> doing, but it's not going to be a quick fix. I'd like to find a quick
> temporary fix for the auth-tests-fail-by-default problem that can at
> least resolve this particular issue rather than holding out for a
> large scale refactoring of the test framework. Shipping default
> registration tests with the contrib.auth app feels like the easiest
> option here, especially since default templates would be a useful
> feature of that part of the framework anyway.

Agreed that this isn't going to be a simple change.

> If we were to ship default templates, simply using {% extends base %}
> and providing an argument to the generic views that allows the user to
> specify their own base template would be enough to ensure easy
> integration with a custom design (and hence work around what I assume
> is the reason we didn't ship default templates in the first place).
>
> Does that sound reasonable?

The biggest reason I'm aware of for not shipping default templates is
that they are in no way defaults. They are, at best, examples of the
template snippet you need to put in your own template. I'm yet to see
a shipped default template for an app that didn't need heavy local
modification to suit local site layout. Shipping a "default" template
suggests a level of pluggability that simply doesn't exist in my
experience.

Regardless, default templates won't fix the problem - at least, not
completely. All you need is an end-user project that comments out the
app template loader, and the default templates won't load, and the
tests will fail again. Granted, this is becoming a slight edge case,
but if we're going to fix the problem, let's fix the problem. We don't
need to address the full problem of identifying integration tests, but
we should solve the basic conditions required for a successful app
test (i.e., defining a clean settings module).

Russ %-)

Emil Stenström

Nov 7, 2009, 2:58:01 PM
to Django developers
Found this message through Google, asked a couple of questions on IRC,
and got the suggestion to use a TEST_RUNNER that excludes the contrib
tests for you. Here's the code:

settings.py
---------------------
OUR_APPS = (
    'something',
    'another',
)
INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.admin',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
) + OUR_APPS

TEST_RUNNER = 'local_tests.run_tests'


local_tests.py:
---------------------
from django.test.simple import run_tests as default_run_tests
from django.conf import settings

def run_tests(test_labels, *args, **kwargs):
    return default_run_tests(settings.OUR_APPS, *args, **kwargs)


To run the tests:
---------------------
manage.py test

Feels like a pretty good solution imo!

--
Emil Stenström
http://friendlybit.com

On Oct 6, 9:42 pm, "Adam V." <fla...@gmail.com> wrote:
> In a Django project, I have a bash script that does:
>
> APPS=`python -c "import settings; print settings.only_our_apps()"`
> ./manage.py test $APPS
>
> In settings.py I define OUR_APPS as a list, and then define