Does anybody have a best practice (or not-too-annoying practice) for
testing?

I think I'll use Selenium to test my site, but how should I change to
a test database, populate the test database, etc.?

Any insight appreciated,
Todd
> For unit testing, you don't really need a database. You can always
> mock the database calls, because you're not testing the data or the
> database (if postgres is up, the database is working), you're
> testing your code's behavior to the data. When I first started
> using testing methods, I was obsessed with having perfect test
> data, but quickly realized what I really want is perfect code. I
> don't really mean perfect, but the code should always be better
> than the data.
Except that a lot of what I'm doing has interacting pages. If user #1
makes a change in a page, user #2 should see the output differently
when they visit another page. (For example, a teacher creates an
assignment and a student then sees the new assignment when they go to
the class home page.) So I'd really like to have a database backend
for testing so that I can actually run the site through its paces
with some test data to see if everything works the way it should. (In
other words, make sure that I put stuff where it's supposed to be and
get it out correctly.)
I've been trying to figure out how I could have two settings files,
one for the file I use when I'm just playing around with the site
during development and another for testing that would start clean
each time I run the tests. I thought about copying the settings.py
file over and using the --settings= option to manage.py when I do
runserver, but then I'd have to make changes to both files whenever I
add a new app or anything like that.
What I'd ideally like to do is create a testsettings.py module that
would contain all the stuff in settings.py, but would override
DATABASE_NAME, DATABASE_USER, etc. It seems like, given Python's
amazing powers of introspection and the existence of functions like
globals() and locals(), such a thing should be possible. I haven't
found it, however.
Does anyone know of a way to get all attributes of a module into
another module while overriding just a few?
Or am I making this too difficult?
Todd
I think you want to create something like testsettings.py, and in that
file do something like:
from myproject.settings import *
then override the specific settings you want. It won't work with
manage.py, but your tests should be able to just set the
DJANGO_SETTINGS_MODULE env variable to use the myproject.testsettings
module.
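For example, a minimal testsettings.py might look something like this
(the database values below are only placeholders):

from myproject.settings import *

# Override just the settings that differ for the test run; everything
# else comes through the star import above.
DATABASE_ENGINE = 'sqlite3'
DATABASE_NAME = '/tmp/test.db'
DATABASE_USER = ''
DATABASE_PASSWORD = ''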
Joseph
I'd do it the other way round, leave the settings module to be the main
settings, then do something like:
try:
    from myproject.localsettings import <LIST_OF_DATABASE_SETTINGS>
except ImportError:
    <DEFAULT_DATABASE_SETTINGS>
Then just create a localsettings.py on the live and dev instances that
have the right settings in it. I'm currently using this style of setup
for a project, and it seems to work quite well, even using sqlite3 on
the dev server and postgres on the live server.
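Concretely, the fallback in settings.py could look something like this
(the values are made up; localsettings.py on each machine just defines
the same names with that machine's real settings):

try:
    from myproject.localsettings import DATABASE_ENGINE, DATABASE_NAME, \
        DATABASE_USER, DATABASE_PASSWORD
except ImportError:
    # No per-machine overrides found -- fall back to a local sqlite3 file.
    DATABASE_ENGINE = 'sqlite3'
    DATABASE_NAME = '/tmp/dev.db'
    DATABASE_USER = ''
    DATABASE_PASSWORD = ''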
Cheers,
--
Brett Parker
> Does anybody have a best practice (or not-too-annoying practice) for
> testing?
>
> I think I'll use Selenium to test my site, but how should I change to
> a test database, populate the test database, etc.?
This is an area where Django can learn a huge amount from Ruby on
Rails - they've solved a whole lot of the pain points with regard to
testing a database-driven app. The two features that would be
most relevant here are fixtures and a testing environment.
Rails lets you configure a separate database for testing (in fact you
can have one for development, one for testing and one for deployment)
- unit tests automatically run against the test DB.
Fixtures are YAML files containing default data which is loaded into
your test DB at the start of every test and reset afterwards. I'm not
overjoyed with YAML for this (it's a little verbose) but it does the
job and is very friendly to human editing, which is exactly why they
picked it.
A neat trick for Django would be a command line tool that can dump an
existing database into fixture format - YAML or JSON or even
serialized Python objects. This could serve a dual purpose - at the
moment migrating Django application data from, for example, postgres
to mysql requires custom hacking (even though django.db lets you
interchange databases themselves with ease). Having a database-
neutral backup/dump format would provide a tool for doing exactly that.
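As a very rough sketch of the idea (not an existing command - the
function name here is made up, and simplejson is assumed to be
available), such a dump might boil down to something like:

import simplejson
from django.db import models

def dump_app_to_json(app_label):
    # Walk every model in the app and turn each row into a plain dict,
    # so the result is independent of the database backend.
    app = models.get_app(app_label)
    records = []
    for model in models.get_models(app):
        for obj in model._default_manager.all():
            fields = dict((f.attname, getattr(obj, f.attname))
                          for f in model._meta.fields)
            records.append({'model': model.__name__, 'fields': fields})
    # default=str copes with dates and other non-JSON-native values.
    return simplejson.dumps(records, indent=2, default=str)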
For the moment Django's model tests demonstrate a reasonable way of
doing this stuff, but they aren't first class citizens of the Django
environment - you have to do a bit of leg work to get that kind of
thing set up for your own project. Fixing this would be another
feather in Django's cap.
Cheers,
Simon
> I think you want to create something like testsettings.py, and in that
> file do something like:
>
> from myproject.settings import *
>
> then override the specific settings you want. It won't work with
> manage.py, but your tests should be able to just set the
> DJANGO_SETTINGS_MODULE env variable to use the myproject.testsettings
> module.
You can also use the ./manage.py --settings=myproject.testsettings
command-line flag if you don't want to mess around with your
environment variables.
Cheers,
Simon
You might like twill as well: http://twill.idyll.org/
http://agiletesting.blogspot.com/2005/09/web-app-testing-with-python-part-3.html
--
Jeroen Ruigrok van der Werven
That did exactly what I was looking for. Now I just have to figure
out the most efficient way to pre-populate everything. I'm thinking
I'll have a 'test' app that has hooks for sticking in data that I can
call before I run tests.
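(For instance - just a sketch of what I have in mind, with made-up
model names - the 'test' app could expose a populate() function that
gets called before the tests start:)

# testapp/populate.py -- hypothetical example, model names are made up
from myproject.school.models import Teacher, Assignment

def populate():
    # Insert a small, known data set that the Selenium tests can rely on.
    teacher = Teacher(name='Ms. Example')
    teacher.save()
    Assignment(teacher=teacher, title='First homework').save()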
Thanks everybody!
Todd
You should check out
https://simon.bofh.ms/cgi-bin/trac-django-projects.cgi/wiki/DjangoTesting
for django fixtures and unittest framework. The patch in
https://simon.bofh.ms/cgi-bin/trac-django-projects.cgi/ticket/226
updates it for the latest django trunk. The only thing that isn't
updated for MR is django-test.py, which in my project I've replaced
with a simpler script, a fragment of which is below:
import os
import sys
import unittest
def runtests(applications):
    from django.conf import settings
    from stuff.testing import gather_testcases

    for app_label in applications:
        path = None
        mod = __import__(app_label, {}, {}, [app_label])
        if app_label in settings.INSTALLED_APPS:
            path = os.path.dirname(mod.__file__)
        if path is None:
            print "Error: did you add the application to your INSTALLED_APPS?"
            sys.exit(8)
        testsuite = gather_testcases(os.path.join(path, 'test', 'unittests'))
        runner = unittest.TextTestRunner()
        runner.run(testsuite)
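(A hypothetical invocation, once DJANGO_SETTINGS_MODULE points at the
test settings, is just something like runtests(['myproject.myapp']),
where the names match entries in INSTALLED_APPS.)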
Just a little teaser:
I've found a nice approach to testing your views. The problem is that the HTTP response is hard to test, since you have to either scrape the interesting content out of it or use regexps. Neither is really nice.
My approach does not check the actual http response, but the context that the view passes to the template. This means:
- changes in the template don't affect your tests
- changes in the tests are much easier to handle than with screen scraping
- test cases are easy to read, since you see all the data passed in and out
During a test run:
- you need to insert a special template tag ("ContextKeeper") into your base site template (see below)
- an HttpRequest gets created and run through Django via BaseHandler.get_response()
- the ContextKeeper fetches the view context and saves it into thread storage
- this is then used to compare the context to the expected context.
- In the context, objects are replaced by their __repr__(), but lists, dictionaries, sets, and tuples stay as they are
- You can ignore context entries that you're not interested in, like all the "LANGUAGE_*" entries
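Roughly, the ContextKeeper boils down to something like this (a
simplified sketch - the tag and helper names here are only
illustrative, and the merging of the context layers is glossed over):

# contextkeeper.py -- illustrative sketch of the idea described above
import threading
from django import template

register = template.Library()
_storage = threading.local()

class ContextKeeperNode(template.Node):
    def render(self, context):
        # Flatten the context layers into one dict and stash it in
        # thread-local storage so the test runner can pick it up later.
        merged = {}
        for layer in context.dicts:
            merged.update(layer)
        _storage.context = merged
        return ''  # the tag renders nothing

def contextkeeper(parser, token):
    return ContextKeeperNode()
register.tag('contextkeeper', contextkeeper)

def get_saved_context():
    # Called by the test runner after BaseHandler.get_response() returns.
    return getattr(_storage, 'context', None)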
In code, it looks like this:
def test_portal_initial():
    # test case for GET /mailadmin/portal/
    response, context = runner.get_response('/mailadmin/portal/', 'GET', {}, {})
    assert runner.check_context(response, context, {
        'domain_title': None,
        'domains': [],
        'get_new_mailbox_url': 'create-mailbox/',
        'kunden': ['<Kunde: xxxxx>'],
        'mailbox_count': 0L,
        'mailboxes': [],
        'mailrule_count': 1L,
        'messages': [],
        'person': '<Person: xx, Herr Martin H. X.>',
        'rules': ['<Mailrule: (smtp_route_mx) xxxxx.de -> mx.xxxxx.de>'],
        'single_kunde': True,
        'too_many_domains': False,
        'too_many_mailboxes': False,
        'too_many_rules': False,
        'user': '<User: mir>'})
    assert runner.check_response(response, {'status_code': 200, 'cookies': SimpleCookie('')})
This can then be used with py.test.
Then, there's a special middleware that writes the test cases for you while you use the browser. For each HTTP transaction, it creates a test case. Cut and paste, a little bit of tailoring and organizing, and you're done.
I've also solved the problems of initializing the test database from a set of dicts, comparing dictionaries, handling sessions, and so on. I'm already using it and having great fun. It's still a little raw, but I plan to contribute it soon.
Then, I've found a way to combine py.test with doctests, including database setup. I use this to test all the functions that are not views.
Michael