I'm proposing a "--with-fixture" flag to django-admin.py, so that you
could do something like this:
django-admin.py runserver --with-fixture=mydata.json
With this command, Django would:
* Create a new test database (following the TEST_DATABASE_NAME setting).
* Import the fixture data into that fresh test database (just as the
unit test framework does).
* Change the DATABASE_NAME setting in memory to point to the test database.
* Run the development server, pointing at the test database.
* Delete the test database when the development server is stopped.
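For concreteness, here's a rough sketch of how those steps could hang
together (this leans on django.test.utils.create_test_db /
destroy_test_db and call_command; treat the exact signatures as
assumptions on my part, not a spec):

    # Sketch only -- assumes django.test.utils.create_test_db /
    # destroy_test_db and django.core.management.call_command behave
    # as described; exact signatures may differ.
    from django.conf import settings
    from django.core.management import call_command
    from django.test.utils import create_test_db, destroy_test_db

    def run_with_fixture(fixture):
        old_name = settings.DATABASE_NAME
        # Create the test database; this also repoints
        # settings.DATABASE_NAME at it in memory.
        create_test_db()
        try:
            # Load the fixture, just as the unit-test framework does.
            call_command('loaddata', fixture)
            # Run the development server against the test database.
            call_command('runserver', use_reloader=False)
        finally:
            # Drop the test database when the server stops.
            destroy_test_db(old_name)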
The main benefit of this feature is that it would let developers poke
around their fixture data and, essentially, walk through unit tests
manually (in a Web browser rather than programmatically).
Sure, developers could just set up a separate database, with a
separate settings file, but there are at least two problems with that.
One, it requires a fair amount of overhead, which would discourage
people from doing it. Two, it violates DRY -- if the fixture data is
already available in a fixture file, why not just use that?
In the future, the next step would be to keep track of any database
changes and optionally serialize them back into the fixture when the
server is stopped. This would let people add fixture data through
their Web application itself, whether it's the Django admin site or
something else. Now *that* would be cool and useful! But I'm only
proposing the first part of this for now.
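That writeback might look roughly like the sketch below. (dump_fixture
is a name I've made up, and serializing every model wholesale glosses
over the hard part: deciding which objects actually belong in the
fixture.)

    # Hypothetical writeback sketch; dump_fixture is a made-up helper.
    # Naively serializes *everything* -- picking which objects belong
    # back in the fixture is the hard part.
    from django.core import serializers
    from django.db import models

    def dump_fixture(path, format='json'):
        objects = []
        for model in models.get_models():
            objects.extend(model._default_manager.all())
        out = open(path, 'w')
        try:
            out.write(serializers.serialize(format, objects))
        finally:
            out.close()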
I'm happy to implement this. I just wanted to get some opinions on it
before I start.
Adrian
--
Adrian Holovaty
holovaty.com | djangoproject.com
I think you're reading my mind -- I was thinking about this just the
other day. It would really, really help... and the "writeback" idea
(serializing the data back into a fixture when the web server closes)
would be super cool as well.
A big +1 from me.
Jacob
Since the test runner already offers to delete a test database if one
already exists, why not just let the one you created with
--with-fixture hang around until you run a new command that needs to
get rid of it?
Otherwise, this is a great idea that I currently do manually with
dropdb; createdb; manage.py loaddata; manage.py runserver.
Todd
I agree. Either this or dumping the results to a new file before nuking
the database. Poking around after the event is a pretty handy debugging
technique for "wtf?!" moments. More than once I've gone into the
runtests.py file and changed SQLite's behaviour to keep the test
database around for post-mortem work. I suspect I'd want this even
more if I could also poke at the website (or when running Selenium
tests).
Otherwise, I'm +1 as well.
Regards,
Malcolm
--
What if there were no hypothetical questions?
http://www.pointy-stick.com/blog/
unittests, or doctests? Walking through unittests is already viable;
doctests, not really (or at least not even remotely pleasant).
> In the future, the next step would be to keep track of any database
> changes and optionally serialize them back into the fixture when the
> server is stopped. This would let people add fixture data through
> their Web application itself, whether it's the Django admin site or
> something else. Now *that* would be cool and useful! But I'm only
> proposing the first part of this for now.
That one actually worries me a bit; personally, I view doctests as
great for *verifying* documentation, but when they're used as API
tests (as django does), they have two failings imo:
1) doctests are really a pita if you've grown accustomed to the
capabilities of normal unittests; further, they grow unwieldy rather
quickly- a good example would be regressiontests/forms/tests.py;
I challenge anyone to try debugging a failure around line 2500 on that
beast; sure, you can try chunking everything leading up to that point,
but it still is a colossal pain in the ass to do so when compared to
running
trial snakeoil.test.test_mappings.FoldingDictTest.testNoPreserver
Basically, I'm strongly of the opinion that doctests, while easy to
create, aren't all that useful for real assertions when something
breaks; at the very least, the ability to run a specific failing test
(the above trial command) is damn useful, but not viable with
doctests.
2) Adding functionality to automatically collect/serialize a stream
of interpreter commands, while shiny, is going to result in a lot of
redundancy in the tests- I strongly suspect it will just build up a
boatload of random snippets, many hitting the same code in the same
way; while brute-forcing the test source is one way to hunt for bugs,
actual tests written by hand (with an eye on the preexisting tests)
are a bit saner imo.
I realize django devs take the stance that "doctest/unittest,
whatever, it's tests", but I'm not sure I'd recommend that route.
Debugging tool, hey, kick ass; serialization bit, eh, if it scratches
your itch...
~harring
I think there might be some misunderstanding here - I believe he's
saying to merely dump the fixture out again using the new database with
changes - so if you start runserver off foo.json and edit the database,
those edits will then be present in foo.json on exit, so you don't have
to manually load/edit/dump.
Not to do some sort of logging of python actions into a test somehow :)
--
Collin Grady
I use technology in order to hate it more properly.
-- Nam June Paik
And vice-versa. You're going to find that opinions vary quite a bit
here, from the "unittests are the new sliced bread" camp to the
"doctests are teh k00l" camp to people who don't mind (or possibly
enjoy) either.
Personally, I prefer doctests to the unittest framework for many things,
although unittests are invaluable in some cases (e.g. client tests would
be hard to write as doctests a lot of the time), so right tool for the
job and some flexibility is good to have.
Each system has its advantages and disadvantages. A couple of the
advantages of doctests are that they are easier to write, have less
setup overhead (so they often run faster as a whole) and are
self-descriptive. Trying to work out the pertinent bits of some unittest
methods is often a bit tricky because you have to mentally filter out
the test-specific setup bits (which don't necessarily belong in setUp or
tearDown in a suite).
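To make that concrete, here's a contrived example of my own (not from
Django's suite): the pertinent lines read top to bottom as
documentation, with no class or setUp scaffolding around them:

    # Contrived illustration, not from Django's test suite: the
    # examples double as documentation and as assertions.
    def add(a, b):
        """
        Add two numbers.

        >>> add(2, 3)
        5
        >>> add(-1, 1)
        0
        """
        return a + b

    if __name__ == '__main__':
        import doctest
        doctest.testmod()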
A couple of the disadvantages are that narrowing in on a specific
line is harder; that the environment stays around for the whole
docstring, so it sometimes inadvertently gets corrupted by something
earlier; and that the output is highly ASCII-based, so you can break
error reporting pretty easily. The first one is manageable by
breaking things into smaller blocks (see below); the second one is
not that big a problem until it bites you, and then you fix it; and
the last one exists because the doctest infrastructure in Python is
Broken As Designed in that respect, so there's not a lot we can do
about it.
> further, they grow unwieldly rather
> quickly- a good example would be regressiontests/forms/tests.py;
> I challenge anyone to try debugging a failure around line 2500 on that
> beast; sure, you can try chunking everything leading up to that point,
> but it still is a collosal pain in the ass to do so when compared to
> running
Fortunately, it's also not something you have to do that often (when
compared to running or reading them). It's not impossible and that file
is by far the worst example. Many files are a lot better than that. Bad
unittest examples accrete, too, and sometimes it's unavoidable. Usually
if you manage to trigger a failure in forms/tests.py it's fairly easy to
read the error, check your latest small change and narrow down what
broke it. Not always, but I'll maintain the exceptions are rare (and,
yes, I've had to do it a lot, too. But let's play the averages a bit).
We keep encouraging anybody who wants to take it on to supply a
patch for running a particular sub-test inside a test (so
forms.test.firstBit); that would make it worthwhile splitting up the
big string in forms.tests into half a dozen or so smaller tests that
can be run in isolation. At the moment, it's not that useful to do
(having them all in one file is handy because it's fast to read and
search -- we've only split out things unrelated to the main form
tests into other files in that directory).
Readability, maintainability and general runtime speed are other
aspects of the same area that we're trading off here.
Regards,
Malcolm
--
Despite the cost of living, have you noticed how popular it remains?
http://www.pointy-stick.com/blog/
+1 to the general idea.
However, this syntax (--with-fixture) doesn't really lend itself well
to having multiple fixtures. When you specify fixtures for a test, you
specify a list, not a single fixture. My test harnesses generally
consist of a number of smaller fixtures, rather than one big fixture
that is used for all tests.
I would suggest that rather than trying to make the --with-fixture
flag handle all this, it would be better to do this as a top level
command, i.e.:
django-admin.py testserver mydata.json accounts.json categories.json
This would also draw attention to the fact that the database that will
be used for the server and the data in that database are a testing
facility, not the main server and main database.
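In sketch form, that command might be shaped something like this
(assuming the management-command layout, with the fixture labels as
positional arguments; the details are guesses, not a patch):

    # Sketch of a top-level 'testserver' command taking any number of
    # fixture labels; assumes BaseCommand and call_command exist as in
    # the post-reorg management layout.
    from django.core.management import call_command
    from django.core.management.base import BaseCommand
    from django.test.utils import create_test_db

    class Command(BaseCommand):
        help = ('Runs the development server against a test database '
                'loaded with the given fixtures.')

        def handle(self, *fixture_labels, **options):
            create_test_db()
            # Load every fixture named on the command line.
            call_command('loaddata', *fixture_labels)
            call_command('runserver', use_reloader=False)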
> * Delete the test database when the development server is stopped.
As noted by others, I don't think this is wise. Todd's suggestion of
clearing the database at the start of execution strikes me as a better
approach.
> In the future, the next step would be to keep track of any database
> changes and optionally serialize them back into the fixture when the
> server is stopped.
Cool, yes - but also very difficult. Again, multiple fixtures are
the problem here.
Yours,
Russ Magee
This makes sense -- I hadn't considered the possibility of multiple
fixtures. I'm sold on the idea of a separate command.
> > * Delete the test database when the development server is stopped.
>
> As noted by others, I don't think this is wise. Todd's suggestion of
> clearing the database at the start of execution strikes me as a better
> approach.
Agreed. I *do* think it's slightly messy to leave a database on the
person's machine, but I'll make it so that Django outputs a message --
something like "Test database 'django_test_db' still exists on your
system; feel free to poke around, or delete it" -- when the server is
stopped.
> > In the future, the next step would be to keep track of any database
> > changes and optionally serialize them back into the fixture when the
> > server is stopped.
>
> Cool, yes - but also very difficult. Again, multiple fixtures are
> the problem here.
Ah, multiple fixtures are a cruel mistress. Maybe we only enable this
feature if you're using a single fixture. If you're using multiple
fixtures, this feature wouldn't be available.
+1, definitely.
> > > In the future, the next step would be to keep track of any database
> > > changes and optionally serialize them back into the fixture when the
> > > server is stopped.
>
> > Cool, yes - but also very difficult. Again, multiple fixtures are
> > the problem here.
>
> Ah, multiple fixtures are a cruel mistress. Maybe we only enable this
> feature if you're using a single fixture. If you're using multiple
> fixtures, this feature wouldn't be available.
Another solution would be to specify - one way or another - which
file & format to use for the writeback; if none is specified, no
writeback is done. It wouldn't really make sense to write back to
several different fixture files. Maybe:
python manage.py testserver fixture.xml --writeback fixture.xml
The obvious issue is that, implemented naively, this would open two
file descriptors on the same file, and funny errors would arise. A
solution would be to treat `--writeback` as a flag saying "the next
argument isn't only a fixture to load, but one to write back to as
well."
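In other words, something like this toy parse (entirely hypothetical
names, just to pin down the behaviour):

    # Hypothetical: --writeback marks the *next* argument as a fixture
    # that is both loaded and written back to on exit.
    def parse_fixture_args(args):
        fixtures, writeback = [], None
        mark_next = False
        for arg in args:
            if arg == '--writeback':
                mark_next = True
            else:
                fixtures.append(arg)
                if mark_next:
                    writeback = arg
                    mark_next = False
        return fixtures, writeback

    # parse_fixture_args(['data.json', '--writeback', 'fixture.xml'])
    #   -> (['data.json', 'fixture.xml'], 'fixture.xml')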
Just my 2 SEK (Swedish kronor.)
Ludvig Ericson
I've implemented this in http://code.djangoproject.com/changeset/5912
The docs are here:
http://www.djangoproject.com/documentation/django-admin/#testserver-fixture-fixture
Was this somehow lost from trunk?
#django-admin.py runserver --with-fixture=minimal.json
...
/Users/jeremydunck/django-admin.py: error: no such option: --with-fixture
Possibly in the management reorg?
No, you need to use 'testserver' instead of 'runserver', as in the changeset.
--
Collin Grady