Hi Varun,
I think there are some interesting pieces in here. Specific comments:
- I'm more optimistic than Russ that we can (and should) get close to
100% code coverage in Django's test suite, when coverage is summed over
all the configurations that CI runs -- which I don't think the current
django-coverage job does; finding a way to fix that would be a useful
task in itself (one possible approach is sketched just after this
bullet). And "analysing and improving test
coverage" is the piece of your current proposal that I find the most
interesting. In addition to improving test coverage, I would expect that
project to also uncover some bugs and some dead code paths. It would be
nice to see, in your proposal, a bit more analysis of the current state
of coverage, where the gaps are, and what it would take to fill them. While
I appreciate that you've broken down that part of the work into
week-long chunks, it looks a bit like you've just assumed that you can
do two top-level modules per week, which doesn't indicate that you've
put a great deal of thought into what coverage is missing and how many /
what kinds of tests you'd need to write to fill it. I would guess that
there might be several-order-of-magnitude differences in the quantity of
work needed between various top-level modules with missing coverage.
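To make the "summed over all configurations" idea concrete, here is a
rough sketch of how coverage.py's parallel mode and "combine" command
could merge data from several runs. Treat it as an illustration only:
test_sqlite is the settings module that ships in Django's tests/
directory, while test_postgres stands in for a settings module you'd
write yourself.

    # Each run writes its own .coverage.* data file (-p = parallel mode).
    coverage run -p ./runtests.py --settings=test_sqlite
    coverage run -p ./runtests.py --settings=test_postgres

    # Merge the per-run data files and report on the union of covered lines.
    coverage combine
    coverage report -m

Getting CI to do something along these lines, across all the
configurations it runs, is the kind of investigation I'd like the
proposal to include.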
- Like Russ, I do not think that splitting up the test suite into
"unit", "integration", and "regression" categories will result in enough
benefit to justify the work.
- I am not excited about adding more options to runtests.py. I'm not
sure that Django should be gradually inventing a more and more complex
in-house test runner when the Python community already has several
battle-tested runners that include many of the features we want (and
have flexible plugin architectures that would allow adding any missing
features we need). IMHO the best option currently is py.test (a minimal
sketch of what that might look like follows this bullet).
I'd be more interested in a proposal to explore converting the Django
test suite to run with py.test than in a proposal to add more features
to runtests.py. But that's just my opinion; there may be others on the
core team who would be opposed to such a move.
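To give a flavor of what I mean -- a hedged sketch, not a worked-out
plan -- the third-party pytest-django plugin lets py.test locate
Django's settings and collect the existing tests.py modules with a few
lines of configuration:

    # pytest.ini -- untested sketch; assumes pytest-django is installed
    # and that we run py.test from the tests/ directory
    [pytest]
    DJANGO_SETTINGS_MODULE = test_sqlite
    python_files = tests.py test_*.py *_test.py

After that, running the suite is just "py.test", and features like
parallel runs (via pytest-xdist) or selecting and skipping tests come
from the existing plugin ecosystem rather than from code we maintain in
runtests.py.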
- Regarding the specific options you propose to add to runtests.py:
-- My initial thought is that running the tests against multiple
databases consecutively doesn't seem that useful; it's not hard to run
two commands (you can even put them on the same shell command line, as
in the example below). But on reflection, the one place this feature
could come in handy is in making it easier to measure coverage across
runs with all database backends.
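For instance (with test_postgres again standing in for a settings
module you'd write yourself):

    ./runtests.py --settings=test_sqlite && ./runtests.py --settings=test_postgres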
-- Specifying particular test apps to run is something we already have;
you can just list them on the command line.
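For example, this runs only the queries and model_forms test apps:

    ./runtests.py --settings=test_sqlite queries model_forms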
-- Specifying specific test apps to skip is something I don't think I've
ever wanted; do you have a use case for it?
-- Like I said above, I don't think the unit/regression/integration
split is worth the work, and I'm not sure what the actual use case would
be for running just one "type" and not the others.
Hope this feedback is useful to you in improving your proposal.
Carl