New feature request - Run only a random subset of tests.

Uri Rodberg

Feb 12, 2024, 8:01:05 AM
to Django developers (Contributions to Django itself)
Django developers,

I created a ticket and was asked to start a discussion on the DevelopersMailingList. I'm requesting a new feature: run only a random subset of tests.

tests - add a new argument `--test-only` (int, >= 0); if run with this argument, only that number of tests is run.
Sometimes there are thousands of tests, for example 6,000, and I want to run only a random subset of them, for example 200. It should be possible with Django.

More details can be found on

Another feature request I opened is: tests - use wildcards in labels. More details can be found on this link. But this may be more complicated to implement.

Uri Rodberg, Speedy Net.

Jörg Breitbart

Feb 12, 2024, 8:36:12 AM

May I ask, what's the idea behind running only a random subset of tests?
Wouldn't a Monte Carlo approach be highly unreliable, e.g. lure people
into thinking everything is OK, when in reality the random test selection
did not catch the affected code paths? I mean, for tests it's all about
reliability, isn't it? And 200 out of 6k tests sounds like often running
into falsely passing results, especially if your test base is skewed
towards features not affected by the current changes.

I think this could still work with better reliability / changed-code
coverage if the abstraction is a bit more elaborate, e.g.:
- introduce grouping flags on tests - at module, class or even single-
method scope
- on a test run, declare which flags should be tested (runs all tests
with the given flags)
- alternatively, use appropriate flags reflecting your code changes +
your test counter on top, but now it selects from the flagged tests,
with a higher probability of running the affected tests

Ah well, just some quick thoughts on that...


Jörg Breitbart

Feb 12, 2024, 8:52:45 AM
Adding to my last comment:

If you are looking for more tailored unit testing with low test
pressure and still high reliability - maybe can give you
enough code insight to build a tailored test-index db and only run
the tests affected by the current code changes. I am not aware of test
frameworks doing that currently, but it should give you high confidence
in the test results without running them all over and over.
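Assuming such a test-index db already exists (it could, for example, be built from coverage.py's dynamic contexts, which record which test executed which lines - that part is assumed here), the selection step itself is small; `affected_tests` is a hypothetical helper:

```python
def affected_tests(test_index, changed_files):
    """Pick tests whose recorded coverage touches any changed file.

    test_index maps test id -> set of source files it executed;
    building this index (e.g. with coverage.py dynamic contexts)
    is assumed to have happened beforehand.
    """
    changed = set(changed_files)
    return sorted(test for test, files in test_index.items() if files & changed)
```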


Feb 12, 2024, 9:12:46 AM
Hi Jörg,

All our tests run anyway on GitHub Actions. The idea is to run a subset of tests locally to catch 90% of the problems before I commit, instead of waiting 40 minutes for all the tests to run. It works most of the time. Of course the whole test suite should be run before deploying to production, but running a subset of tests improves productivity in locating errors without having to wait for the full test suite to run.

(By the way, running all our tests takes 90 minutes, but we skip many tests and run them at random anyway - we have 11 languages and we always test 3 specific languages + another language selected at random. This is how we reduce the time from 90 minutes to 40 minutes. And if we make changes related to languages, we can wait 90 minutes and run all the tests.)
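The "3 fixed languages + 1 random" scheme could be sketched like this (the language codes and the `languages_for_run` helper are hypothetical - the thread doesn't show Speedy Net's actual list):

```python
import random

ALWAYS_TESTED = ["en", "he", "fr"]  # hypothetical fixed trio
OTHER_LANGUAGES = ["de", "es", "it", "nl", "pt", "ru", "sv", "zh"]  # hypothetical rest of the 11


def languages_for_run(rng=random):
    """Return the three fixed languages plus one extra picked at random."""
    return ALWAYS_TESTED + [rng.choice(OTHER_LANGUAGES)]
```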

I can also run specific tests if I work on a specific module. For example, if I work on a specific view - I can run only the tests of this view. But again, of course we run all the tests before we deploy to production.




Feb 12, 2024, 9:22:57 AM

Also, sometimes I just want to see how many tests there are in a specific module, without running them. So I can just run `./ test --test-only 0 --test-all-languages`, which gives me the number of tests in any module I need. There is no way to count them from the code, because many of them run more than once, so the only way to know how many tests a specific module has is to run them.



Adam Johnson

Feb 12, 2024, 10:10:19 AM
I’d be against this. I think this approach would be counterproductive in most cases due to the high probability of a false positive. Including it as a core feature is not necessary when it can be added through a third party package.

Jörg Breitbart

Feb 12, 2024, 11:36:57 AM
I also think that your requirements are too specific to be added to
Django. You are probably better served by creating your own test-picking
abstraction for this, e.g. by writing custom test suite aggregates or
using and going into tests of interest with
your own logic (assuming you are sticking to unittest for testing).

Jason Johns

Feb 17, 2024, 8:11:59 AM
to Django developers (Contributions to Django itself)
Agreed that this is a unittest/pytest-specific concern, not a Django one.

This is pretty straightforward to do with pytest: you can add a custom flag and then randomize the test collection. With unittest, I'm not sure.
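A sketch of what Jason describes, as a `conftest.py` using pytest's real `pytest_addoption` / `pytest_collection_modifyitems` hooks (the `--test-only` flag name and the `sample_items` helper are assumptions, not existing pytest options):

```python
# conftest.py - sketch of a random-subset flag for pytest
import random


def sample_items(items, n, seed=None):
    """Randomly keep at most n collected items, preserving collection order."""
    if n >= len(items):
        return list(items)
    picked = set(random.Random(seed).sample(range(len(items)), n))
    return [item for i, item in enumerate(items) if i in picked]


def pytest_addoption(parser):
    parser.addoption("--test-only", type=int, default=None,
                     help="run only this many randomly chosen tests")


def pytest_collection_modifyitems(config, items):
    n = config.getoption("--test-only")
    if n is not None:
        items[:] = sample_items(items, n)
```

With `--test-only 0` this deselects everything after collection, which together with pytest's collection summary roughly covers the "just count them" use case as well.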
