I discovered this when calling manage.py test with a custom test runner
that makes a subprocess call; the call failed because the Python process
was already consuming so much memory that os.fork() couldn't create a
new process.
--
Ticket URL: <https://code.djangoproject.com/ticket/24745>
Django <https://code.djangoproject.com/>
The Web framework for perfectionists with deadlines.
* needs_better_patch: => 0
* needs_tests: => 0
* needs_docs: => 0
Comment:
Did you test with 1.8.1 after #24591? The solution for that ticket is a
bit different on master, so testing there for comparison would also be
useful.
--
Ticket URL: <https://code.djangoproject.com/ticket/24745#comment:1>
Comment (by jpulec):
Testing with 1.8.1 seems to yield the same results. Somewhere around 1GB
of virtual memory is held after rendering the model migrations, and then
my test runner fails to fork. However, running with the `--keepdb` flag
works smoothly, as I would expect.
I'm working on trying to run against master, but I have so many
dependencies that break against master that it might take some time.
--
Ticket URL: <https://code.djangoproject.com/ticket/24745#comment:2>
* type: Bug => Cleanup/optimization
* stage: Unreviewed => Accepted
--
Ticket URL: <https://code.djangoproject.com/ticket/24745#comment:3>
Comment (by jpulec):
For reference, I finally managed to run my tests against master. They
pass, but I think the margin is pretty slim. The VM I do all my testing in
has 2GB of RAM. On 1.8 and 1.8.1 my running test holds onto ~47% of my
RAM according to top. On master that number comes down closer to 37%, so
it works for my use case.
However, it still shows that running all those migrations when creating a
test database gobbles up all that RAM and doesn't release it right away.
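The percentages above were read from `top`; a test runner can record the same figure from inside the process. A small stdlib sketch (assuming Linux, where `ru_maxrss` is reported in kilobytes):

```python
import resource

def peak_rss_mb():
    """Return this process's peak resident set size in MiB.

    ru_maxrss is in kilobytes on Linux (bytes on macOS), so this
    assumes a Linux environment, matching the reports above.
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0
```

Calling this before and after test-database creation would isolate how much memory the migration run itself retains.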
--
Ticket URL: <https://code.djangoproject.com/ticket/24745#comment:4>
Comment (by marr75):
I've encountered the exact same problem. Our virtual machines used for
development would require 2GB of memory just to run migrations (as opposed
to the 512MB they use now). We cannot upgrade until this is resolved.
--
Ticket URL: <https://code.djangoproject.com/ticket/24745#comment:5>
* status: new => assigned
* owner: nobody => MarkusH
* has_patch: 0 => 1
Comment:
Could you give the pull request https://github.com/django/django/pull/5178
a try?
--
Ticket URL: <https://code.djangoproject.com/ticket/24745#comment:6>
* stage: Accepted => Ready for checkin
--
Ticket URL: <https://code.djangoproject.com/ticket/24745#comment:7>
* status: assigned => closed
* resolution: => fixed
Comment:
In [changeset:"5aa55038ca9ac44b440b56d1fc4e79c876e51393" 5aa55038]:
{{{
#!CommitTicketReference repository=""
revision="5aa55038ca9ac44b440b56d1fc4e79c876e51393"
Fixed #24743, #24745 -- Optimized migration plan handling
The change partly goes back to the old behavior for forwards migrations,
which should reduce memory consumption (#24745). However, because of the
way the current state computation is done (there is no
`state_backwards` on a migration class), this change cannot be applied to
backwards migrations. Hence rolling back migrations still requires the
precomputation and storage of the intermediate migration states.
This improvement also implies that Django no longer handles mixed
migration plans. Mixed plans consist of a list of migrations
where some are being applied and others are being unapplied.
Thanks Andrew Godwin, Josh Smeaton and Tim Graham for the review, as well
as everybody involved on the ticket who kept me looking into the issue.
}}}
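The trade-off the commit describes can be illustrated with a toy sketch (invented names, not Django's actual migration API): applying forwards can mutate a single project state in place, while unapplying requires every intermediate state to have been precomputed and kept alive, which is the memory cost this ticket reported.

```python
def apply_forwards(initial_state, migrations):
    """Apply migrations one by one, keeping a single state in memory."""
    state = initial_state
    for migration in migrations:
        state = migration(state)
    return state

def precompute_states(initial_state, migrations):
    """Store every intermediate state so any step can later be
    unapplied; memory grows linearly with the number of migrations."""
    states = [initial_state]
    for migration in migrations:
        states.append(migration(states[-1]))
    return states
```

With hundreds of migrations, each state holding rendered models, the second strategy is what made test-database creation consume gigabytes; the fix restores the first strategy for the forwards-only case.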
--
Ticket URL: <https://code.djangoproject.com/ticket/24745#comment:8>