Hi all,
I had a stab at a somewhat simpler automatic reloading strategy for the development server:

https://github.com/django/django/compare/master...ramiro:synch-reloader

The intention is to test what an implementation of a design by Gary Bernhardt would look like. The best written description I could find is this one:

https://github.com/devlocker/tychus/issues/3

Gary had also posted some tweets (which is how I got interested in the topic), but they seem to have been deleted since then.
The main idea is: the actual check for filesystem changes in the monitored modules isn't performed in a loop or by depending on an OS kernel feature, but per HTTP request, by a front-end proxy process which is in charge of restarting the 'upstream' web server process (in our case a dumbed-down runserver dev server) only when it detects there have been changes.
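To make the idea concrete, here is a minimal, self-contained sketch of the per-request check. The class name and structure are illustrative only (the actual branch builds on Django's autoreload machinery); the point is that nothing ticks in the background, a comparison only happens when the proxy asks:

```python
import os


class ChangeDetector:
    """Tracks mtimes of watched files.

    Hypothetical sketch of the per-request check: there is no loop and
    no kernel notification; the proxy simply asks before each request.
    """

    def __init__(self, paths):
        self.paths = list(paths)
        self.snapshot = self._scan()

    def _scan(self):
        # stat() every watched path; a missing file is recorded as None
        # so deletions also count as a change.
        result = {}
        for path in self.paths:
            try:
                result[path] = os.stat(path).st_mtime
            except OSError:
                result[path] = None
        return result

    def changed(self):
        """Called once per incoming HTTP request by the proxy.

        Returns True (and refreshes the snapshot) if anything differs
        from the last time we looked.
        """
        current = self._scan()
        if current != self.snapshot:
            self.snapshot = current
            return True
        return False
```

Note that any number of edits between two requests collapses into a single scan-and-compare, which is where the "just in time" property described further below comes from.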
I've been meaning to try this for some time. It would have been much harder before Tom Forbes' work on refactoring and cleaning up the reloading code for Django 2.2. IMHO Tom's code is so well thought out that, for example, I only had to lightly subclass StatReloader to implement this totally different strategy.
The current form of the code is a new experimental 'serverrun' command (for lack of a better name) added to the Django code base, whose command line UI mimics the runserver one 100%. It copies code from a few places in our code base: the runserver command, the WSGI app hosting code, etc.
I decided to implement this as a new built-in command for now a) to ease experimentation and b) because it needs some minor changes to the 'runserver' command to handle cosmetic details (logging). If the idea is accepted (read further below for reasons in favor of this) then maybe we can switch runserver to this code. Or, if the idea isn't deemed appropriate for Django core, then I might implement it as a standalone Django app/project.
If the idea of a smarter stat()-based FS status monitor like this gets actually tested and validated in the field (i.e. by users with big source code trees), it could allow us to stop depending on all of:

* watchman
* pyinotify
* watchdog

(and to remove our support code for them from the Django code base).
Also, this would mean:

* Setup simplification for end users (no third-party Python libraries or system daemon to install)
* Better cross-platform portability for Django (we go back to piggy-backing on stat() from the stdlib as our only way to trigger code reloading).
Additionally, as the reloading is performed fully (by restarting the whole HTTP server) and is triggered from another process (the transparent HTTP proxy one), we can drop some contortions we currently need to make:

- Having to wait for the app registry to stabilize
- Avoiding race conditions with the URL resolver
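As an illustration of the restart-from-another-process part, here is a rough sketch of the process management the proxy side could do (names are hypothetical; in the actual branch the upstream is the dumbed-down runserver process):

```python
import subprocess


class UpstreamManager:
    """Owns the upstream dev server process.

    Hypothetical sketch of the proxy-side restart logic: the upstream
    is a whole separate process, so "reloading" is just terminating it
    and starting a fresh one.
    """

    def __init__(self, argv):
        self.argv = argv  # command line used to launch the upstream server
        self.proc = None

    def ensure_fresh(self, changed):
        # Called by the proxy before forwarding each request: restart
        # the whole upstream process only when changes were detected.
        if self.proc is not None and changed:
            self.proc.terminate()
            self.proc.wait()
            self.proc = None
        if self.proc is None:
            self.proc = subprocess.Popen(self.argv)
        return self.proc
```

Because the whole process is restarted, the new server boots with a completely fresh app registry and URL resolver, which is what makes the contortions listed above unnecessary.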
I suspect there could be power efficiency advantages too, as:

* The scanning for changes is triggered by HTTP requests, which should be less frequent than a periodic check every N seconds.
* If the developer modifies more than one file before switching to the browser, only one FS scan is needed to cater for all those changes, and it is performed just in time for the first HTTP request, so the code executed to render/serve it is 100% accurate with regard to the actual state of the code on disk.
Similar projects include:

- serveit: https://github.com/garybernhardt/serveit
- tychus: https://github.com/devlocker/tychus
- wsgiwatch: https://github.com/dpk/wsgiwatch

Feedback is welcome!
Regards,
--
Ramiro Morales
@ramiromorales