In my project I'm dealing with a large number (over 15,000) of uploaded files.
During testing and development, I run with the file-watch option enabled so that code is being reloaded when I make changes.
I noticed that, due to the number of files added to my app folder, the file watcher consumes a lot of CPU cycles (CPU usage goes above 90%).
By replacing the Python watchgod library with its successor, watchfiles, CPU usage dropped back down to only a few percent!
Comparing two py4web runs using the Linux time command, one with watchfiles and the other with watchgod, it is clear that the CPU time used by watchfiles is only a fraction of that used by watchgod.
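For reference, a comparison like this can be done with the time command; the exact py4web invocation below is an illustrative sketch (app folder name and options may differ in your setup):

```shell
# Start py4web with file watching enabled, let it run for a while,
# then stop it with Ctrl-C; the "user" and "sys" lines report the
# CPU time consumed, most of it by the watcher loop in this scenario.
time py4web run apps
```

Running the same command once with watchgod installed and once with watchfiles makes the difference in CPU time directly visible.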
As for py4web itself, the only change needed is replacing watchgod with watchfiles in the import:
diff --git a/py4web/core.py b/py4web/core.py
index 4713f460..5a0ac430 100644
--- a/py4web/core.py
+++ b/py4web/core.py
@@ -38,7 +38,7 @@ from collections import OrderedDict
from contextlib import redirect_stderr, redirect_stdout
import portalocker
-from watchgod import awatch
+from watchfiles import awatch
# Optional web servers for speed
try:
In addition, watchfiles supports filtering the file watches, so you can easily exclude specific directories, or filenames with specific extensions, from the watch.