I have a cron job that runs `update_index` every hour on the production server. About half the time, though, the update fails with `Error updating <appname> using default`. I can't reproduce the error locally: no matter how many times I run `update_index` on my machine, it never fails. The failures have shown up in two situations: first, when I added an additional (many-to-many) field to the `templates/search/indexes/<appname>/<modelname>_text.txt` template, and second, when I doubled the amount of indexing by indexing that same template as two different field types. So my guess is that the extra indexing work is too much for the production server (CPU, memory, or search-backend load?) while my local machine handles it fine, but I'm not entirely sure.
My ideas for fixing this are 1) to upgrade django-haystack from 2.4.1 to 2.5, since the most recent changelog says "update_index will retry after backend failures," and/or 2) to lower the batch_size from the default of 1000 to something smaller, like 250. Do these potential solutions make sense? Does anyone else have suggestions or questions?
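For what it's worth, here's a rough sketch of how I'd try both ideas together (all paths are placeholders for my setup, not anything haystack-specific):

```shell
# 1) Upgrade haystack to pick up the update_index retry-on-backend-failure change:
pip install "django-haystack>=2.5,<2.6"

# 2) Crontab entry running hourly with a smaller batch size
#    (/path/to/... are placeholders; --batch-size is update_index's built-in flag):
# 0 * * * * /path/to/venv/bin/python /path/to/manage.py update_index --batch-size=250
```

My thinking is that smaller batches mean each request to the search backend is lighter, and the 2.5 retry behavior covers the occasional failure that still slips through.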
Thank you community!