Re: Related manager remove() and clear() methods - backwards incompatible changes

Anssi Kääriäinen Nov 16, 2013 11:02 AM
Posted in group: Django developers (Contributions to Django itself)
On Thursday, October 24, 2013 11:40:37 PM UTC+3, Anssi Kääriäinen wrote:
Here is the full list of changes that have the potential to break user code:
  - If the related object's default manager has default filtering, then .remove() and .clear() will not clear those items that are filtered out.
  - Reverse ForeignKey .remove() will no longer use .save() - it will use .update() instead - so no model save signals are sent, and overridden save() methods are skipped, too.
  - GFK.remove() and GFK.clear() will use queryset.delete() - so model.delete() is no longer called (signals are still sent in this case, because QuerySet.delete() sends them).
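To illustrate the second item, here is a minimal plain-Python sketch (not Django's actual API - the classes and dispatcher are simplified stand-ins) of why a bulk .update() path skips everything that hangs off a per-instance save():

```python
# Hypothetical sketch: a per-instance save() runs signal handlers and
# overridden save() methods; a bulk update does neither.
post_save_log = []

class Instance:
    def __init__(self, pk, fk=None):
        self.pk = pk
        self.fk = fk

    def save(self):
        # Per-instance path: post_save handlers (and overridden save()) run here.
        post_save_log.append(self.pk)

def remove_via_save(instances):
    # Old behaviour: one save() per instance, signals fire for each.
    for obj in instances:
        obj.fk = None
        obj.save()

def remove_via_update(instances):
    # New behaviour: one bulk UPDATE, so save() is never called.
    for obj in instances:
        obj.fk = None

objs = [Instance(pk) for pk in (1, 2, 3)]
remove_via_save(objs)
assert post_save_log == [1, 2, 3]

post_save_log.clear()
remove_via_update(objs)
assert post_save_log == []  # signals silently skipped after the change
```

Code that relied on the per-instance path firing post_save for each removed object would see that log stay empty after the change.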

Loic's list of fixes & changes is also a good summary of this ticket:

I haven't figured out any other way to deal with this ticket than just committing the changes. I hope the backwards incompatibilities aren't too severe. The second issue above seems to be the hardest one for users: if you relied on model save signals, your code will now be broken *and* you have no viable upgrade path if you relied on the post_save signal.

I am considering adding pre/post_update signals. These signals would give an upgrade path for those hit by the backwards incompatibilities from #21169, and they would of course also be a good addition for anyone who happens to need them. The signals shouldn't affect the performance of your project if you don't use them.

The idea is that pre_update listeners get a queryset that hasn't been executed. Accessing that queryset might be costly, but if it isn't accessed, there isn't much cost to adding a pre_update signal. For post_update, the signal handler would be given a list of PK values - the original queryset doesn't work for post_update, because the update might cause different instances to be returned than were updated; consider qs.filter(deleted=False).update(deleted=True).
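A rough sketch of that flow in plain Python (the in-memory "table" and the signal wiring here are assumptions for illustration, not an implemented Django API): the PKs must be captured *before* the UPDATE runs, because afterwards the same filter may match a different set of rows.

```python
# Toy in-memory rows standing in for a table.
rows = [{"pk": 1, "deleted": False},
        {"pk": 2, "deleted": False},
        {"pk": 3, "deleted": True}]

def update_with_signals(predicate, changes, post_update_listeners):
    matched = [r for r in rows if predicate(r)]
    # Capture PKs before applying the update: re-running the filter
    # afterwards could match a different (here: empty) set of rows.
    pks = [r["pk"] for r in matched]
    for r in matched:
        r.update(changes)
    for listener in post_update_listeners:
        listener(pks)

seen = []
# The qs.filter(deleted=False).update(deleted=True) case from above:
update_with_signals(lambda r: not r["deleted"], {"deleted": True},
                    [seen.append])
assert seen == [[1, 2]]  # PKs of the rows that were actually updated
# Re-running the same filter now matches nothing - hence PKs, not the queryset:
assert [r["pk"] for r in rows if not r["deleted"]] == []
```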

The problem here is that if you are updating a lot of rows (millions+), then even fetching the PK values can be too much. Maybe an update(__signals=False) flag could be added? It is also easy to skip the PK fetch entirely when there aren't any post_update listeners.
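That last optimization could look something like this sketch (the function and receiver list are hypothetical, not Django internals): the PK query is only issued when someone is registered to consume the result.

```python
# Hypothetical guard: skip the PK fetch when no post_update receiver
# is registered for the model being updated.
def update_rows(fetch_pks, apply_update, receivers):
    # fetch_pks is only invoked when a receiver will actually use the PKs.
    pks = fetch_pks() if receivers else None
    apply_update()
    for receiver in receivers:
        receiver(pks)

fetches = []
def fetch():
    fetches.append(1)       # record that the (potentially huge) query ran
    return [10, 20]

update_rows(fetch, lambda: None, receivers=[])
assert fetches == []        # no listeners: the PK fetch never happens

got = []
update_rows(fetch, lambda: None, receivers=[got.append])
assert fetches == [1] and got == [[10, 20]]
```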

Any objections to committing the fixes for #21169? Any feedback on the pre/post_update idea?

 - Anssi