I always take memory use into account in my routines: since an ndb entity can be up to 1 MB, I always fetch/handle entities in batches of 20 or 40.
Consider such a routine:
1) Handles 400 entities
2) Chunks the 400 keys into groups of 20
3) In a for loop:
3a) fetches 20 entities asynchronously
3b) gets the result
3c) computes
3d) gc.collect()?
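The routine looks roughly like this (a minimal sketch; `fetch` and `compute` are placeholder callbacks standing in for the actual `ndb.get_multi_async` / `get_result` calls and my own processing):

```python
import gc

def chunks(seq, size):
    """Yield successive size-length slices of seq (step 2)."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def process_keys(keys, fetch, compute, batch_size=20):
    results = []
    for batch in chunks(keys, batch_size):
        # Real code does something like:
        #   futures = ndb.get_multi_async(batch)       # step 3a
        #   entities = [f.get_result() for f in futures]  # step 3b
        entities = fetch(batch)
        results.append(compute(entities))  # step 3c
        gc.collect()                       # step 3d -- should be redundant?
    return results
```

(One possible factor worth noting: ndb's in-context cache keeps a reference to every fetched entity for the lifetime of the request unless it is cleared or disabled, so the entities may not actually be out of scope when the loop re-iterates.)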
I think 3d should be unnecessary, yet it seems to help. Since those entities go out of scope when the for loop re-iterates, they should be collected automatically.
There is also the possibility that the datastore/ndb has a leak somewhere, because if I run routines like this too often, I see occasional instance memory overflows.