--
You received this message because you are subscribed to the Google Groups "Google App Engine" group.
To post to this group, send email to google-a...@googlegroups.com.
To unsubscribe from this group, send email to google-appengi...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/google-appengine?hl=en.
In some cases I simply leave a version of the 'old' and 'new' logic in
place. When dealing with new data I use the new logic; when dealing
with existing data I use the version of the logic corresponding to the
entity or set of entities I am dealing with. Of course the structure
of my code itself facilitates this method, but it has proven very easy
to make even significant schema changes. Basically I push the interim
version then kick off my upgrade handlers. Works really well if you
do not want much, if any, downtime. YMMV.
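The "old and new logic side by side" approach could look something like this minimal sketch, using plain dicts to stand in for datastore entities; the field names ("schema_version", "name", "first_name", "last_name") are hypothetical, not from the original post:

```python
# Dispatch on a per-entity schema version so old and new logic coexist.
# Entities written before the change have no "schema_version" field and
# default to version 1 (the old schema).

def load_user(entity):
    version = entity.get("schema_version", 1)
    if version == 1:
        # Old schema: a single combined "name" field.
        first, _, last = entity["name"].partition(" ")
    else:
        # New schema: separate first/last name fields.
        first, last = entity["first_name"], entity["last_name"]
    return first, last
```

Once every entity has been upgraded by the background handlers, the version-1 branch can be deleted.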
I like the idea of using namespaces to version the data too. But
figuring out how to either keep stuff in sync or go offline for a
full conversion could be tricky. I suppose you could add some type of
change indicator that gets set on _all_ of your models, then increment
it each time you run a conversion. That would let you identify what
has changed since your last run... possibly minimizing downtime?
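The change-indicator idea might be sketched like this, again with plain dicts standing in for entities; the "conversion_rev" field name and the structure are assumptions for illustration:

```python
# Every model carries a "conversion_rev" counter that is bumped each
# time a conversion pass runs, so a later pass can skip entities that
# are already at the current revision.

CURRENT_REV = 3

def needs_conversion(entity):
    # Entities written before the indicator existed default to rev 0.
    return entity.get("conversion_rev", 0) < CURRENT_REV

def convert(entity):
    if needs_conversion(entity):
        # ... apply the actual schema changes here ...
        entity["conversion_rev"] = CURRENT_REV
    return entity
```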
Robert
You could also forgo the index and instead loop over all entities of
that kind to do a bulk update. Or, you could simply leave code that
knows how to handle the old schema in place, and update entities as
you encounter them.
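The lazy "update entities as you encounter them" option could be sketched as below; the in-memory dict stands in for the datastore, and the field names are hypothetical:

```python
# Lazy migration: the read path detects an old-schema entity, upgrades
# it in place, and writes the upgraded form back so the work is done
# at most once per entity.

store = {}

def upgrade(entity):
    if "full_name" not in entity:      # old schema detected
        entity["full_name"] = entity.pop("name")
    return entity

def get(key):
    entity = upgrade(store[key])
    store[key] = entity                # persist the upgraded form
    return entity
```

The trade-off versus a bulk loop is that rarely-read entities may keep the old schema indefinitely, so the old-schema handling code can never be fully removed until a final sweep runs.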
Robert