Currently, one must put() in order to have obj.key() be valid. In some
flows, I find myself having to put() an object twice for this reason.
If I make a synthetic key, it appears that I can avoid this:
import base64
import hashlib
import logging
import time

from google.appengine.ext import db

class Joker(db.Model):
    unused = db.StringProperty()

    def __init__(self):
        m = hashlib.sha1()
        m.update(str(time.time()))
        name = base64.b64encode(m.digest())
        logging.debug("name=" + name)
        db.Model.__init__(self, key_name=name)
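(One caveat with the above: time.time() alone can repeat within the clock's resolution under concurrent requests, so the digest can collide. A sketch, plain Python and not App Engine-specific - make_key_name and its extra parameter are made up for illustration - that mixes in random bytes plus optional per-request data:)

```python
import base64
import hashlib
import os
import time

def make_key_name(extra=""):
    # Hypothetical helper: hash wall-clock time plus random bytes so two
    # requests landing in the same clock tick still get distinct names.
    m = hashlib.sha1()
    m.update(str(time.time()).encode())
    m.update(os.urandom(16))           # entropy independent of the clock
    m.update(extra.encode())           # e.g. a user id, request id, ...
    # URL-safe alphabet with padding stripped, so the name is safe to
    # embed in datastore keys and URLs.
    return base64.urlsafe_b64encode(m.digest()).decode().rstrip("=")
```

(uuid.uuid4().hex would also do this job in a single call, if you don't need the extra hash input.)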
1) GOOG folks - are there any performance downsides to taking this approach?
2) If not, are there any other environmental factors that might be
fodder for the hash (user, etc.)?
Thanks,
Jeff
Thanks - I got bitten by those __init__ nuances over the weekend. I ended
up passing an optional flag to __init__ to say "this is really a
new() vs. a datastore reconstitution". I del the optional flag from
kwargs before calling the super __init__. In the datastore
reconstitution case, I do nothing but call the super __init__.
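(The flag approach described above can be sketched roughly like this - db.Model is stubbed out with a plain base class so the snippet runs anywhere, _is_new is a made-up flag name, and kwargs.pop() plays the role of the del-from-kwargs step:)

```python
class Model(object):
    # Stand-in for db.Model so the sketch is self-contained.
    def __init__(self, key_name=None, **kwargs):
        self.key_name = key_name

class Joker(Model):
    def __init__(self, *args, **kwargs):
        # Remove the private flag before it reaches the framework's
        # __init__, which would choke on an unknown keyword argument.
        is_new = kwargs.pop("_is_new", False)
        if is_new:
            # "Really a new()": synthesize a key_name up front.
            kwargs["key_name"] = "synthetic-key"  # stand-in for the hash
        # Datastore reconstitution: pass everything through untouched.
        Model.__init__(self, *args, **kwargs)
```

(So Joker(_is_new=True) gets a synthetic key_name, while the framework's reconstitution path, which never passes the flag, falls straight through to the super __init__.)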
Does that cover the __init__ gotchas, or am I digging my own grave by
not converting to a distinct create function?