>
> Hello,
>
> Several people already wrote something about memcached + SqlAlchemy.
>
> Remember, Mike Nelson wrote a mapper extension, it is available at:
> http://www.ajaxlive.com/repo/mcmapper.py
> http://www.ajaxlive.com/repo/mcache.py
>
> I've rewritten it a bit to fit 0.4 release of SA.
>
> Any responses and comments are welcome, since I am not sure I am doing
> the right things in the code :) I don't like those dirty tricks with
> deleting _state, etc. Maybe it could be done better?
What happens if you just leave "_state" alone? There shouldn't be any
need to mess with _state (nor _entity_name). The only attribute
worth deleting for the cache operation is "_sa_session_id", so that the
instance isn't associated with any particular session when it gets
cached. I'd also consider using session.merge(dont_load=True), which is
designed for use with caches (and also watch out for that log.debug();
debug() calls using the standard logging module are notoriously slow).
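The put/get flow described above might be sketched as below, using a plain dict in place of memcached and a toy class in place of a real mapped instance; the `_sa_session_id` attribute name comes from the advice above, and the final `session.merge(obj, dont_load=True)` step is only noted in a comment since no real Session is involved:

```python
import pickle

# toy cache standing in for memcached; keys map to pickled bytes
cache = {}

class User:
    """Stand-in for a mapped instance."""
    def __init__(self, id, name):
        self.id = id
        self.name = name
        self._sa_session_id = "some-session"  # set by SA when session-attached

def cache_put(key, obj):
    # detach the instance from its session before pickling, per the advice above
    obj.__dict__.pop('_sa_session_id', None)
    cache[key] = pickle.dumps(obj)

def cache_get(key):
    # a caller would then reattach it: obj = session.merge(obj, dont_load=True)
    return pickle.loads(cache[key])

cache_put('user:1', User(1, 'ed'))
restored = cache_get('user:1')
```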
> It has some problems with deferred fetch on an inherited mapper because
> of some issues in SA (I've found them in Trac).
The only Trac ticket for this is #490, which with our current
extension architecture is pretty easy to fix, so it's resolved in r3967:
MapperExtensions are now fully inherited. If you apply the same
MapperExtension explicitly to a base mapper and a subclass mapper,
using the same ME instance will have the effect of it being applied
only once (and using two different ME instances will have the effect
of both being applied to the subclass separately).
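The identity-based behavior described here can be pictured in miniature; this is not SA's actual code, just the semantics, with a bare stand-in class for MapperExtension:

```python
class MapperExtension:
    """Minimal stand-in for sqlalchemy's MapperExtension."""

def combine_extensions(base_exts, sub_exts):
    # the same instance appearing on both mappers is applied only once;
    # distinct instances are each applied to the subclass separately
    combined = []
    for ext in list(base_exts) + list(sub_exts):
        if not any(ext is seen for seen in combined):
            combined.append(ext)
    return combined

shared = MapperExtension()
# same ME instance on base and subclass -> applied once
once = combine_extensions([shared], [shared])

# two different ME instances -> both applied
a, b = MapperExtension(), MapperExtension()
both = combine_extensions([a], [b])
```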
Is it not reasonable to ask that objects which are to be serialized
and cached not have any deferred columns (or that they be explicitly
loaded before caching)?
> So, to be cached, an object should fetch all its deferred columns (if
> any) and provide all of them in __getstate__. Right?
That would work.
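One way to follow that suggestion is a `__getstate__` that touches every deferred attribute before pickling; a sketch with a toy class, where `DEFERRED_COLS` and the fake in-line loader are made up for illustration (in real SA code, simply accessing the attribute would trigger the deferred load):

```python
import pickle

class Article:
    DEFERRED_COLS = ('body',)  # hypothetical deferred column names

    def __init__(self):
        self.id = 7
        self._sa_session_id = 'sess-1'
        # 'body' is deliberately absent, as an unloaded deferred column would be

    def __getstate__(self):
        # force-load every deferred column before serializing
        for name in self.DEFERRED_COLS:
            if name not in self.__dict__:
                # stand-in for the attribute access that fires SA's loader
                self.__dict__[name] = 'loaded-%s' % name
        state = self.__dict__.copy()
        state.pop('_sa_session_id', None)  # don't cache session linkage
        return state

restored = pickle.loads(pickle.dumps(Article()))
```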
> And if an instance from cache has nothing for one of its deferred
> column values, then referencing these properties after merge won't load
> them from the DB, but just fail?
As far as merge failing, I need to see what the exact mechanics of
that error message are. For a polymorphic "deferred" in particular,
it's a major chunk of an object's state that is deferred, i.e.
everything corresponding to the joined tables, and the callables are
currently established at the per-instance level. So it may be
necessary for now for merge to still "fail" if unloadable deferreds are
detected, although we can and should provide a nicer error message.
Some longer-term solutions to the "pickled" issue include trying to be
more aggressive about placing class-level attribute loaders which don't
need to be serialized, placing "hints" in the _state which could help
the _state reconstruct the per-instance deferred callables, or we
might even be able to get the _state to call deferreds during
serialization without the need for an explicit __getstate__, but then
you are caching all that additional state.
>
> And if an instance from cache has nothing for one of its deferred
> column values, then referencing these properties after merge won't load
> them from the DB, but just fail?
>
I rearranged instance-level deferred loaders to be serializable
instances in r3968. You can now pickle an instance + its _state and
restore, and all deferred/lazy loaders will be restored as well. I
didn't yet test it specifically with merge(), but give it a try; you
shouldn't be getting that error anymore. The pickling issue from ticket
#870 is also no longer present.
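The change described here, loaders as serializable instances, can be pictured as replacing an unpicklable closure with a plain class whose state lives in picklable attributes; a made-up miniature, not SA's actual loader class:

```python
import pickle

# a closure-based loader like the following cannot be pickled:
#   loader = lambda: fetch_deferred_column(instance, 'body')
# an instance of an ordinary class, carrying its state as attributes, can be:

class InstanceDeferredLoader:
    """Picklable stand-in for a per-instance deferred column loader."""
    def __init__(self, key):
        # enough state to rebuild the deferred-column query after unpickling
        self.key = key

    def __call__(self):
        # stand-in for issuing the deferred-column SELECT
        return 'value-of-%s' % self.key

loader = InstanceDeferredLoader('body')
restored = pickle.loads(pickle.dumps(loader))
```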
What did you remove, exactly? There are some attributes on the
instance, such as _instance_key and _entity_name, which should not be
erased. Also, any attribute which doesn't have a deferred or expired
flag on it shouldn't be erased either. If you want to remove
attributes, use session.expire(instance, ['key1', 'key2', ...]). A
test script illustrating pickling/unpickling, which uses update(), is
attached.
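The difference between blindly deleting attributes and session.expire(instance, keys) is that expired attributes are flagged and reloaded on next access rather than silently vanishing; a toy model of that behavior (this is not SA's implementation, and the "refetched" value stands in for a real database round trip):

```python
class Record:
    """Toy model: expired attributes reload on access instead of vanishing."""
    def __init__(self, **attrs):
        self.__dict__['_expired'] = set()
        self.__dict__.update(attrs)

    def expire(self, keys):
        # remove the values but remember which keys were expired
        for key in keys:
            self.__dict__.pop(key, None)
            self._expired.add(key)

    def __getattr__(self, name):
        # only called when 'name' is missing from __dict__
        if name in self.__dict__.get('_expired', ()):
            value = 'refetched-%s' % name  # stand-in for a DB round trip
            self.__dict__[name] = value
            self._expired.discard(name)
            return value
        raise AttributeError(name)

rec = Record(title='draft')
rec.expire(['title'])
```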
>
> 5) obj = s.merge(obj, dont_load=True) (with a fresh session s)
Merge is still not working; it raises an exception in this case. Will
have a fix soon.
>
> merge worked without an exception this time.
Merge is working rudimentarily for objects with unloaded scalar/
instance/collection attributes in r3974. What's not yet happening is
the merging of the various query.options() that may be present on the
original deferred loader, which means the merged instance won't
necessarily maintain the exact eager/lazy/deferred loading of the
original, but this is not especially critical for the basic idea to
work.
Example script using merge attached.
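The merge behavior described, copying only the loaded state and leaving unloaded attributes absent so their deferred loaders can fire later, might be sketched like this (toy code operating on plain dicts, not SA's actual merge()):

```python
def merge_state(source_state, unloaded):
    """Copy only attributes that were actually loaded on the source.

    Unloaded scalar/instance/collection attributes are left absent,
    so the merged instance's deferred/lazy loaders fire on access.
    """
    return {k: v for k, v in source_state.items() if k not in unloaded}

cached_state = {'id': 1, 'name': 'ed'}  # attributes loaded on the cached copy
unloaded = {'body'}                     # deferred column never loaded
merged = merge_state(cached_state, unloaded)
```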