> Den 18/12/2014 kl. 12.20 skrev Erik Cederstrand <
erik+...@cederstrand.dk>:
>
> Hi list,
>
> I'm using Django as a hub to synchronize data between various external systems. To do this, I have some long-running Django management commands that are started as cron jobs. To ensure that only one job is working on a data set at any one time, I have implemented a locking system using the Django cache system.
>
> Due to my less-than-stellar programming talent, a cron job may need to be killed once in a while. To avoid leaving orphaned locks in the cache, I want to clean up any locks before exiting, and I may not know which cache keys exist at the time of exit. Therefore, I want to write a signal handler that searches the Django cache for any entries that can be traced to the current process (my cache keys can be used for this purpose) and delete those.
>
> AFAICS, the Django cache API can't be used for this. There's cache.clear() but I don't want to delete all cache entries. I'm using the database backend, so I'm thinking I could access the database table directly and issue custom SQL against it. But that feels hackish, so maybe one of you has a better approach? Maybe I got the whole thing backwards?
Just to wrap this up, I wrote a function to get all or some cache entries. Most of the code is copy-paste from django.core.cache.backends.db.
import base64
import pickle

from django.core.cache import cache
from django.db import connections, router
from django.utils.encoding import force_bytes

def get_cache_entries(key_prefix=None):
    """Return a dict of {cache_key: value} for all entries in the database
    cache backend, optionally restricted to keys starting with key_prefix."""
    db = router.db_for_read(cache.cache_model_class)
    table = connections[db].ops.quote_name(cache._table)
    with connections[db].cursor() as cursor:
        sql = "SELECT cache_key, value FROM %s" % table
        params = None
        if key_prefix:
            sql += " WHERE cache_key LIKE %s"
            params = (key_prefix + '%',)
        cursor.execute(sql, params=params)
        rows = cursor.fetchall()
    res = {}
    for row in rows:
        # The db backend stores each value as a base64-encoded pickle.
        value = connections[db].ops.process_clob(row[1])
        res[row[0]] = pickle.loads(base64.b64decode(force_bytes(value)))
    return res
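For anyone curious about the last line: the database backend serializes each value as a pickle that is then base64-encoded, and the function above simply reverses that. A minimal stdlib-only sketch of the round trip (no Django required; the helper names are made up for illustration):

```python
import base64
import pickle

def encode_cache_value(obj):
    # Mirrors how the db cache backend serializes a value:
    # pickle the object, then base64-encode the pickle bytes.
    return base64.b64encode(pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)).decode()

def decode_cache_value(text):
    # Inverse operation, matching the
    # pickle.loads(base64.b64decode(...)) call in get_cache_entries().
    return pickle.loads(base64.b64decode(text.encode()))

lock = {"pid": 4242, "dataset": "orders"}
assert decode_cache_value(encode_cache_value(lock)) == lock
```

With get_cache_entries() in hand, the cleanup handler can presumably do something like cache.delete_many(get_cache_entries(my_process_prefix)) to drop only this process's lock entries.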
Erik