Using celery but not getting up-to-date versions of the objects


Jo G

Feb 1, 2017, 9:54:07 AM
to substanced-users
So I am using Celery to maintain a Solr index of my ZODB database. I am using the Substance D CMS and ZEO.

Celery is running under supervisor, and I am using deferred indexing as described in the Substance D documentation.

Once I set Celery going, the first batch of tasks executes beautifully, but tasks in the next batch report success without showing any of the changes I have made in ZODB. I had assumed this was something to do with my Celery setup, but I now think it may be ZEO returning the old, unchanged object rather than the new one.

I read this post and it seems very similar.



When a Substance D object changes, an object-modified event is triggered, which has two subscribers:

1 - index the object in the Substance D catalog

2 - push the updates to Solr

Process 1 is run via the sd_drain_indexing script, controlled by supervisor.

Process 2 is run via Celery, which is passed the UUID of the Substance D object that has changed.

To allow the indexing in 1 to happen first, I am delaying 2 by two minutes.
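
Roughly, the subscriber side of 2 looks like this (simplified: update_solr_task and myapp.tasks are stand-ins for my real names, and the subscriber registration is omitted):

from substanced.util import get_oid

from myapp.tasks import update_solr_task  # placeholder for my real task


def solr_update_subscriber(event):
    # Queue the Solr update with a two-minute countdown so that
    # sd_drain_indexing has time to commit its catalog changes first.
    oid = get_oid(event.object)  # the identifier I refer to as the UUID
    update_solr_task.apply_async(args=[oid], countdown=120)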

However, 2 doesn't seem to pick up the latest version of the object. If I restart Celery I get the latest version, but otherwise I get a 'stale' version of the object.

How can I ensure that I am getting a current version of the object? I am using the following code in the __call__ method of my base task.



# bootstrap comes from pyramid.paster; find_objectmap from substanced.util.
# App, BaseTask, TalbotSolrFeeder and TALBOT_LOG are defined elsewhere in my app.
from pyramid.paster import bootstrap
from substanced.util import find_objectmap


class TalbotTask(BaseTask):
    """Abstract base class for all tasks in my app."""

    def __call__(self, *args, **kwargs):
        # Bootstrap the pyramid environment the first time this worker
        # runs a task, then cache everything on the task instance.
        if getattr(self, "registry", None) is None:
            registry = App.conf['PYRAMID_REGISTRY']
            env = bootstrap(registry.settings[u'ini_file_celery'])
            self.registry = registry
            self.root = env['root']
            self.solrFeed = TalbotSolrFeeder(self.registry, TALBOT_LOG)
            self.objectmap = find_objectmap(self.root)
        return super(TalbotTask, self).__call__(*args, **kwargs)



I then tried adding an after-commit hook, but I don't think this is relevant, as the commit I want to pick up is the one from sd_drain_indexing.

I also took out the if statement in the above code so that I always recreate the root. This worked initially but falls over with 'too many open files'.
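
That variant is just the same __call__ with the if-guard removed, roughly as below (same names as above; note that I never call env['closer'](), which I suspect is where the open file handles come from):

    def __call__(self, *args, **kwargs):
        # No caching: bootstrap a fresh environment for every task.
        registry = App.conf['PYRAMID_REGISTRY']
        env = bootstrap(registry.settings[u'ini_file_celery'])
        self.registry = registry
        self.root = env['root']
        self.solrFeed = TalbotSolrFeeder(self.registry, TALBOT_LOG)
        self.objectmap = find_objectmap(self.root)
        # env['closer']() is never called, so each call leaks whatever
        # bootstrap opened -- probably the source of 'too many open files'.
        return super(TalbotTask, self).__call__(*args, **kwargs)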

Any help?
