I patched two production servers with my changes.
One server's scrapyd instance both populates the shared queue with new records and consumes from it, while the second instance only consumes records from the shared queue.
So I have two scrapyd instances on different servers consuming records from the same shared queue.
This scenario worked well for a few hours, but now I am running into this bug:
https://github.com/scrapy/scrapyd/issues/40

My error log:
Traceback (most recent call last):
File "/root/mysite/local/lib/python2.7/site-packages/twisted/internet/base.py", line 825, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/root/mysite/local/lib/python2.7/site-packages/twisted/internet/task.py", line 239, in __call__
d = defer.maybeDeferred(self.f, *self.a, **self.kw)
File "/root/mysite/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 149, in maybeDeferred
result = f(*args, **kw)
File "/root/mysite/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1331, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
File "/root/mysite/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1185, in _inlineCallbacks
result = g.send(result)
File "/root/mysite/local/lib/python2.7/site-packages/scrapyd/poller.py", line 24, in poll
returnValue(self.dq.put(self._message(msg, p)))
File "/root/mysite/local/lib/python2.7/site-packages/scrapyd/poller.py", line 33, in _message
d = queue_msg.copy()
exceptions.AttributeError: 'NoneType' object has no attribute 'copy'
Instead of two spiders under one scrapyd instance, as in the original bug report, I guess I am hitting the same race because my two scrapyd instances poll the same spider queue.
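As far as I can tell, this is a check-then-pop race: the poller sees a pending record, but the other scrapyd instance pops it first, so the pop returns None and `queue_msg.copy()` raises the AttributeError. Here is a minimal standalone sketch of that race and a None-guard workaround, using Python's stdlib `queue` as a stand-in for the shared scrapyd queue (the names here are illustrative, not scrapyd's actual API):

```python
import queue

def safe_pop(q):
    """Pop one item, tolerating the case where another consumer
    drained the queue between our emptiness check and the pop
    (the race behind scrapyd issue #40)."""
    if q.empty():  # the "count" / has-pending check
        return None
    try:
        # Another instance may still pop between empty() and here,
        # so handle the drained-queue case instead of crashing.
        return q.get_nowait()
    except queue.Empty:
        return None

shared = queue.Queue()
shared.put({"name": "spider1"})

first = safe_pop(shared)   # succeeds: {"name": "spider1"}
second = safe_pop(shared)  # queue already drained: None, no crash
```

The same idea applied to scrapyd would mean skipping the poll cycle when the pop returns None rather than passing it to `_message`, but I have not verified that against the scrapyd sources.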
Is there something I can do to avoid it?