TypeError: 'Rule' object is not iterable ...


Scrapy_lover

Jun 29, 2012, 6:47:35 PM
to scrapy...@googlegroups.com
When trying to crawl a website, I got the following error. Any help, please?

Script code:
----------------------------------------------------------------------------------------------------------
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item

class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['http://testaspnet.vulnweb.com/default.aspx']
    start_urls = ['http://testaspnet.vulnweb.com/default.aspx']
   
    rules = (
        Rule(SgmlLinkExtractor(allow=('//*[@id="Form"]'))))
               
    def parse_item(self, response):
        self.log('%s' % response.url)
        hxs = HtmlXPathSelector(response)
        item = Item()
   
        item['text'] = hxs.select("//input[(@id or @name) and (@type = 'text' or @type = 'password' or @type = 'file')]").extract()
       
        return item
        
----------------------------------------------------------------------------------------------------------
But it gave me the following error  :

home@home-pc:~/isa$ scrapy crawl example.com
2012-06-30 00:32:11+0200 [scrapy] INFO: Scrapy 0.14.4 started (bot: isa)
2012-06-30 00:32:11+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
2012-06-30 00:32:11+0200 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-06-30 00:32:11+0200 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-06-30 00:32:11+0200 [scrapy] DEBUG: Enabled item pipelines:
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 4, in <module>
    execute()
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 132, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 97, in _run_print_help
    func(*a, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 139, in _run_command
    cmd.run(args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/spidermanager.py", line 44, in create
    return spcls(**spider_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spiders/crawl.py", line 37, in __init__
    self._compile_rules()
  File "/usr/local/lib/python2.7/dist-packages/scrapy/contrib/spiders/crawl.py", line 83, in _compile_rules
    self._rules = [copy.copy(r) for r in self.rules]
TypeError: 'Rule' object is not iterable


Steven Almeroth

Jun 30, 2012, 11:23:56 PM
to scrapy...@googlegroups.com
Try this:

rules = (Rule(SgmlLinkExtractor(allow=('//*[@id="Form"]'))),)

Notice the extra comma near the end: without it, the parentheses are just grouping and `rules` is a bare `Rule` object; with it, `rules` becomes a one-element tuple that `CrawlSpider` can iterate over.
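The fix comes down to how Python parses tuples, not anything Scrapy-specific. A quick sketch in plain Python (the `Rule` class below is a stand-in for Scrapy's, for illustration only):

```python
# The trailing comma, not the parentheses, is what makes a tuple.
no_comma = ("just a string")     # parentheses only group; this is a str
with_comma = ("just a string",)  # the trailing comma makes a 1-tuple

print(type(no_comma).__name__)    # str
print(type(with_comma).__name__)  # tuple


# CrawlSpider._compile_rules does roughly [copy.copy(r) for r in self.rules],
# so a non-iterable rules value raises the same TypeError seen in the traceback:
class Rule:  # stand-in for scrapy's Rule, for illustration only
    pass

rules = (Rule())  # parentheses without a comma: still a single Rule object
try:
    [r for r in rules]
except TypeError as err:
    print(err)  # 'Rule' object is not iterable
```

Using a list (`rules = [Rule(...)]`) sidesteps the issue entirely, since lists need no trailing comma. Separately, `allowed_domains` should hold bare domain names (e.g. 'testaspnet.vulnweb.com'), not full URLs, or the OffsiteMiddleware will filter your requests.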

Scrapy_lover

Jul 1, 2012, 2:05:44 AM
to scrapy...@googlegroups.com
Thanks a lot!

SpiritusPrana

May 21, 2015, 6:45:07 AM
to scrapy...@googlegroups.com
The answer that keeps on giving - thank you 3 years later!