D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\settings\deprecated.py:26: ScrapyDeprecationWarning: You are using the following settings which are deprecated or obsolete (ask scrapy-users@googlegroups.com for alternatives):
    BOT_VERSION: no longer used (user agent defaults to Scrapy now)
  warnings.warn(msg, ScrapyDeprecationWarning)
2015-05-14 10:08:34-0400 [scrapy] INFO: Scrapy 0.24.6 started (bot: sapui5api)
2015-05-14 10:08:34-0400 [scrapy] INFO: Optional features available: ssl, http11
2015-05-14 10:08:34-0400 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'sapui5api.spiders', 'SPIDER_MODULES': ['sapui5api.spiders'], 'USER_AGENT': 'sapui5api/1.0', 'BOT_NAME': 'sapui5api'}
2015-05-14 10:08:35-0400 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
Traceback (most recent call last):
  File "d:\python27\scripts\scrapy-script.py", line 9, in <module>
    load_entry_point('scrapy==0.24.6', 'console_scripts', 'scrapy')()
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\commands\crawl.py", line 60, in run
    self.crawler_process.start()
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\crawler.py", line 92, in start
    if self.start_crawling():
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\crawler.py", line 124, in start_crawling
    return self._start_crawler() is not None
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\crawler.py", line 139, in _start_crawler
    crawler.configure()
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\crawler.py", line 47, in configure
    self.engine = ExecutionEngine(self, self._spider_closed)
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\core\engine.py", line 64, in __init__
    self.downloader = downloader_cls(crawler)
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\core\downloader\__init__.py", line 73, in __init__
    self.handlers = DownloadHandlers(crawler)
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\core\downloader\handlers\__init__.py", line 22, in __init__
    cls = load_object(clspath)
  File "D:\Python27\lib\site-packages\scrapy-0.24.6-py2.7.egg\scrapy\utils\misc.py", line 42, in load_object
    raise ImportError("Error loading object '%s': %s" % (path, e))
ImportError: Error loading object 'crawler.http.PhantomJSDownloadHandler': No module named crawler.http
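The traceback ends in `load_object`, which is what Scrapy uses to resolve the dotted paths listed in the `DOWNLOAD_HANDLERS` setting, so the error means Python cannot import the module `crawler.http` named there. A minimal sketch of the likely fix, assuming the `PhantomJSDownloadHandler` class actually lives in an importable module inside this project (the path `sapui5api.http` below is a hypothetical example, not confirmed by the log):

```python
# settings.py (sketch) -- the dotted path must be importable from wherever
# `scrapy crawl` is run; 'sapui5api.http' is an assumed location for a file
# sapui5api/http.py that defines PhantomJSDownloadHandler.
DOWNLOAD_HANDLERS = {
    'http': 'sapui5api.http.PhantomJSDownloadHandler',
    'https': 'sapui5api.http.PhantomJSDownloadHandler',
}
```

If the handler module really is named `crawler/http.py`, the alternative is to make sure the directory containing the `crawler` package (with an `__init__.py`) is on `sys.path` when the crawl starts.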