How to solve the scrapy-redis idle-run problem?

In the scrapy-redis framework, the requests stored under the Redis key xxx:requests have all been crawled, but the program keeps running. How can it be stopped automatically instead of idling forever?

2017-07-03 09:17:06 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-07-03 09:18:06 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

[For reference only] You can stop the crawl by calling engine.close_spider(spider, 'reason').

# scheduler.py
    def next_request(self):
        block_pop_timeout = self.idle_before_close
        request = self.queue.pop(block_pop_timeout)
        if request and self.stats:
            self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
        if request is None:
            # Nothing was popped from Redis: close the spider instead of idling.
            self.spider.crawler.engine.close_spider(self.spider, 'queue is empty')
        return request
# Alternatively, the same can be done in the spiders.py module of scrapy_redis:
    def next_requests(self):
        """Returns a request to be scheduled or none."""
        use_set = self.settings.getbool('REDIS_START_URLS_AS_SET', defaults.START_URLS_AS_SET)
        fetch_one = self.server.spop if use_set else self.server.lpop
        # XXX: Do we need to use a timeout here?
        found = 0
        # TODO: Use redis pipeline execution.
        while found < self.redis_batch_size:
            data = fetch_one(self.redis_key)
            if not data:
                # Queue empty: close the spider instead of polling forever.
                self.logger.info("Redis key '%s' is empty, closing spider", self.redis_key)
                # In RedisMixin `self` is the spider itself, so pass `self` here.
                self.crawler.engine.close_spider(self, 'queue is empty')
                break
            req = self.make_request_from_data(data)
            if req:
                yield req
                found += 1
            else:
                self.logger.debug("Request not made from data: %r", data)

        if found:
            self.logger.debug("Read %s requests from '%s'", found, self.redis_key)

One more thing I do not understand:
when the spider is closed via engine.close_spider(spider, 'reason'), several errors appear before it actually shuts down.

# Normal shutdown
2017-07-03 18:02:38 [scrapy.core.engine] INFO: Closing spider (queue is empty)
2017-07-03 18:02:38 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'finish_reason': 'queue is empty',
 'finish_time': datetime.datetime(2017, 7, 3, 10, 2, 38, 616021),
 'log_count/INFO': 8,
 'start_time': datetime.datetime(2017, 7, 3, 10, 2, 38, 600382)}
2017-07-03 18:02:38 [scrapy.core.engine] INFO: Spider closed (queue is empty)
# A few errors still show up after this before the spider really closes. Does the spider start several threads crawling together,
# so that once one thread closes the spider, the other threads can no longer find it and raise these errors?
Unhandled Error
Traceback (most recent call last):
  File "D:/papp/project/launch.py", line 37, in <module>
    process.start()
  File "D:\Program Files\python3\lib\site-packages\scrapy\crawler.py", line 285, in start
    reactor.run(installSignalHandlers=False)  # blocking call
  File "D:\Program Files\python3\lib\site-packages\twisted\internet\base.py", line 1243, in run
    self.mainLoop()
  File "D:\Program Files\python3\lib\site-packages\twisted\internet\base.py", line 1252, in mainLoop
    self.runUntilCurrent()
--- <exception caught here> ---
  File "D:\Program Files\python3\lib\site-packages\twisted\internet\base.py", line 878, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "D:\Program Files\python3\lib\site-packages\scrapy\utils\reactor.py", line 41, in __call__
    return self._func(*self._a, **self._kw)
  File "D:\Program Files\python3\lib\site-packages\scrapy\core\engine.py", line 137, in _next_request
    if self.spider_is_idle(spider) and slot.close_if_idle:
  File "D:\Program Files\python3\lib\site-packages\scrapy\core\engine.py", line 189, in spider_is_idle
    if self.slot.start_requests is not None:
builtins.AttributeError: 'NoneType' object has no attribute 'start_requests'
2 Answers

How do you know the queued requests have all been crawled? You have to define that condition yourself.
If your case is not complicated, you can shut the spider down with the built-in extension:

scrapy.contrib.closespider.CloseSpider

CLOSESPIDER_TIMEOUT
CLOSESPIDER_ITEMCOUNT
CLOSESPIDER_PAGECOUNT
CLOSESPIDER_ERRORCOUNT
http://scrapy-chs.readthedocs...
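
For example, a minimal settings.py sketch. The threshold values below are arbitrary placeholders; in Scrapy 1.0+ this extension lives at scrapy.extensions.closespider.CloseSpider and is enabled by default, so normally only the settings need to be set:

# settings.py -- sketch only, the numeric thresholds are example values
CLOSESPIDER_TIMEOUT = 3600     # close the spider after it has run for 3600 seconds
CLOSESPIDER_ITEMCOUNT = 1000   # close after scraping 1000 items
CLOSESPIDER_PAGECOUNT = 1000   # close after crawling 1000 responses
CLOSESPIDER_ERRORCOUNT = 10    # close after 10 errors

Note that these close the spider on fixed thresholds, not on "the Redis queue is empty", so they only help if you can estimate reasonable limits in advance.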


You can use "the redis key has stayed empty for a stretch of consecutive checks" as the condition for closing. See [1]: https://my.oschina.net/2devil... The author explains it well; adding an extension is enough to make the spider stop automatically.
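
A minimal sketch of such an extension (not the linked author's exact code): it assumes a scrapy_redis spider, so spider.server is the Redis client and spider.redis_key is a list key; the class name and the MYEXT_ENABLED / IDLE_NUMBER setting names are invented for this example.

# extensions.py -- illustrative sketch; MYEXT_ENABLED and IDLE_NUMBER are made-up setting names
from scrapy import signals
from scrapy.exceptions import NotConfigured


class RedisIdleClosedExtension(object):
    """Close the spider once the redis key has been empty for N consecutive idle signals."""

    def __init__(self, idle_number, crawler):
        self.crawler = crawler
        self.idle_number = idle_number  # how many consecutive idle signals to tolerate
        self.idle_count = 0

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.getbool('MYEXT_ENABLED'):
            raise NotConfigured
        idle_number = crawler.settings.getint('IDLE_NUMBER', 6)
        ext = cls(idle_number, crawler)
        # spider_idle fires repeatedly while the spider has nothing to do
        crawler.signals.connect(ext.spider_idle, signal=signals.spider_idle)
        return ext

    def spider_idle(self, spider):
        # spider.server / spider.redis_key come from scrapy_redis's RedisMixin;
        # use scard() instead of llen() if REDIS_START_URLS_AS_SET is enabled.
        if spider.server.llen(spider.redis_key) > 0:
            self.idle_count = 0
            return
        self.idle_count += 1
        if self.idle_count >= self.idle_number:
            self.crawler.engine.close_spider(spider, 'redis key empty, closing spider')

Enable it in settings.py with something like EXTENSIONS = {'yourproject.extensions.RedisIdleClosedExtension': 500}, plus MYEXT_ENABLED = True and IDLE_NUMBER = 6. scrapy_redis itself raises DontCloseSpider on spider_idle so the spider keeps polling Redis, which is exactly why an explicit counter plus close_spider is needed here.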
