Scrapy - Reactor not Restartable

I'm new to this, so please bear with me.

With the following imports:

    from twisted.internet import reactor
    from scrapy.crawler import CrawlerProcess

I have always been able to run this process successfully:

    process = CrawlerProcess(get_project_settings())
    process.crawl(*args)
    # the script will block here until the crawling is finished
    process.start()

But since I moved this code into a web_crawler(self) method, like so:

    def web_crawler(self):
        # set up a crawler
        process = CrawlerProcess(get_project_settings())
        process.crawl(*args)
        # the script will block here until the crawling is finished
        process.start()

        # (...)

        return (result1, result2)

and started calling the method through class instantiation, like:

    def __call__(self):
        results1 = test.web_crawler()[1]
        results2 = test.web_crawler()[0]

and running:

    test()
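Note that `__call__` above invokes `web_crawler()` twice, so `process.start()` runs twice even when each individual run works. Calling the method once and unpacking the returned tuple avoids the second start. A minimal sketch of that call pattern, with a hypothetical stub standing in for the real crawl:

```python
# Hypothetical stub standing in for web_crawler(); the real method
# would run the crawl and return (result1, result2).
def web_crawler():
    return ("result1", "result2")

# Call once and unpack both values, instead of crawling twice.
results2, results1 = web_crawler()
print(results1, results2)
```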

I get the following error:

    Traceback (most recent call last):
      File "test.py", line 573, in <module>
        print (test())
      File "test.py", line 530, in __call__
        artists = test.web_crawler()
      File "test.py", line 438, in web_crawler
        process.start()
      File "/Library/Python/2.7/site-packages/scrapy/crawler.py", line 280, in start
        reactor.run(installSignalHandlers=False)  # blocking call
      File "/Library/Python/2.7/site-packages/twisted/internet/base.py", line 1194, in run
        self.startRunning(installSignalHandlers=installSignalHandlers)
      File "/Library/Python/2.7/site-packages/twisted/internet/base.py", line 1174, in startRunning
        ReactorBase.startRunning(self)
      File "/Library/Python/2.7/site-packages/twisted/internet/base.py", line 684, in startRunning
        raise error.ReactorNotRestartable()
    twisted.internet.error.ReactorNotRestartable

What is going wrong?

Originally posted by 8-Bit Borges; the translation follows the CC BY-SA 4.0 license.

2 Answers

You cannot restart the reactor, but you should be able to run it more times by forking a separate process:

    import scrapy
    import scrapy.crawler as crawler
    from scrapy.utils.log import configure_logging
    from multiprocessing import Process, Queue
    from twisted.internet import reactor

    # your spider
    class QuotesSpider(scrapy.Spider):
        name = "quotes"
        start_urls = ['http://quotes.toscrape.com/tag/humor/']

        def parse(self, response):
            for quote in response.css('div.quote'):
                print(quote.css('span.text::text').extract_first())

    # the wrapper to make it run more times
    def run_spider(spider):
        def f(q):
            try:
                runner = crawler.CrawlerRunner()
                deferred = runner.crawl(spider)
                deferred.addBoth(lambda _: reactor.stop())
                reactor.run()
                q.put(None)
            except Exception as e:
                q.put(e)

        q = Queue()
        p = Process(target=f, args=(q,))
        p.start()
        result = q.get()
        p.join()

        if result is not None:
            raise result

Run it twice:

    configure_logging()

    print('first run:')
    run_spider(QuotesSpider)

    print('\nsecond run:')
    run_spider(QuotesSpider)

Result:

    first run:
    “The person, be it gentleman or lady, who has not pleasure in a good novel, must be intolerably stupid.”
    “A day without sunshine is like, you know, night.”
    ...

    second run:
    “The person, be it gentleman or lady, who has not pleasure in a good novel, must be intolerably stupid.”
    “A day without sunshine is like, you know, night.”
    ...
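The key to this answer is the Process + Queue pattern: each call starts a fresh child process, so the reactor starts and dies inside that child, and any exception is shipped back through the queue and re-raised in the parent. The pattern itself can be exercised without Scrapy; a minimal sketch with a placeholder job in place of the crawl (`run_in_subprocess` and `_worker` are illustrative names, not part of any library):

```python
from multiprocessing import Process, Queue

def _worker(q, job, args):
    # Child process: run the job and report the outcome through the queue.
    try:
        job(*args)
        q.put(None)        # success: nothing to report
    except Exception as e:
        q.put(e)           # failure: ship the exception back to the parent

def run_in_subprocess(job, *args):
    # Parent process: start a fresh child, wait for its report, re-raise on error.
    q = Queue()
    p = Process(target=_worker, args=(q, job, args))
    p.start()
    result = q.get()
    p.join()
    if result is not None:
        raise result

if __name__ == '__main__':
    run_in_subprocess(print, 'first run')    # works
    run_in_subprocess(print, 'second run')   # works again: each call gets a fresh process
```

Each call is independent, which is exactly why the reactor can "run" more than once this way: it is never restarted, only started once per short-lived process.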

Originally posted by Ferrard; the translation follows the CC BY-SA 4.0 license.

This is what helped me beat the ReactorNotRestartable error: the last answer from the author of the question.

  1. pip install crochet

  2. add from crochet import setup

  3. call setup() at the top of the file

  4. remove two lines:

     a) d.addBoth(lambda _: reactor.stop())

     b) reactor.run()

I had the same problem with this error, and spent more than 4 hours solving it, reading every question about it here. Finally I found that answer - and I'm sharing it; this is how I solved the problem. The only meaningful lines left from the Scrapy docs are the last two lines in my code:

    # some more imports
    from importlib import import_module
    from scrapy.crawler import CrawlerRunner
    from scrapy.utils.project import get_project_settings
    from crochet import setup
    setup()

    def run_spider(spiderName):
        module_name = "first_scrapy.spiders.{}".format(spiderName)
        scrapy_var = import_module(module_name)   # do some dynamic import of the selected spider
        spiderObj = scrapy_var.mySpider()         # get the mySpider object from the spider module
        crawler = CrawlerRunner(get_project_settings())   # from the Scrapy docs
        crawler.crawl(spiderObj)                          # from the Scrapy docs

This code lets me choose which spider to run just by passing its name to the run_spider function, and after the crawl finishes, pick another spider and run it again.
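The dynamic-import step in run_spider (import_module plus an attribute lookup) is plain standard-library Python and can be tried on its own. A small sketch, with a stdlib module standing in for the first_scrapy.spiders package (`load_class` is an illustrative helper name, not a library function):

```python
from importlib import import_module

def load_class(module_name, class_name):
    # Import a module by its dotted name, then pull the class out of it -
    # the same pattern run_spider() uses to pick a spider at runtime.
    module = import_module(module_name)
    return getattr(module, class_name)

# Stdlib stand-in: fetch collections.Counter by name and instantiate it.
Counter = load_class("collections", "Counter")
counts = Counter("scrapy")
print(counts["r"])   # -> 1
```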

Hope this helps somebody, as it helped me :)

Originally posted by Chiefir; the translation follows the CC BY-SA 4.0 license.
