I'm using scrapy-redis with SCHEDULER_PERSIST = True set, yet the redis database is still cleared automatically once the crawl finishes. And even after the redis database has been emptied, the crawl does not stop on its own; it just keeps polling for requests.
My setup uses a seed database: the master pushes request URLs into redis and the slaves read them from source:start_urls; I am not relying on scrapy-redis to feed URLs into redis automatically.
I also ran the example-project bundled with scrapy-redis: after lpush-ing a URL and running scrapy crawl myspider_redis, it likewise never terminates on its own and just keeps spinning idle once the queue is drained.
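For context, the seed URLs are fed in from outside the spider. A minimal sketch of what the master side does (using redis-py; the key source:start_urls matches my spider's redis_key, and the URL is just a placeholder):

import redis

# Connect to the same redis instance the slaves poll (REDIS_HOST/REDIS_PORT below).
r = redis.StrictRedis(host='127.0.0.1', port=6379)

# Push one seed URL onto the list the slave spiders read from.
r.lpush('source:start_urls', 'http://example.com/some/listing')

The lpush test against the example-project is the same operation, just against the myspider:start_urls key.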
My settings are as follows:
SPIDER_MODULES = ['market.spiders']
NEWSPIDER_MODULE = 'market.spiders'
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_QUEUE_CLASS = "scrapy_redis.queue.SpiderPriorityQueue"
SCHEDULER_PERSIST = True
DOWNLOADER_MIDDLEWARES = {
    'market.middleware.UserAgentMiddleware': 401,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 400,
    'market.redirect_middleware.Redirect_Middleware': 500,
}
ITEM_PIPELINES = {
    'market.pipelines.MarketPipeline': 300,
    'scrapy_redis.pipelines.RedisPipeline': 301,
}
LOG_LEVEL = 'DEBUG'
# Introduce an artificial delay to make use of parallelism; lower it to speed
# up the crawl.
DOWNLOAD_DELAY = 1
COMMANDS_MODULE = 'market.commands'
# Redis connection
REDIS_HOST = '127.0.0.1'
REDIS_PORT = 6379
To fix the spider spinning idly after all URLs have been consumed, you can refer to this fix for the idle-run problem, whose approach is sketched below.
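The gist of that fix is a small Scrapy extension that listens for the spider_idle signal (which fires roughly every 5 seconds while the scheduler has nothing to do) and closes the spider once it has been idle too many times in a row. A minimal sketch, assuming the module path market/extensions.py and the setting names MYEXT_ENABLED and IDLE_NUMBER, which are placeholders of mine rather than anything shipped with scrapy-redis:

# market/extensions.py
from scrapy import signals
from scrapy.exceptions import NotConfigured


class RedisSpiderIdleClosedExtension(object):
    """Close a redis-driven spider after too many consecutive idle signals."""

    def __init__(self, idle_number, crawler):
        self.crawler = crawler
        self.idle_number = idle_number
        self.idle_count = 0

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.getbool('MYEXT_ENABLED'):
            raise NotConfigured
        # spider_idle fires about every 5s, so 360 means roughly 30 minutes of idling.
        idle_number = crawler.settings.getint('IDLE_NUMBER', 360)
        ext = cls(idle_number, crawler)
        crawler.signals.connect(ext.spider_idle, signal=signals.spider_idle)
        crawler.signals.connect(ext.request_scheduled, signal=signals.request_scheduled)
        return ext

    def request_scheduled(self, request, spider):
        # Any newly scheduled request means the queue got refilled: reset.
        self.idle_count = 0

    def spider_idle(self, spider):
        self.idle_count += 1
        if self.idle_count > self.idle_number:
            # RedisMixin keeps the spider alive on idle; this overrides that.
            self.crawler.engine.close_spider(spider, 'closed by idle extension')

Then enable it alongside the settings shown above:

EXTENSIONS = {
    'market.extensions.RedisSpiderIdleClosedExtension': 500,
}
MYEXT_ENABLED = True
IDLE_NUMBER = 360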
Alternatively, add something like the following to next_requests in scrapy_redis/spiders.py:
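The snippet itself didn't make it into the post; the usual shape of that change is to track consecutive empty polls at the bottom of RedisMixin.next_requests and shut the spider down once redis has stayed empty for a while. A sketch under that assumption; found, self.logger, and self.crawler already exist there, while empty_fetch_count and the threshold of 10 are placeholders of mine:

# At the end of next_requests, replacing the existing "if found:" tail.
if found:
    self.logger.debug("Read %s requests from '%s'", found, self.redis_key)
    self.empty_fetch_count = 0
else:
    # Nothing came back from redis on this poll.
    self.empty_fetch_count = getattr(self, 'empty_fetch_count', 0) + 1
    if self.empty_fetch_count > 10:
        self.crawler.engine.close_spider(self, 'redis queue exhausted')

Note that this patches the installed library in place, so the extension route above is usually the cleaner choice.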