pyspider: how to lower the crawl frequency and throttle the for loop in on_start?

The docs say @every only controls how often the on_start method itself is executed. I'm brute-forcing my way through a range of URLs, and what I need is to throttle the for loop inside on_start.

def on_start(self):
    for company_id in range(133, 160999 + 1):  # brute-force enumeration of company ids
        self.crawl('https://www.lagou.com/gongsi/' + str(company_id) + '.html',
                   js_script="""function(){setTimeout("$('.text_over').click()", 1000);}""",
                   fetch_type='js', callback=self.index_page,
                   save={'companyId': company_id})

So how can I control how fast the for loop runs, e.g. crawl one URL per second? The loop executes so quickly that it creates a huge pile of tasks in an instant, and my IP got banned. I know brute-forcing the URLs like this is crude, so if anyone has a better approach, please let me know.

I'm currently using a proxy IP that only handles 5 HTTP requests per second. The callback passed to self.crawl also calls self.crawl itself, and I don't know when those callbacks get executed, so I don't really understand the request queue, or how many seconds apart the URLs in on_start should be crawled.

1 Answer

The crawl rate is controlled by the rate and burst settings, using a token-bucket algorithm.

  • rate - how many requests are issued per second

  • burst - consider this situation: with rate/burst = 0.1/3, the spider crawls 1 page every 10 seconds. Suppose all tasks are finished and the project is checking the last-updated items every minute. If 3 new items are found, pyspider will "burst" and crawl those 3 tasks immediately instead of waiting 3*10 seconds; a fourth task, however, still has to wait 10 seconds. (A sketch of this token-bucket behaviour follows below.)
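Here is a minimal token-bucket sketch (my own illustration, not pyspider's actual scheduler code; the class name and structure are made up for clarity) showing how the rate and burst numbers above interact:

import time

class TokenBucket:
    # Illustrative only: mirrors the rate/burst semantics described above.
    def __init__(self, rate, burst):
        self.rate = rate         # tokens (requests) added per second
        self.burst = burst       # maximum number of tokens the bucket can hold
        self.tokens = burst      # the bucket starts full
        self.last = time.time()

    def consume(self):
        now = time.time()
        # refill in proportion to elapsed time, capped at burst
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True          # a task may be dispatched now
        return False             # no token left, the caller has to wait

# rate/burst = 0.1/3: up to 3 tasks fire immediately, then one every 10 seconds
bucket = TokenBucket(rate=0.1, burst=3)

In pyspider the rate and burst values are set per project (the rate/burst field in the dashboard, e.g. "1/3"), so for the question above you could set rate to 1 with a small burst instead of trying to slow down the for loop by hand. If you additionally want to spread out the tasks created in on_start, self.crawl accepts an exetime parameter (a unix timestamp at which the task should be executed); an untested sketch that staggers roughly one task per second:

import time

def on_start(self):
    for i, company_id in enumerate(range(133, 160999 + 1)):
        self.crawl('https://www.lagou.com/gongsi/' + str(company_id) + '.html',
                   callback=self.index_page, save={'companyId': company_id},
                   exetime=time.time() + i)  # each task scheduled one second later than the previous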
