How do I generate the page URLs to crawl in a loop with Scrapy?

For example, in the start_requests method of the demo below, page 1 and page 2 are written out by hand:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

If there are 50 pages, with URLs:

http://www.example.com/page/1
http://www.example.com/page/2
...
http://www.example.com/page/50

How do I generate these URLs? What should the for loop syntax look like?

2 Answers
urls = ["http://www.example.com/page/" + str(x) + "/" for x in range(1, 51)]

Actually, for this demo you don't even need yield; just use the built-in start_urls attribute:

start_urls = ["http://www.example.com/page/" + str(x) for x in range(1, 51)]
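
A minimal sketch of the whole spider written that way (assuming the quotes.toscrape.com pages from the question; the parse method is unchanged, and Scrapy calls it by default for every URL in start_urls):

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    # generate the 50 page URLs up front; no start_requests needed
    start_urls = ["http://quotes.toscrape.com/page/%d/" % i for i in range(1, 51)]

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)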

Use string formatting:

for i in range(1, 51):
    yield scrapy.Request('http://quotes.toscrape.com/page/%d/' % i)
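
Putting that loop into the question's spider, start_requests would become (a sketch; parse is the same method defined in the question, and since it is Scrapy's default callback, naming it explicitly is optional):

    def start_requests(self):
        # pages 1 through 50, generated instead of listed by hand
        for i in range(1, 51):
            yield scrapy.Request(
                'http://quotes.toscrape.com/page/%d/' % i,
                callback=self.parse,
            )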