How can this Scrapy loop be simplified?

The official documentation has an example like this:
Docs: https://docs.scrapy.org/en/la...

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

Question:
If the page number in urls runs from 1 to 100, is there a more concise way to write this?

 urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
            # ...
            'http://quotes.toscrape.com/page/100/',
        ]
2 Answers

A quick-and-dirty approach:

urls = ['http://quotes.toscrape.com/page/' + str(i) + '/' for i in range(1,101)]
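The same comprehension reads a little cleaner with an f-string (Python 3.6+); a minimal check that it produces the expected 100 URLs:

```python
# F-string variant of the comprehension above; builds the same list
# as writing out page/1/ through page/100/ by hand.
urls = [f'http://quotes.toscrape.com/page/{i}/' for i in range(1, 101)]

print(len(urls))   # 100
print(urls[0])     # http://quotes.toscrape.com/page/1/
print(urls[-1])    # http://quotes.toscrape.com/page/100/
```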

xrange and yield don't build the whole list up front, which reduces memory usage. (Note: xrange is Python 2 only; in Python 3, range is already lazy, so just use range.)

baseUrl = "http://quotes.toscrape.com/page/%d/"

for i in range(1, 101):  # on Python 2, use xrange(1, 101)
    url = baseUrl % i
    yield scrapy.Request(url=url, callback=self.parse)
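A small self-contained sketch of the same lazy idea in Python 3, where `range` already avoids materializing the sequence; `page_urls` is a hypothetical helper name, not part of Scrapy:

```python
BASE = "http://quotes.toscrape.com/page/{}/"

def page_urls(first=1, last=100):
    # Generator: yields one URL at a time instead of holding a
    # 100-element list in memory.
    for i in range(first, last + 1):
        yield BASE.format(i)

urls = list(page_urls())
print(len(urls))   # 100
print(urls[0])     # http://quotes.toscrape.com/page/1/
```

Inside a spider this would drive the loop as `for url in page_urls(): yield scrapy.Request(url=url, callback=self.parse)`. Alternatively, you can drop `start_requests` entirely: if a spider defines a `start_urls` class attribute (e.g. `start_urls = list(page_urls())`), Scrapy's default `start_requests` yields a request for each URL with `self.parse` as the callback.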