How do you loop to generate the page URLs to crawl in Scrapy?
For example, the start_requests method in the demo below writes out page 1 and page 2 by hand:
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
If there are 50 pages, with these URLs:
http://www.example.com/page/1
http://www.example.com/page/2
...
http://www.example.com/page/50
How do you generate these URLs, and what should the for loop look like?
Actually, your demo doesn't need yield at all; just shove the URLs straight into the built-in start_urls.
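For example, here is a minimal sketch of that start_urls approach, assuming the 50 example.com pages from the question (the file-name pattern is just a placeholder):

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    # Build all 50 URLs up front with a list comprehension; Scrapy's
    # default start_requests() turns each entry into a Request and
    # sends every response to parse().
    start_urls = ['http://www.example.com/page/%d' % i for i in range(1, 51)]

    def parse(self, response):
        # These URLs have no trailing slash, so the page number is the
        # last path segment (the demo's URLs needed [-2] instead).
        page = response.url.split("/")[-1]
        filename = 'page-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)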
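And to answer the for-loop question directly: if you'd rather keep start_requests, loop over range(1, 51). This sketch is a drop-in replacement for the start_requests method in the demo above:

    def start_requests(self):
        # range(1, 51) yields 1 through 50 inclusive.
        for i in range(1, 51):
            url = 'http://www.example.com/page/%d' % i
            yield scrapy.Request(url=url, callback=self.parse)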