I'm a beginner following a tutorial to write a Scrapy spider. When I use the xpath().extract()[0] pattern to grab content, I get IndexError: list index out of range. Could someone tell me how to fix this? Urgent, waiting online. (I tried removing .extract()[0], but then it raises TypeError: Request url must be str or unicode.) Code below:
cnblog_spider.py
# -*- coding:utf-8 -*-
# !/usr/bin/env python
import scrapy
from bs4 import BeautifulSoup
from scrapy import Selector
from p1.items import CnblogsSpiderItem

class CnblogsSpider(scrapy.spiders.Spider):
    name = "cnblogs"  # the spider's name
    allowed_domains = ["cnblogs.com"]  # allowed domains
    start_urls = [
        "http://www.cnblogs.com/qiyeboy/default.html?page=1"
    ]

    def parse(self, response):
        # parse the listing page:
        # first grab every post block
        papers = response.xpath(".//*[@class='day']")  # .extract()
        # then pull the fields out of each post
        # soup = BeautifulSoup(papers, "html.parser", from_encoding="utf-8")
        # print papers
        for paper in papers:
            url = paper.xpath(".//*[@class='pastTitle']/a/@href").extract()[0]
            title = paper.xpath(".//*[@class='pastTitle']/a").extract()[0]
            time = paper.xpath(".//*[@class='dayTitle']/a").extract()[0]
            content = paper.xpath(".//*[@class='postCon']/a").extract()[0]
            # print url, title, time, content
            item = CnblogsSpiderItem(url=url, title=title, time=time, content=content)
            request = scrapy.Request(url=url, callback=self.parse_body)
            request.meta['item'] = item  # stash the item on the request
            yield request
            # yield item
        next_page = Selector(response).re(u'<a href="(\S*)">下一页</a>')  # "下一页" is the "next page" link text
        if next_page:
            yield scrapy.Request(url=next_page[0], callback=self.parse)

    def parse_body(self, response):
        item = response.meta['item']
        body = response.xpath(".//*[@class='postBody']")
        item['cimage_urls'] = body.xpath('.//img//@src').extract()
        yield item
It looks like the spider simply isn't getting any data. The best way to confirm is to set a breakpoint at xpath().extract()[0] and check whether the list is actually non-empty, as in the sketch below.
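For instance, here is a minimal defensive rewrite of the first extraction (just a sketch, assuming Scrapy >= 1.0, where SelectorList.extract_first() is available). It keeps the spider alive and logs which page failed instead of crashing:

        for paper in papers:
            # extract_first() returns None instead of raising IndexError
            # when the xpath matches no nodes
            url = paper.xpath(".//*[@class='pastTitle']/a/@href").extract_first()
            if url is None:
                # nothing matched: log the page so you can inspect it, then skip
                self.logger.warning("pastTitle xpath matched nothing on %s", response.url)
                continue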
I've run into this problem before too: my xpath was written wrong, so it never matched any page data.
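A quick way to check which selector is failing is to open the page in scrapy shell and run each xpath by hand. Note that cnblogs post titles normally carry the class postTitle, so the pastTitle in the code above may well be the typo that leaves the list empty:

scrapy shell "http://www.cnblogs.com/qiyeboy/default.html?page=1"
# then test each selector interactively; an empty list means that xpath matched nothing
>>> response.xpath(".//*[@class='day']")
>>> response.xpath(".//*[@class='pastTitle']/a/@href").extract()
>>> response.xpath(".//*[@class='postTitle']/a/@href").extract()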