Here is the code:
from scrapy.selector import Selector
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy import Request
from newspider.items import NewspiderItem


class DatamsgSpider(CrawlSpider):
    """docstring for DatamsgSpider"""
    name = 'milspider'

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.start_urls = ["http://news.xinhuanet.com/mil/index.htm"]
        self.rules = (
            Rule(LinkExtractor(allow=(),
                               restrict_xpaths=('//a[@href]')),
                 callback='parse_item',
                 follow=True)
        )
    def parse_item(self, response):
        url = response.url
        print(url)
        repck = "http://news.xinhuanet.com/mil/"
        if repck in url:
            yield Request(url, callback=self.parse_link)
        else:
            pass

    def parse_link(self, response):
        data = NewspiderItem()
        # some extraction logic here
        return data
I want to follow the links on each page and crawl the whole site this way, but when I run it, all I get is:
2017-10-23 11:55:27 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: newspider)
2017-10-23 11:55:27 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'newspider', 'COMMANDS_MODULE': 'newspider.commands', 'CONCURRENT_REQUESTS': 50, 'DOWNLOAD_DELAY': 1, 'DOWNLOAD_TIMEOUT': 15, 'LOG_LEVEL': 'INFO', 'NEWSPIDER_MODULE': 'newspider.spiders', 'SPIDER_MODULES': ['newspider.spiders']}
2017-10-23 11:55:27 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2017-10-23 11:55:27 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-10-23 11:55:27 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-10-23 11:55:27 [scrapy.middleware] INFO: Enabled item pipelines:
['newspider.pipelines.NewspiderPipeline']
2017-10-23 11:55:27 [scrapy.core.engine] INFO: Spider opened
2017-10-23 11:55:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-10-23 11:55:28 [scrapy.core.engine] INFO: Closing spider (finished)
2017-10-23 11:55:28 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 476,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 14346,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 10, 23, 3, 55, 28, 109821),
'log_count/INFO': 13,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 10, 23, 3, 55, 27, 968207)}
2017-10-23 11:55:28 [scrapy.core.engine] INFO: Spider closed (finished)
2017-10-23 11:55:28 [scrapy.core.engine] INFO: Closing spider (finished)
2017-10-23 11:55:28 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 472,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 29402,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 10, 23, 3, 55, 28, 116144),
'log_count/INFO': 10,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 10, 23, 3, 55, 27, 975791)}
2017-10-23 11:55:28 [scrapy.core.engine] INFO: Spider closed (finished)
The rules never take effect and the callback is never called. Why is that, and how do I fix it?
I tested the xpath in scrapy shell and it seems fine, and the overall flow should be fine too. Maybe the hrefs extracted by the xpath need some extra processing afterwards?
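This is roughly the check I ran in scrapy shell (reconstructed from memory, so the exact session may differ slightly):

$ scrapy shell "http://news.xinhuanet.com/mil/index.htm"
>>> from scrapy.linkextractors import LinkExtractor
>>> # same restriction as in my Rule; extract_links shows whether it matches anything
>>> links = LinkExtractor(allow=(), restrict_xpaths=('//a[@href]')).extract_links(response)
>>> len(links)                        # non-zero for me, so the xpath itself finds anchors
>>> [link.url for link in links[:3]]  # Link objects carry absolute urls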
Something like this:
url = 'https://zh.airbnb.com' + sel.xpath('@href').extract()[0]
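If relative hrefs are indeed the problem, I assume response.urljoin would be a cleaner fix than hard-coding the domain prefix. A rough sketch of what I have in mind for the callback (untested; parse_link is my existing method):

for sel in response.xpath('//a[@href]'):
    href = sel.xpath('@href').extract_first()
    # urljoin resolves a relative href against the current page's url
    yield Request(response.urljoin(href), callback=self.parse_link)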