1. Project Overview, Development Tools, and Environment Setup
1.1 Project Overview
This crawler scrapes the news pages of cnblogs (博客园) and stores the data in a database. The workflow is:
1. Open the news list page and scrape its data: the title (text and link), the thumbnail (image and image URL), and the tags.
2. Following a depth-first traversal, follow each title link from the list page into the detail page and scrape its title, body, publish time, and category tags (all of these come from the static page), as well as the view count, comment count, and like count (also called the recommendation count), which are loaded with dynamic web techniques.
3. Design the table structure, i.e. define the fields and field types, and write the insert function that stores the data in the database.
1.2 Development Tools
PyCharm 2019.3
Navicat for MySQL 11.1.13
1.3 Environment Setup
Open a command prompt, cd to the directory where you want to keep your virtual environments, and run pip install virtualenv (on Windows also install virtualenvwrapper-win, which provides the mkvirtualenv and workon commands used below).
With that in place, create a virtual environment for the new project against a specific Python 3.6 interpreter:
mkvirtualenv -p D:\Python36-64_install_location\python.exe article_spider
Here D:\Python36-64_install_location\ is the installation path of Python 3.6, and article_spider is the name of the new project's virtual environment.
To enter the virtual environment (article_spider), run workon article_spider.
Inside the virtual environment, some Python packages time out or download very slowly, so we use the Douban PyPI mirror. Install the Scrapy crawling framework with:
pip install -i https://pypi.douban.com/simple/ scrapy
On some Windows systems the installation still fails. In that case, open the following page:
https://www.lfd.uci.edu/~gohl...
It hosts prebuilt Windows binaries for the packages that commonly fail to build. Press Ctrl+F to find the package you need, download it, open a command prompt, cd to the download directory, and run
pip install -i https://pypi.douban.com/simple <downloaded file name, including the extension>
With these two pip install approaches you can handle the installation of virtually any Python package.
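As a quick sanity check (a small extra step, not part of the original walkthrough), you can confirm that Scrapy is importable inside the article_spider environment:

import scrapy

# Prints the installed Scrapy version; an ImportError here means the package
# was not installed into the currently active virtual environment.
print(scrapy.__version__)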
2. Database Design
The table contains the following fields:
title, URL, URL ID, cached image path, image URL, like count, comment count, view count, tags, content, and publish date.
The field types are:
No. | Field | Data Type | Primary Key | Description |
---|---|---|---|---|
1 | Title | varchar(255) | No | Title |
2 | Url | varchar(500) | No | Article URL |
3 | Url_object_id | varchar(50) | Yes | URL ID (MD5 of the URL) |
4 | Front_image_path | varchar(200) | No | Cached image path |
5 | Front_image_url | varchar(500) | No | Image URL |
6 | Praise_nums | int(11) | No | Like count |
7 | Comment_nums | int(11) | No | Comment count |
8 | Fav_nums | int(11) | No | View count |
9 | Tags | varchar(255) | No | Tags |
10 | Content | longtext | No | Content |
11 | Create_date | datetime | No | Publish date |
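For reference, here is a minimal sketch of the corresponding DDL, wrapped in a short MySQLdb script. The table name cnblogs_article and the connection parameters are taken from pipelines.py and settings.py shown later; the exact NOT NULL constraints and defaults are assumptions:

import MySQLdb

# Create the cnblogs_article table that the insert statement in pipelines.py
# expects. Column names and types mirror the table above.
create_sql = """
CREATE TABLE IF NOT EXISTS cnblogs_article (
    title VARCHAR(255) NOT NULL,
    url VARCHAR(500) NOT NULL,
    url_object_id VARCHAR(50) NOT NULL PRIMARY KEY,
    front_image_path VARCHAR(200),
    front_image_url VARCHAR(500),
    praise_nums INT(11) DEFAULT 0,
    comment_nums INT(11) DEFAULT 0,
    fav_nums INT(11) DEFAULT 0,
    tags VARCHAR(255),
    content LONGTEXT,
    create_date DATETIME
) DEFAULT CHARSET=utf8;
"""

conn = MySQLdb.connect(host="127.0.0.1", user="root", passwd="root",
                       db="article_spider", charset="utf8")
cursor = conn.cursor()
cursor.execute(create_sql)
conn.commit()
conn.close()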
3. Code Implementation
main.py adds the project directory to sys.path and executes the Scrapy command that starts the spider (named cnblogs):
import sys
import os
from scrapy.cmdline import execute # run Scrapy commands from a script
if __name__ == '__main__':
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
execute(["scrapy","crawl","cnblogs"])
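The call above is equivalent to running scrapy crawl cnblogs from a terminal in the project root; wrapping it in main.py just makes it convenient to start and debug the spider from PyCharm.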
Next, write the core code in cnblogs.py (note that the spider name defined here must match the name passed to the crawl command in main.py):
import re
import json
import scrapy
from urllib import parse
from scrapy import Request
from CnblogsSpider.utils import common
from CnblogsSpider.items import CnblogsArticleItem, ArticleItemLoader
class CnblogsSpider(scrapy.Spider):
name = 'cnblogs'
allowed_domains = ['news.cnblogs.com'] # allowed_domains: the domains the spider is allowed to crawl
start_urls = ['http://news.cnblogs.com/'] # when main.py starts the spider, the HTML of each start URL is downloaded and handed to parse()
custom_settings = { # per-spider override of settings.py, so cookies are enabled only for this spider
"COOKIES_ENABLED":True
}
def start_requests(self): # entry point: simulate a login here to obtain cookies
import undetected_chromedriver.v2 as uc
browser=uc.Chrome() # launch Chrome automatically
browser.get("https://account.cnblogs.com/signin")
input("Log in in the browser window, then press Enter to continue: ")
cookies=browser.get_cookies() # grab the cookies and convert them to a dict
cookie_dict={}
for cookie in cookies:
cookie_dict[cookie['name']]=cookie['value']
for url in self.start_urls:
headers ={
'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.81 Safari/537.36'
} # set a User-Agent so the site is less likely to recognize the request as coming from a crawler
yield scrapy.Request(url, cookies=cookie_dict, headers=headers, dont_filter=True) # hand the cookies over to Scrapy
# end of the simulated-login code
def parse(self, response):
url = response.xpath('//div[@id="news_list"]//h2[@class="news_entry"]/a/@href').extract_first("")
post_nodes = response.xpath('//div[@class="news_block"]') # a SelectorList of all news blocks on the page
for post_node in post_nodes: # each post_node is a Selector
image_url = post_node.xpath('.//div[@class="entry_summary"]/a/img/@src').extract_first("") # select the element with XPath and extract the image URL as a string
if image_url.startswith("//"):
image_url="https:"+image_url
post_url = post_node.xpath('.//h2[@class="news_entry"]/a/@href').extract_first("") # note the leading dot: it selects within the current node rather than the whole document
yield Request(url=parse.urljoin(response.url, post_url), meta={"front_image_url": image_url},
callback=self.parse_detail)
# extract the next-page URL and hand it to Scrapy to download
next_url = response.xpath('//a[contains(text(),"Next >")]/@href').extract_first("")
yield Request(url=parse.urljoin(response.url, next_url), callback=self.parse)
def parse_detail(self, response):
match_re = re.match(".*?(\d+)", response.url)
if match_re:
post_id = match_re.group(1)
# title = response.xpath('//div[@id="news_title"]/a/text()').extract_first("")
# create_date = response.xpath('//*[@id="news_info"]//*[@class="time"]/text()').extract_first("")
# match_re = re.match(".*?(\d+.*)", create_date)
# if match_re:
# create_date = match_re.group(1)
# content = response.xpath('//div[@id="news_content"]').extract()[0]
# tag_list = response.xpath('//div[@class="news_tags"]/a/text()').extract()
# tags = ",".join(tag_list)
# article_item = CnblogsArticleItem()
# article_item["title"] = title
# article_item["create_date"] = create_date
# article_item["content"] = content
# article_item["tags"] = tags
# article_item["url"] = response.url
# if response.meta.get("front_image_url", ""):
# article_item["front_image_url"] = [response.meta.get("front_image_url", "")]
# else:
# article_item["front_image_url"] = []
item_loader=ArticleItemLoader(item=CnblogsArticleItem(),response=response)
# item_loader.add_xpath('title','//div[@id="news_title"]/a/text()')
# item_loader.add_xpath('create_date', '//*[@id="news_info"]//*[@class="time"]/text()')
# item_loader.add_xpath('content', '//div[@id="news_content"]')
# item_loader.add_xpath('tags', '//div[@class="news_tags"]/a/text()')
item_loader.add_xpath("title", "//div[@id='news_title']/a/text()")
item_loader.add_xpath("create_date", "//*[@id='news_info']//*[@class='time']/text()")
item_loader.add_xpath("content", "//div[@id='news_content']")
item_loader.add_xpath("tags", "//div[@class='news_tags']/a/text()")
item_loader.add_value("url",response.url)
if response.meta.get("front_image_url", ""):
item_loader.add_value("front_image_url",response.meta.get("front_image_url", ""))
# article_item=item_loader.load_item()
yield Request(url=parse.urljoin(response.url, "/NewsAjax/GetAjaxNewsInfo?contentId={}".format(post_id)),
meta={"article_item": item_loader,"url":response.url}, callback=self.parse_nums)
def parse_nums(self, response):
j_data = json.loads(response.text)
item_loader = response.meta.get("article_item", "")
# praise_nums = j_data["DiggCount"]
# fav_nums = j_data["TotalView"]
# comment_nums = j_data["CommentCount"]
item_loader.add_value("praise_nums",j_data["DiggCount"])
item_loader.add_value("fav_nums", j_data["TotalView"])
item_loader.add_value("comment_nums", j_data["CommentCount"])
item_loader.add_value("url_object_id", common.get_md5(response.meta.get("url","")))
# article_item["praise_nums"] = praise_nums
# article_item["fav_nums"] = fav_nums
# article_item["comment_nums"] = comment_nums
# article_item["url_object_id"] = common.get_md5(article_item["url"])
article_item = item_loader.load_item()
yield article_item
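As a side note, the regular expression in parse_detail pulls the numeric post ID out of the detail-page URL. A tiny illustration (the concrete URL and ID are made up to match the news.cnblogs.com detail-URL pattern):

import re

# The lazy ".*?" expands only until the first run of digits, which is then
# captured greedily by "(\d+)" as the post id.
match_re = re.match(r".*?(\d+)", "https://news.cnblogs.com/n/673378/")
if match_re:
    print(match_re.group(1))  # -> 673378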
The dynamically loaded data mentioned above is handled as follows:
Press F12 to open the developer tools, refresh the page, and look in the Network tab for a request whose name contains Ajax. Click it to see its URL, and open that URL directly: it returns the counts as JSON data. The key code is:
def parse_detail(self, response):
match_re = re.match(".*?(\d+)", response.url)
if match_re:
post_id = match_re.group(1)
item_loader.add_value("url",response.url)
yield Request(url=parse.urljoin(response.url, "/NewsAjax/GetAjaxNewsInfo?contentId={}".format(post_id)),
meta={"article_item": item_loader,"url":response.url}, callback=self.parse_nums)
def parse_nums(self, response):
j_data = json.loads(response.text)
item_loader = response.meta.get("article_item", "")
item_loader.add_value("praise_nums",j_data["DiggCount"])
item_loader.add_value("fav_nums", j_data["TotalView"])
item_loader.add_value("comment_nums", j_data["CommentCount"])
item_loader.add_value("url_object_id", common.get_md5(response.meta.get("url","")))
article_item = item_loader.load_item()
yield article_item
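You can also check the same Ajax endpoint by hand, outside of Scrapy. A minimal sketch using the requests library (the contentId value is a placeholder, and depending on the site's protection you may need the same cookies and headers the spider sends):

import json
import requests

# Fetch the statistics JSON for a single news post; 673378 is a placeholder id.
resp = requests.get("https://news.cnblogs.com/NewsAjax/GetAjaxNewsInfo?contentId=673378")
j_data = json.loads(resp.text)
# These are the three fields parse_nums reads.
print(j_data["DiggCount"], j_data["TotalView"], j_data["CommentCount"])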
items.py processes the scraped data:
import re
import scrapy
from scrapy.loader import ItemLoader
from scrapy.loader.processors import Join, MapCompose, TakeFirst, Identity
class CnblogsspiderItem(scrapy.Item):
# define the fields for your item here like:
# name = scrapy.Field()
pass
def date_convert(value):
match_re = re.match(".*?(\d+.*)", value)
if match_re:
return match_re.group(1)
else:
return "1970-07-01"
# def remove_tags(value):
# # drop the "评论" (comments) entry that can appear among the extracted tag values; the cleaned values would then be passed on via MapCompose()
# if "评论" in value:
# return ""
# else:
# return value
class ArticleItemLoader(ItemLoader):
default_output_processor = TakeFirst() # output only the first value of each extracted list instead of the whole list
class CnblogsArticleItem(scrapy.Item):
title=scrapy.Field()
create_date=scrapy.Field(
input_processor=MapCompose(date_convert) # clean the raw date string with the regex in date_convert
)
url=scrapy.Field()
url_object_id=scrapy.Field()
front_image_url=scrapy.Field(
output_processor=Identity() # keep the original list format (the image pipeline expects a list of URLs)
)
front_image_path=scrapy.Field()
praise_nums=scrapy.Field()
comment_nums=scrapy.Field()
fav_nums=scrapy.Field()
tags=scrapy.Field(
output_processor=Join(separator=",") # join the list of tags into one comma-separated string
)
content=scrapy.Field()
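To see what the processors actually do to extracted values, here is a small standalone sketch; the sample values are invented for illustration, and date_convert is the function defined in items.py above:

from scrapy.loader.processors import Join, MapCompose, TakeFirst, Identity
from CnblogsSpider.items import date_convert

# TakeFirst keeps only the first non-empty value of the extracted list.
print(TakeFirst()(["2022-03-01", "ignored"]))            # -> 2022-03-01
# MapCompose applies date_convert to every extracted value.
print(MapCompose(date_convert)(["发布于 2022-03-01"]))    # -> ['2022-03-01']
# Join concatenates all tag values into one comma-separated string.
print(Join(separator=",")(["python", "scrapy"]))          # -> python,scrapy
# Identity passes the list through unchanged, as the image pipeline expects.
print(Identity()(["https://example.com/a.png"]))          # -> ['https://example.com/a.png']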
pipelines.py defines how items are exported and written to the database:
import scrapy
import requests
import MySQLdb
from MySQLdb.cursors import DictCursor
from twisted.enterprise import adbapi
from scrapy.exporters import JsonItemExporter
from scrapy.pipelines.images import ImagesPipeline
class CnblogsSpiderPipeline(object):
def process_item(self, item, spider):
return item
class ArticleImagePipeline(ImagesPipeline):
def get_media_requests(self, item, info):
for image_url in item['front_image_url']:
yield scrapy.Request(image_url)
def item_completed(self, results, item, info): # hook called once the images for an item have been downloaded
if "front_image_url" in item:
image_file_path=""
for ok,value in results:
image_file_path=value["path"]
item["front_image_path"]=image_file_path
return item
# def get_media_requests(self, item, info):
# for image_url in item['front_image_url']:
# yield self.Request(image_url)
# class ArticleImagePipeline(ImagesPipeline):
# def item_completed(self, results, item, info):
# if "front_image_url" in item:
# for ok, value in results:
# image_file_path = value["path"]
# item["front_image_path"] = image_file_path
#
# return item
class JsonExporterPipeline(object):
# step 1: open the output file
def __init__(self):
self.file = open("articleexport.json", "wb") # "w" overwrites, "a" would append
self.exporter=JsonItemExporter(self.file,encoding="utf-8",ensure_ascii=False)
self.exporter.start_exporting()
def process_item(self, item, spider):
self.exporter.export_item(item)
return item
def close_spider(self, spider): # Scrapy calls this automatically when the spider finishes
self.exporter.finish_exporting()
self.file.close()
class MysqlTwistedPipline(object):
def __init__(self, dbpool):
self.dbpool = dbpool
@classmethod
def from_settings(cls, settings):
dbparms = dict(
host = settings["MYSQL_HOST"],
db = settings["MYSQL_DBNAME"],
user = settings["MYSQL_USER"],
passwd = settings["MYSQL_PASSWORD"],
charset='utf8',
cursorclass=DictCursor,
use_unicode=True,
)
dbpool = adbapi.ConnectionPool("MySQLdb", **dbparms)
return cls(dbpool)
def process_item(self, item, spider):
# use twisted's adbapi to run the MySQL insert asynchronously
query = self.dbpool.runInteraction(self.do_insert, item)
query.addErrback(self.handle_error, item, spider) # handle insertion errors
return item
def handle_error(self, failure, item, spider):
# handle exceptions raised by the asynchronous insert
print(failure)
def do_insert(self, cursor, item):
# perform the actual insert
# different item types could build their own SQL statements here before inserting into MySQL
# insert_sql, params = item.get_insert_sql()
insert_sql = """
insert into cnblogs_article(title, url, url_object_id, front_image_url, front_image_path, praise_nums, comment_nums, fav_nums, tags, content, create_date)
values (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s) ON DUPLICATE KEY UPDATE praise_nums=VALUES(praise_nums)
""" # on a duplicate primary key, refresh praise_nums instead of raising an error
# build the parameter list step by step so problems are easier to trace
params = list()
# params.append(item["title"]) # would raise KeyError for missing fields; use .get() with defaults so empty values are allowed
params.append(item.get("title", ""))
params.append(item.get("url", ""))
params.append(item.get("url_object_id", ""))
# params.append(item.get("front_image_url", "")) # front_image_url arrives as a list, so join it into a string (an empty list becomes an empty string)
front_image = ",".join(item.get("front_image_url", []))
params.append(front_image)
params.append(item.get("front_image_path", ""))
params.append(item.get("praise_nums", 0))
params.append(item.get("comment_nums", 0))
params.append(item.get("fav_nums", 0))
params.append(item.get("tags", ""))
params.append(item.get("content", ""))
params.append(item.get("create_date", "1970-07-01"))
cursor.execute(insert_sql, tuple(params)) # convert the list to a tuple for execute()
settings.py holds the project-wide configuration:
import os
# Scrapy settings for CnblogsSpider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'CnblogsSpider'
SPIDER_MODULES = ['CnblogsSpider.spiders']
NEWSPIDER_MODULE = 'CnblogsSpider.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'CnblogsSpider (+http://www.yourdomain.com)'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'CnblogsSpider.middlewares.CnblogsspiderSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'CnblogsSpider.middlewares.CnblogsspiderDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
# 'CnblogsSpider.pipelines.CnblogsspiderPipeline': 300,
#}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
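# Enabled pipelines: lower numbers run first, so ArticleImagePipeline downloads the images and fills front_image_path before the MySQL and JSON pipelines receive the item.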
ITEM_PIPELINES = {
'CnblogsSpider.pipelines.ArticleImagePipeline':1,
'CnblogsSpider.pipelines.MysqlTwistedPipline':2,
'CnblogsSpider.pipelines.JsonExporterPipeline':3,
'CnblogsSpider.pipelines.CnblogsSpiderPipeline': 300
}
IMAGES_URLS_FIELD = "front_image_url" # item field that holds the image URLs for the images pipeline
project_dir=os.path.dirname(os.path.abspath(__file__))
IMAGES_STORE=os.path.join(project_dir,'images')
MYSQL_HOST = "127.0.0.1"
MYSQL_DBNAME = "article_spider"
MYSQL_USER = "root"
MYSQL_PASSWORD = "root"
SQL_DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
SQL_DATE_FORMAT = "%Y-%m-%d"
common.py converts a URL into its MD5 hash, which is used as url_object_id:
import hashlib
def get_md5(url):
if isinstance(url,str):
url=url.encode("utf-8")
m=hashlib.md5()
m.update(url)
return m.hexdigest()
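A quick usage check (the URL is only an example):

from CnblogsSpider.utils import common

# Hashing the same URL always yields the same 32-character hex digest, which
# fits the varchar(50) url_object_id primary-key column.
print(common.get_md5("https://news.cnblogs.com/"))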
Finally, open Navicat for MySQL, connect to the database, and run main.py; the crawler starts running and the scraped data goes into the database.