
Hitting a verification prompt when scraping WeChat Official Account articles on Linux

This is the People's Daily link I want to scrape: http://mp.weixin.qq.com/profile?src=3&timestamp=1492739045&ver=1&signature=bSSQMK1LY77M4O22qTi37cbhjhwNV7C9V4aor9HLhAvbGc2ybWX*qg3WqxntZ7iq0kvYe87oPpcSJKFdmGMx5g==
1: Accessing it in a browser works fine.
2: Accessing it from Linux prompts for verification. Here is the minimal code:

import urllib2

url = 'http://mp.weixin.qq.com/profile?src=3&timestamp=1492738883&ver=1&signature=bSSQMK1LY77M4O22qTi37cbhjhwNV7C9V4aor9HLhAvbGc2ybWX*qg3WqxntZ7iq2xTLUTfxAMzK79UGvalY1A=='
response = urllib2.urlopen(url)
print(response.read())

The result of the request is as follows:

A note on how the Official Account link was obtained:
1: First visit http://weixin.sogou.com/weixi...
2: Then grab the People's Daily account link from the results and follow the redirect.
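That lookup step can be sketched roughly like this; the search path and `query`/`type` parameters are assumptions about Sogou's WeChat search form, not taken from the original post. The request is only prepared, not sent, so the final URL can be inspected first:

```python
# coding: utf-8
import requests

# Hypothetical search parameters -- adjust to match the actual Sogou search form.
params = {'type': '1', 'query': u'人民日报'}
headers = {'User-Agent': 'Mozilla/5.0'}

# Build the request without sending it, to see the encoded query string.
prepared = requests.Request(
    'GET', 'http://weixin.sogou.com/weixin', params=params, headers=headers
).prepare()
print(prepared.url)

# To actually fetch the results page:
# resp = requests.get(prepared.url, headers=headers)
```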

4 Answers


Are you scraping without simulating the request headers at all? Try setting a browser-like request header first.

# coding: utf-8
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0'}

url = 'http://mp.weixin.qq.com/profile?src=3&timestamp=1492739045&ver=1&signature=bSSQMK1LY77M4O22qTi37cbhjhwNV7C9V4aor9HLhAvbGc2ybWX*qg3WqxntZ7iq0kvYe87oPpcSJKFdmGMx5g=='
r = requests.get(url, headers=headers)
print(r.text)
# coding: utf-8

import urllib2

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0'}

url = 'http://mp.weixin.qq.com/profi...&timestamp=1492739045&ver=1&signature=bSSQMK1LY77M4O22qTi37cbhjhwNV7C9V4aor9HLhAvbGc2ybWX*qg3WqxntZ7iq0kvYe87oPpcSJKFdmGMx5g=='

request = urllib2.Request(url, headers=headers)
response = urllib2.urlopen(request)
print(response.read())

DonnieGo · April 21


After adding the header to the request, this is the error that comes back now. Could anyone advise further?
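The error screenshot did not survive the page extraction. When debugging this kind of block, it helps to print the status code and response headers rather than only the body. A minimal sketch, using a tiny local HTTP server as a stand-in for the real WeChat endpoint (the server and its body text are assumptions for illustration only):

```python
# coding: utf-8
import threading
import requests
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in server so the debugging steps can be shown without network access.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/html; charset=utf-8')
        self.end_headers()
        self.wfile.write(b'Environment check: verification required')

    def log_message(self, *args):
        pass  # keep output quiet

server = HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

headers = {'User-Agent': 'Mozilla/5.0'}
resp = requests.get('http://127.0.0.1:%d/' % server.server_port, headers=headers)

# Inspect status, headers, and the start of the body before parsing further.
print(resp.status_code)                    # 200
print(resp.headers.get('Content-Type'))    # text/html; charset=utf-8
print(resp.text[:50])

server.shutdown()
```

A redirect (301/302) or a body containing a verification notice, rather than article HTML, points to the anti-bot check rather than a coding error.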


Using requests works for me; local environment: Mac OS X, Python 3.6.1.

import requests

headers = {'user-agent': 'Mozilla/5.0'}
respon = requests.get('http://mp.weixin.qq.com/profile?src=3&timestamp=1492831080&ver=1&signature=bSSQMK1LY77M4O22qTi37cbhjhwNV7C9V4aor9HLhAvbGc2ybWX*qg3WqxntZ7iqB7vsPUlOS3zhl-8n5FUODg==', headers=headers)
respon.encoding = 'utf-8'  # force UTF-8 in case the response omits a charset
print(respon.text)

The content is on the line marked with the red box.
