Python: problem scraping paginated pages

My approach is to first build the list of all page URLs, then request each page, parse the contents with BeautifulSoup, and finally write everything to a CSV file. But I get an error. Why is that? Is something wrong with my approach? Any help would be appreciated. My code is below:

# -*- coding:utf-8 -*-
import requests
from bs4 import BeautifulSoup
import csv

user_agent = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
url = 'http://finance.qq.com'

def get_url(url):
    links = []
    page_number = 1
    while page_number <=36:
        link = url+'/c/gdyw_'+str(page_number)+'.htm'
        links.append(link)
        page_number = page_number + 1
    return links

all_link = get_url(url)

def get_data(all_link):
    response = requests.get(all_link)
    soup = BeautifulSoup(response.text,'lxml')
    soup = soup.find('div',{'id':'listZone'}).findAll('a')
    return soup

def main():
    with open("test.csv", "w") as f:
        f.write("url\t titile\n")
        for item in get_data(all_link):
            f.write("{}\t{}\n".format(url + item.get("href"), item.get_text()))

if __name__ == "__main__":
    main()

Error message:

Traceback (most recent call last):
  File "D:/Python34/write_csv.py", line 33, in <module>
    main()
  File "D:/Python34/write_csv.py", line 29, in main
    for item in get_data(all_link):
  File "D:/Python34/write_csv.py", line 21, in get_data
    response = requests.get(all_link)
  File "D:\Python34\lib\site-packages\requests\api.py", line 71, in get
    return request('get', url, params=params, **kwargs)
  File "D:\Python34\lib\site-packages\requests\api.py", line 57, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 475, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 579, in send
    adapter = self.get_adapter(url=request.url)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 653, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)

3 Answers

You can't just call requests.get on a whole list.

http://docs.python-requests.o...

url – URL for the new Request object.

You should use a for loop and request them one at a time.


Update:

I reworked your program a bit: it runs on Python 3 at least. I tried Python 2 as well but ran into Unicode issues and didn't bother fixing them.

def get_data(all_link):
    for uri in all_link:
        response = requests.get(uri)
        soup = BeautifulSoup(response.text,'lxml')
        soup = soup.find('div',{'id':'listZone'}).findAll('a')
        for small_soup in soup:
            yield small_soup

Rewrite that part of your code as above.
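
Since get_data is now a generator, main() can iterate over it directly. For completeness, here is a minimal sketch of a matching main() that uses the csv module instead of raw tab-separated writes; joining url + item.get("href") follows your original code and assumes the hrefs on the list page are relative:

import csv

def main():
    with open("test.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["url", "title"])
        for item in get_data(all_link):
            # item is an <a> tag; prepend the site root as in the original code
            writer.writerow([url + item.get("href"), item.get_text()])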

Does the error happen right away, or only after some links have already been processed? Print the index and the current URL after each link you handle.
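
For example (just a debugging sketch, assuming the looped get_data from the answer above), wrap the loop in enumerate and print before each request:

import requests
from bs4 import BeautifulSoup

def get_data(all_link):
    for i, uri in enumerate(all_link, 1):
        print(i, uri)  # index and current URL, so a failure points at the exact page
        response = requests.get(uri)
        soup = BeautifulSoup(response.text, 'lxml')
        for a in soup.find('div', {'id': 'listZone'}).findAll('a'):
            yield a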

The URL you're requesting is probably missing the http:// prefix. Print the URL and check.
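
If that is the suspicion, a quick check before requesting anything (a sketch, assuming the all_link list built by get_url) would be:

for link in all_link:
    if not link.startswith(("http://", "https://")):
        print("bad link:", link)  # flags any entry missing the scheme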
