Python newbie wants to build a simple crawler, looking for a tutorial

PS: What languages do companies generally use when they build crawlers?

23 answers
  • Fetching content is usually just making HTTP requests; +1 for requests
  • Parsing the fetched page is just string processing to pull out the information you want; beautifulsoup, regular expressions, or str.find() all work

For ordinary pages those two points are enough. For sites that load content via AJAX you may not be able to scrape what you want from the HTML, and finding their API is often easier. A minimal sketch of both steps follows.
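For example, here is a minimal Python 3 sketch of those two steps with requests + BeautifulSoup (the URL and the link extraction are placeholders, just to show the shape of the code):

import requests
from bs4 import BeautifulSoup

resp = requests.get(
    "https://example.com/",
    headers={"User-Agent": "Mozilla/5.0"},  # many sites reject the default UA
    timeout=10,
)
resp.raise_for_status()                      # fail loudly on 4xx/5xx
soup = BeautifulSoup(resp.text, "html.parser")
for a in soup.find_all("a"):                 # pull out whatever you need
    print(a.get("href"), a.get_text(strip=True))

For an AJAX-driven site, once you have found the JSON endpoint in the browser's network panel, the same requests.get() plus resp.json() usually gives you structured data directly.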

Let me just paste a working scraping script for you. It fetches the Douban IDs and titles of the films currently in theaters. The script depends on the beautifulsoup library, which you need to install (see the BeautifulSoup Chinese documentation).

Update: if what you really want is a proper crawler program, one that can crawl a whole site or be configured to scrape specified pages, then I'd still recommend studying scrapy.

Example Python scraping code:

#!/usr/bin/env python
#coding:UTF-8

import urllib2
import traceback

from bs4 import BeautifulSoup

def fetchNowPlayingDouBanInfo():
    doubaninfolist = []

    try:
        # To use a proxy, uncomment the lines below
#         proxy_handler = urllib2.ProxyHandler({"http" : '172.23.155.73:8080'})
#         opener = urllib2.build_opener(proxy_handler)
#         urllib2.install_opener(opener)      

        url = "http://movie.douban.com/nowplaying/beijing/"

        # Set the HTTP User-Agent header
        useragent = {'User-Agent':'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.57 Safari/537.36'}
        req = urllib2.Request(url, headers=useragent)    

        page = urllib2.urlopen(req, timeout=10)
        html_doc = page.read()

        soup = BeautifulSoup(html_doc, "lxml")

        try:

            nowplaying_ul = soup.find("div", id="nowplaying").find("ul", class_="lists")

            lilist = nowplaying_ul.find_all("li", class_="list-item")
            for li in lilist:
                doubanid = li["id"]
                title = li["data-title"]

                doubaninfolist.append({"douban_id" : doubanid, "title" : title, "coverinfolist" : [] })

        except TypeError, e:
            print('(%s)TypeError: %s.!' % (url, traceback.format_exc()))
        except Exception:
            print('(%s)generic exception: %s.' % (url, traceback.format_exc()))

    except urllib2.HTTPError, e:
        print('(%s)http request error code - %s.' % (url, e.code))
    except urllib2.URLError, e:
        print('(%s)http request error reason - %s.' % (url, e.reason))
    except Exception:
        print('(%s)http request generic exception: %s.' % (url, traceback.format_exc()))

    return doubaninfolist

if __name__ == "__main__":
    doubaninfolist = fetchNowPlayingDouBanInfo()
    print doubaninfolist

For something simple without a framework, take a look at the requests and beautifulsoup libraries. If you are familiar with Python syntax, after reading up on those two you can pretty much write a simple crawler.


As for companies: from what I have seen, most crawling work is done in Java or Python.

Try a Baidu search for "python + 爬虫" (crawler).

For a simple crawler, using a framework is actually the easiest route; have a look at the getting-started posts online.
I recommend scrapy.

There are indeed plenty of articles online about writing a simple crawler in Python, but most of them are only toy examples and few are truly usable in practice. In my view a crawler is just: fetch the content, parse the content, then store it. If you are only getting started, Googling around is enough; if you want to dig deeper, find crawler code on GitHub and read it.

My own grasp of Python is only partial; I hope this helps.
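Before the Tmall code, here is a small end-to-end sketch of that fetch, parse, store flow (Python 3; the URL, the h2.title selector and the table layout are made-up placeholders):

import sqlite3
import requests
from bs4 import BeautifulSoup

conn = sqlite3.connect("crawl.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (title TEXT)")

resp = requests.get("https://example.com/list", timeout=10)       # fetch
soup = BeautifulSoup(resp.text, "html.parser")                     # parse
titles = [(h.get_text(strip=True),) for h in soup.select("h2.title")]

conn.executemany("INSERT INTO items (title) VALUES (?)", titles)   # store
conn.commit()
conn.close()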

Here is a snippet of my Tmall-scraping code (a method taken out of a larger spider class, hence the self parameter):

def areaFlow(self, parturl, tablename, date):
        while True:
            url = parturl + self.lzSession + '&days=' + str(date) + '..' + str(date)
            print url
            try:
                html = urllib2.urlopen(url, timeout=30)
            except Exception, ex:
                writelog(str(ex))
                writelog(str(traceback.format_exc()))
                break;
            responegbk = html.read()
            try:
                respone = responegbk.encode('utf8')
            except Exception, ex:
                writelog(str(ex))
            # If lzSession has expired, the response contains errcode:500
            if respone.find('"errcode":500') != -1:
                print 'nodata'
                break;
            # If the date is wrong, the response contains errcode:100
            elif respone.find('"errcode":100') != -1:
                print 'login error'
                self.catchLzsession()
            else:
                try:
                    resstr = re.findall(r'(?<=\<)(.*?)(?=\/>)', respone, re.S)
                    writelog('地域名称    浏览量    访问量')
                    dictitems = []
                    for iarea in resstr:
                        items = {}
                        areaname = re.findall(r'(?<=name=\\\")(.*?)(?=\\\")', iarea, re.S)
                        flowamount = re.findall(r'(?<=浏览量:)(.*?)(?=&lt)', iarea, re.S)
                        visitoramount = re.findall(r'(?<=访客数:)(.*?)(?=\\\")', iarea, re.S)
                        print '%s %s %s' % (areaname[0], flowamount[0], visitoramount[0])
                        items['l_date'] = str(self.nowDate)
                        items['vc_area_name'] = str(areaname[0])
                        items['i_flow_amount'] = str(flowamount[0].replace(',', ''))
                        items['i_visitor_amount'] = str(visitoramount[0].replace(',', ''))
                        items['l_catch_datetime'] = str(self.nowTime)
                        dictitems.append(items)
                    writeInfoLog(dictitems)
                    insertSqlite(self.sqlite, tablename, dictitems)
                    break
                except Exception,ex:
                    writelog(str(ex))
                    writelog(str(traceback.format_exc()))
            time.sleep(1)

Scrapy is a good choice and relatively simple; there are getting-started tutorials for it.
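To give a feel for it, here is a minimal spider sketch, close to the official Scrapy tutorial example; it targets quotes.toscrape.com, a public practice site. Save it as, say, quotes_spider.py and run: scrapy runspider quotes_spider.py -o quotes.json

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # yield one item per quote block on the page
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # follow the pagination link, if there is one
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)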

You can start by implementing your business logic on top of a crawler framework such as scrapy, then gradually replace pieces of the framework to match your own needs. In the end you will find that you have implemented a crawler framework of your own.

For fetching content you can use urllib/urllib2/requests; requests is recommended.
For parsing you can use BeautifulSoup, or regular expressions, or brute-force string parsing (see the sketch below).
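A sketch of the brute-force route, using a regular expression instead of BeautifulSoup (Python 3; fine for quick one-off scripts, fragile on real pages, and the pattern is only a placeholder):

import re
import requests

html = requests.get("https://example.com/", timeout=10).text
# grab everything between <title> and </title>
titles = re.findall(r"<title>(.*?)</title>", html, re.S)
print(titles)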


http://cuiqingcai.com/1052.html

I have been learning Python crawling recently and find it great fun; it really can make life much more convenient. Along the way I have written up my study notes and recorded some of the small crawlers I actually built. I am sharing them here in the hope that they help anyone interested in Python crawlers, and I look forward to exchanging ideas with you.

I. Python Basics

  1. Python Crawler Tutorial Part 1: An Overview

  2. Python Crawler Tutorial Part 2: Crawler Fundamentals

  3. Python Crawler Tutorial Part 3: Basic Usage of the Urllib Library

  4. Python Crawler Tutorial Part 4: Advanced Usage of the Urllib Library

  5. Python Crawler Tutorial Part 5: Handling URLError Exceptions

  6. Python Crawler Tutorial Part 6: Using Cookies

  7. Python Crawler Tutorial Part 7: Regular Expressions

II. Python in Practice

  1. Python Crawler in Practice Part 1: Scraping jokes from Qiushibaike

  2. Python Crawler in Practice Part 2: Scraping posts from Baidu Tieba

  3. Python Crawler in Practice Part 3: Calculating this semester's GPA

  4. Python Crawler in Practice Part 4: Scraping Taobao MM photos

  5. Python Crawler in Practice Part 5: Simulating a Taobao login and fetching all orders

III. Advanced Python

  1. Advanced Python Crawling Part 1: Installing and configuring the Scrapy framework

That is all the articles for now; I will keep adding more as my learning progresses, so stay tuned.

I hope this helps everyone, thanks!

When reposting, please credit: 静觅 » Python Crawler Tutorial Series (Python爬虫学习系列教程)

For something simple: fetch pages with urllib2 and extract what you need with beautifulsoup or regular expressions.
To go deeper, read some open-source frameworks, for example Python's scrapy.
You can also watch video tutorials, such as the ones from Jikexueyuan (极客学院).
In a word: practice, practice, practice.


Here is a crawler for you:
I was building Guandian (观点), whose "rooms" are similar to Zhihu topics, so I had to find a way to crawl them. It took a while, but in the end I got it working reliably. The code is written in Python; if you do not understand it, please go teach yourself the basics first. If you do, just read the code. It definitely works.

#coding:utf-8
"""
@author:haoning
@create time:2015.8.5
"""
from __future__ import division  # true division
from Queue import Queue
from __builtin__ import False
import json
import os
import re
import platform
import uuid
import urllib
import urllib2
import sys
import time
import MySQLdb as mdb
from bs4 import BeautifulSoup
 
reload(sys)
sys.setdefaultencoding( "utf-8" )
 
headers = {
   'User-Agent' : 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0',
   'Content-Type':'application/x-www-form-urlencoded; charset=UTF-8',
   'X-Requested-With':'XMLHttpRequest',
   'Referer':'https://www.zhihu.com/topics',
   'Cookie':'__utma=51854390.517069884.1416212035.1416212035.1416212035.1; q_c1=c02bf44d00d240798bfabcfc95baeb56|1455778173000|1416205243000; _za=b1c8ae35-f986-46a2-b24a-cb9359dc6b2a; aliyungf_tc=AQAAAJ1m71jL1woArKqF22VFnL/wRy6C; _xsrf=9d494558f9271340ab24598d85b2a3c8; cap_id="MDNiMjcwM2U0MTRhNDVmYjgxZWVhOWI0NTA2OGU5OTg=|1455864276|2a4ce8247ebd3c0df5393bb5661713ad9eec01dd"; n_c=1; _alicdn_sec=56c6ba4d556557d27a0f8c876f563d12a285f33a'
}
 
DB_HOST = '127.0.0.1'
DB_USER = 'root'
DB_PASS = 'root'
 
queue = Queue()  # work queue
nodeSet=set()
keywordSet=set()
stop=0
offset=-20
level=0
maxLevel=7
counter=0
base=""
 
conn = mdb.connect(DB_HOST, DB_USER, DB_PASS, 'zhihu', charset='utf8')
conn.autocommit(False)
curr = conn.cursor()
 
def get_html(url):
    try:
        req = urllib2.Request(url)
        response = urllib2.urlopen(req,None,3) # a proxy should be added here
        html = response.read()
        return html
    except:
        pass
    return None
 
def getTopics():
    url = 'https://www.zhihu.com/topics'
    print url
    try:
        req = urllib2.Request(url)
        response = urllib2.urlopen(req) # a proxy should be added here
        html = response.read().decode('utf-8')
        print html
        soup = BeautifulSoup(html)
        lis = soup.find_all('li', {'class' : 'zm-topic-cat-item'})
         
        for li in lis:
            data_id=li.get('data-id')
            name=li.text
            curr.execute('select id from classify_new where name=%s',(name))
            y= curr.fetchone()
            if not y:
                curr.execute('INSERT INTO classify_new(data_id,name)VALUES(%s,%s)',(data_id,name))
        conn.commit()
    except Exception as e:
        print "get topic error",e
         
 
def get_extension(name): 
    where=name.rfind('.')
    if where!=-1:
        return name[where:len(name)]
    return None
 
 
def which_platform():
    sys_str = platform.system()
    return sys_str
 
def GetDateString():
    when=time.strftime('%Y-%m-%d',time.localtime(time.time()))
    foldername = str(when)
    return foldername
 
def makeDateFolder(par,classify):
    try:
        if os.path.isdir(par):
            newFolderName=par + '//' + GetDateString() + '//'  +str(classify)
            if which_platform()=="Linux":
                newFolderName=par + '/' + GetDateString() + "/" +str(classify)
            if not os.path.isdir( newFolderName ):
                os.makedirs( newFolderName )
            return newFolderName
        else:
            return None
    except Exception,e:
        print "kk",e
    return None
 
def download_img(url,classify):
    try:
        extention=get_extension(url)
        if(extention is None):
            return None
        req = urllib2.Request(url)
        resp = urllib2.urlopen(req,None,3)
        dataimg=resp.read()
        name=str(uuid.uuid1()).replace("-","")+"_www.guandn.com"+extention
        top="E://topic_pic"
        folder=makeDateFolder(top, classify)
        filename=None
        if folder is not None:
            filename  =folder+"//"+name
        try:
            if "e82bab09c_m" in str(url):
                return True
            if not os.path.exists(filename):
                file_object = open(filename,'w+b')
                file_object.write(dataimg)
                file_object.close()
                return '/room/default/'+GetDateString()+'/'+str(classify)+"/"+name
            else:
                print "file exist"
                return None
        except IOError,e1:
            print "e1=",e1
            pass
    except Exception as e:
        print "eee",e
        pass
    return None # if the download failed, fall back to the original site's link
 
def getChildren(node,name):
    global queue,nodeSet
    try:
        url="https://www.zhihu.com/topic/"+str(node)+"/hot"
        html=get_html(url)
        if html is None:
            return
        soup = BeautifulSoup(html)
        p_ch='父话题'
        node_name=soup.find('div', {'id' : 'zh-topic-title'}).find('h1').text
        topic_cla=soup.find('div', {'class' : 'child-topic'})
        if topic_cla is not None:
            try:
                p_ch=str(topic_cla.text)
                aList = soup.find_all('a', {'class' : 'zm-item-tag'}) # get all child topic links
                if u'子话题' in p_ch:
                    for a in aList:
                        token=a.get('data-token')
                        a=str(a).replace('\n','').replace('\t','').replace('\r','')
                        start=str(a).find('>')
                        end=str(a).rfind('</a>')
                        new_node=str(str(a)[start+1:end])
                        curr.execute('select id from rooms where name=%s',(new_node)) # make sure the name is not already taken
                        y= curr.fetchone()
                        if not y:
                            print "y=",y,"new_node=",new_node,"token=",token
                            queue.put((token,new_node,node_name))
            except Exception as e:
                print "add queue error",e
    except Exception as e:
        print "get html error",e
         
     
 
def getContent(n,name,p,top_id):
    try:
        global counter
        curr.execute('select id from rooms where name=%s',(name)) # make sure the name is not already taken
        y= curr.fetchone()
        print "exist?? ",y,"n=",n
        if not y:
            url="https://www.zhihu.com/topic/"+str(n)+"/hot"
            html=get_html(url)
            if html is None:
                return
            soup = BeautifulSoup(html)
            title=soup.find('div', {'id' : 'zh-topic-title'}).find('h1').text
            pic_path=soup.find('a',{'id':'zh-avartar-edit-form'}).find('img').get('src')
            description=soup.find('div',{'class':'zm-editable-content'})
            if description is not None:
                description=description.text
                 
            if (u"未归类" in title or u"根话题" in title): # still allow these into the DB, to avoid an infinite loop
                description=None
                 
            tag_path=download_img(pic_path,top_id)
            print "tag_path=",tag_path
            if (tag_path is not None) or tag_path==True:
                if tag_path==True:
                    tag_path=None
                father_id=2 # default parent: the "misc" (杂谈) category
                curr.execute('select id from rooms where name=%s',(p))
                results = curr.fetchall()
                for r in results:
                    father_id=r[0]
                name=title
                curr.execute('select id from rooms where name=%s',(name)) # make sure the name is not already taken
                y= curr.fetchone()
                print "store see..",y
                if not y:
                    friends_num=0
                    temp = time.time()
                    x = time.localtime(float(temp))
                    create_time = time.strftime("%Y-%m-%d %H:%M:%S",x) # get time now
                    create_time
                    creater_id=None
                    room_avatar=tag_path
                    is_pass=1
                    has_index=0
                    reason_id=None 
                    #print father_id,name,friends_num,create_time,creater_id,room_avatar,is_pass,has_index,reason_id
                    # content qualified for insertion into the DB
                    counter=counter+1
                    curr.execute("INSERT INTO rooms(father_id,name,friends_num,description,create_time,creater_id,room_avatar,is_pass,has_index,reason_id)VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)",(father_id,name,friends_num,description,create_time,creater_id,room_avatar,is_pass,has_index,reason_id))
                    conn.commit() # commit right away, otherwise the parent node cannot be found later
                    if counter % 200==0:
                        print "current node",name,"num",counter
    except Exception as e:
        print "get content error",e      
 
def work():
    global queue
    curr.execute('select id,node,parent,name from classify where status=1')
    results = curr.fetchall()
    for r in results:
        top_id=r[0]
        node=r[1]
        parent=r[2]
        name=r[3]
        try:
            queue.put((node,name,parent)) # seed the queue
            while queue.qsize() >0:
                n,name,p=queue.get() # dequeue the head node (node, name, parent)
                getContent(n,name,p,top_id)
                getChildren(n,name) # children of the dequeued node
            conn.commit()
        except Exception as e:
            print "what's wrong",e 
             
def new_work():
    global queue
    curr.execute('select id,data_id,name from classify_new_copy where status=1')
    results = curr.fetchall()
    for r in results:
        top_id=r[0]
        data_id=r[1]
        name=r[2]
        try:
            get_topis(data_id,name,top_id)
        except:
            pass
 
 
def get_topis(data_id,name,top_id):
    global queue
    url = 'https://www.zhihu.com/node/TopicsPlazzaListV2'
    isGet = True;
    offset = -20;
    data_id=str(data_id)
    while isGet:
        offset = offset + 20
        values = {'method': 'next', 'params': '{"topic_id":'+data_id+',"offset":'+str(offset)+',"hash_id":""}'}
        try:
            msg=None
            try:
                data = urllib.urlencode(values)
                request = urllib2.Request(url,data,headers)
                response = urllib2.urlopen(request,None,5)
                html=response.read().decode('utf-8')
                json_str = json.loads(html)
                ms=json_str['msg']
                if len(ms) <5:
                    break
                msg=ms[0]
            except Exception as e:
                print "eeeee",e
            #print msg
            if msg is not None:
                soup = BeautifulSoup(str(msg))
                blks = soup.find_all('div', {'class' : 'blk'})
                for blk in blks:
                    page=blk.find('a').get('href')
                    if page is not None:
                        node=page.replace("/topic/","") # collect more seeds for the queue
                        parent=name
                        ne=blk.find('strong').text
                        try:
                            queue.put((node,ne,parent)) # seed the queue
                            while queue.qsize() >0:
                                n,name,p=queue.get() # dequeue the head node
                                size=queue.qsize()
                                if size > 0:
                                    print size
                                getContent(n,name,p,top_id)
                                getChildren(n,name) # children of the dequeued node
                            conn.commit()
                        except Exception as e:
                            print "what's wrong",e 
        except urllib2.URLError, e:
            print "error is",e
            pass
             
         
if __name__ == '__main__':
    i=0
    while i<400:
        new_work()
        i=i+1

A word about the database: I am not attaching a dump here. Just look at the fields and create the tables yourself, because it really is that simple. I used MySQL; build it however your own needs dictate (a guessed schema is sketched below).
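For reference, a guessed schema for the rooms table, based only on the columns the INSERT statement in the code references; the types and lengths are my assumptions, adjust them to your needs:

import MySQLdb as mdb

conn = mdb.connect('127.0.0.1', 'root', 'root', 'zhihu', charset='utf8')
conn.cursor().execute("""
    CREATE TABLE IF NOT EXISTS rooms (
        id           INT AUTO_INCREMENT PRIMARY KEY,
        father_id    INT,
        name         VARCHAR(255),
        friends_num  INT,
        description  TEXT,
        create_time  DATETIME,
        creater_id   INT,
        room_avatar  VARCHAR(255),
        is_pass      TINYINT,
        has_index    TINYINT,
        reason_id    INT
    )
""")
conn.commit()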

If anything is unclear, come find me on 去转盘网 (which I also developed); the QQ group number is kept up to date there. I am not leaving a QQ number here, to avoid getting banned by the system.

You can also read 崔庆才's book《Python3网络爬虫开发实战》(Python 3 Web Crawler Development in Practice).
