Scraping text from h3 and div tags with BeautifulSoup in Python

I'm a beginner here, so please bear with me.

I have no experience with Python, BeautifulSoup, Selenium, etc., but I would love to scrape data from a website and store it as a CSV file. A single sample of the data I need is marked up as follows (one row of data):

<div class="box effect">
<div class="row">
<div class="col-lg-10">
    <h3>HEADING</h3>
        <div><i class="fa user"></i>&nbsp;&nbsp;NAME</div>
        <div><i class="fa phone"></i>&nbsp;&nbsp;MOBILE</div>
        <div><i class="fa mobile-phone fa-2"></i>&nbsp;&nbsp;&nbsp;NUMBER</div>
        <div><i class="fa address"></i>&nbsp;&nbsp;&nbsp;XYZ_ADDRESS</div>
    <div class="space">&nbsp;</div>

<div style="padding:10px;padding-left:0px;"><a class="btn btn-primary btn-sm" href="www.link_to_another_page.com"><i class="fa search-plus"></i> &nbsp;more info</a></div>

</div>
<div class="col-lg-2">

</div>
</div>
</div>

The output I need is HEADING, NAME, MOBILE, NUMBER, XYZ_ADDRESS

This data has no id or class, and exists as plain text on the website. I am trying BeautifulSoup and Selenium separately, but since I have not found any tutorial that shows how to extract text from these h3 and div tags, I am stuck with both approaches.

My code using BeautifulSoup:

import csv

import requests
from bs4 import BeautifulSoup

MAX = 2

'''with open("lg.csv", "a") as f:
    w = csv.writer(f)'''
##for i in range(1, MAX + 1)
url = "http://www.example_site.com"

page = requests.get(url)
soup = BeautifulSoup(page.content, "html.parser")

for h in soup.find_all('h3'):
    # print the text content of each h3 tag
    print(h.get_text(strip=True))

My Selenium code:

import csv
from selenium import webdriver

MAX_PAGE_NUM = 2
driver = webdriver.Firefox()
for i in range(1, MAX_PAGE_NUM + 1):
    url = "http://www.example_site.com"
    driver.get(url)
    # headings of each record
    names = driver.find_elements_by_xpath('//div[@class = "col-lg-10"]/h3')
    #contact = driver.find_elements_by_xpath('//span[@class="item-price"]')
    #  phone =
    #  mobile =
    #  address =
    #  print(len(buyers))
    #  num_page_items = len(buyers)
    #  with open('res.csv','a') as f:
    #    for i in range(num_page_items):
    #      f.write(buyers[i].text + "," + prices[i].text + "\n")
    for name in names:
        # find_elements returns WebElement objects; .text gives their visible text
        print(name.text)
driver.close()

Original by Revaapriyan, translation licensed under CC BY-SA 4.0

2 Answers

You can use CSS selectors to find the data you need. In your case, div > h3 ~ div will find all div elements that are directly inside a div element and are preceded by an h3 element.

import bs4

page= """
<div class="box effect">
<div class="row">
<div class="col-lg-10">
    <h3>HEADING</h3>
    <div><i class="fa user"></i>&nbsp;&nbsp;NAME</div>
    <div><i class="fa phone"></i>&nbsp;&nbsp;MOBILE</div>
    <div><i class="fa mobile-phone fa-2"></i>&nbsp;&nbsp;&nbsp;NUMBER</div>
    <div><i class="fa address"></i>&nbsp;&nbsp;&nbsp;XYZ_ADDRESS</div>
</div>
</div>
</div>
"""

soup = bs4.BeautifulSoup(page, 'lxml')

# find all div elements that are inside a div element
# and are preceded by an h3 element
selector = 'div > h3 ~ div'

# find elements that contain the data we want
found = soup.select(selector)

# Extract data from the found elements
data = [x.text.split(';')[-1].strip() for x in found]

for x in data:
    print(x)
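(This works because the &nbsp; entities are parsed into non-breaking space characters, which Python's strip() removes along with ordinary whitespace; the split(';') appears to be just a safeguard in case the entities survive parsing as literal text.)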

Edit: To scrape the text of the heading:

heading = soup.find('h3')
heading_data = heading.text
print(heading_data)
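Note that soup.find('h3') returns None when the page contains no h3 tag, so it is worth checking for that before accessing .text.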

Edit: Or you can get the heading and the other div elements at once with a selector like this: div.col-lg-10 > * . This will find all elements inside a div element of the col-lg-10 class.

soup = bs4.BeautifulSoup(page, 'lxml')

# find all elements inside a div element of class col-lg-10
selector = 'div.col-lg-10 > *'

# find elements that contain the data we want
found = soup.select(selector)

# Extract data from the found elements
data = [x.text.split(';')[-1].strip() for x in found]

for x in data:
    print(x)
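To get from these selectors to the CSV output the question asks for, here is a minimal sketch. It assumes the trimmed markup from the first snippet above (on the full markup from the question, the "space" and button divs would also match h3 ~ div and would need filtering out), and the record.csv filename is only an example:

import csv

import bs4

# `page` is the HTML string from the first snippet above
soup = bs4.BeautifulSoup(page, 'lxml')

rows = []
for box in soup.select('div.col-lg-10'):
    # heading plus the data divs that follow it, in document order
    cells = [el.get_text().strip() for el in box.select('h3, h3 ~ div')]
    rows.append(cells)

# one CSV line per record: HEADING,NAME,MOBILE,NUMBER,XYZ_ADDRESS
with open('record.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)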

Original by Anonta, translation licensed under CC BY-SA 3.0

So, it looks pretty good:

    # -*- coding: utf-8 -*-
    # by Faguiro #
    # run using Python 3.8.6 on Linux #
    import requests
    from bs4 import BeautifulSoup

    # insert your site here
    url = input("Enter the url-->")

    # use requests to fetch the page
    r = requests.get(url)
    content = r.content

    # soup!
    soup = BeautifulSoup(content, "html.parser")

    # find all h3 tags in the soup
    heading = soup.find_all("h3")

    # print(heading) <--- result...

    # ...Pythonic organization!
    for h in heading:
        print(h.text.strip())

Dependencies, in a terminal (Linux):

sudo apt-get install python3-bs4
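Alternatively, pip install beautifulsoup4 requests works on any platform and also covers the requests dependency used above.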

Original by Fabiano Rocha, translation licensed under CC BY-SA 4.0
