Hello everyone, I'm wangzirui32. In this lesson we'll scrape a book bestseller list and generate an HTML page from it. Let's get started!
- 1. Analyzing the HTML
- 2. The scraper program
- 3. Generating the HTML page
- 3.1 render.py
- 3.2 books_template.html
1. Analyzing the HTML

Open the ranking page and you'll see that every book's data sits in the `li` tags under a single `ul`. The fields we need are located as follows:

| Field | Meaning | Location |
|---|---|---|
| number | ranking position | the first `div` inside the `li` |
| name | book title | the `title` attribute of the `a` inside the `div` with class `"name"` |
| link | book URL | the `href` attribute of the `a` inside the `div` with class `"name"` |
| comments | comment count | the `a` tag inside the `div` with class `"star"` |
| support | recommendation rate | the `span` with class `"tuijian"` |
| price | price | the `span` with class `"price_n"` |
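The field locations above can be exercised on a small, hypothetical `li` fragment that mirrors the structure described (the real page carries much more markup; this minimal sample and its values are made up for illustration):

```python
from bs4 import BeautifulSoup

# Hypothetical minimal <li> fragment matching the structure described above
li_html = """
<li>
  <div class="list_num">1.</div>
  <div class="name"><a title="示例书名" href="http://example.com/book">示例书名</a></div>
  <div class="star"><a>12345条评论</a></div>
  <span class="tuijian">98.7%推荐</span>
  <span class="price_n">¥29.90</span>
</li>
"""

li = BeautifulSoup(li_html, "html.parser").li
number = li.div.text[:-1]                                  # first div, drop trailing "."
a = li.find("div", {"class": "name"}).a
name, link = a.get("title"), a.get("href")                 # title / href attributes
comments = li.find("div", {"class": "star"}).a.text.replace("条评论", "")  # keep only the count
support = li.find("span", {"class": "tuijian"}).text
price = li.find("span", {"class": "price_n"}).text[1:]     # drop the currency symbol
print(number, name, comments, support, price)
```

Each lookup here corresponds line-for-line to a row of the table above.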
Now look at how the URL changes between pages. Page 1 is:

http://bang.dangdang.com/books/bestsellers/01.00.00.00.00.00-24hours-0-0-1-1

and page 2 is:

http://bang.dangdang.com/books/bestsellers/01.00.00.00.00.00-24hours-0-0-1-2

So changing the trailing number is all it takes to crawl multiple pages.
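The pagination pattern above can be captured in a small helper (a sketch; the scraper below inlines the same `format` call instead):

```python
BASE = "http://bang.dangdang.com/books/bestsellers/01.00.00.00.00.00-24hours-0-0-1-{}"

def page_urls(pages):
    """Return the ranking URLs for pages 1..pages."""
    return [BASE.format(i) for i in range(1, pages + 1)]

print(page_urls(3))
```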
2. The scraper program

Install these third-party libraries before starting:

```
pip install requests bs4
```
The code:
```python
import requests
from bs4 import BeautifulSoup as bs
import json


def get_page(page):
    """Fetch the ranking HTML for the given page number."""
    url = "http://bang.dangdang.com/books/bestsellers/01.00.00.00.00.00-24hours-0-0-1-{}".format(page)
    response = requests.get(url)
    return response.text


def parse_page(html):
    """Extract the book records from one page of ranking HTML."""
    soup = bs(html, "html.parser")
    books_li = soup.find("ul", {"class": "bang_list clearfix bang_list_mode"}).find_all("li")
    books = []
    for li in books_li:  # extract and store each book's fields
        a = li.find("div", {"class": "name"}).a
        number = li.div.text[:-1]  # strip the trailing "."
        name = a.get("title")
        link = a.get("href")
        support_rate = li.find("span", {"class": "tuijian"}).text
        comments = li.find("div", {"class": "star"}).a.text.replace("条评论", "")  # keep only the count
        price = li.find("span", {"class": "price_n"}).text[1:]  # strip the currency symbol
        books.append({
            "number": number,
            "name": name,
            "link": link,
            "support": support_rate,
            "comments": comments,
            "price": price
        })
    return books


def main():
    """Entry point."""
    page = 3  # number of pages to crawl
    books = []
    for i in range(1, page + 1):
        html = get_page(i)
        books += parse_page(html)
    # save the data
    with open("data.json", "w", encoding="UTF-8") as f:
        json.dump(books, f, ensure_ascii=False)


if __name__ == '__main__':
    main()
```
Run the scraper and you'll see data.json generated in the working directory.
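The `ensure_ascii=False` argument in the save step is what keeps the Chinese book titles human-readable in data.json; a quick stdlib-only illustration (the record values are made up):

```python
import json

# A hypothetical record in the same shape the scraper produces
book = {"name": "示例书名", "price": "29.90"}

readable = json.dumps(book, ensure_ascii=False)  # Chinese stays as-is
escaped = json.dumps(book)                        # default escapes to \uXXXX
print(readable)
print(escaped)
```

Without the flag, every non-ASCII character is written as a `\uXXXX` escape sequence.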
3. Generating the HTML page

Create render.py and books_template.html in the same directory, and install jinja2:

```
pip install jinja2
```

3.1 render.py
```python
from jinja2 import Template
import json


def read_data():
    """Load the scraped data."""
    with open("data.json", "r", encoding="UTF-8") as f:
        data = json.load(f)
    return data


def read_template():
    """Read the HTML template."""
    with open("books_template.html", encoding="UTF-8") as f:
        html = f.read()
    return html


def render_template(html, data):
    """Render the template with the data."""
    template = Template(html)
    result = template.render(data=data)
    return result


def main():
    """Entry point."""
    data = read_data()
    html = read_template()
    with open("books.html", "w", encoding="UTF-8") as f:
        f.write(render_template(html, data))


if __name__ == '__main__':
    main()
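`render_template` relies on Jinja2's `Template.render`, which makes the `data` list visible inside the template. A minimal self-contained illustration, using a hypothetical one-book dataset in the same shape `parse_page` produces:

```python
from jinja2 import Template

# Hypothetical record in the shape parse_page() returns
data = [{"number": "1", "name": "示例书名", "support": "98.7%推荐",
         "comments": "12345", "price": "29.90"}]

# Jinja2 resolves {{ i.number }} against dict keys as well as attributes
tpl = Template("{% for i in data %}{{ i.number }}:{{ i.name }}{% endfor %}")
out = tpl.render(data=data)
print(out)  # -> 1:示例书名
```

This is the same mechanism books_template.html uses, just with an inline template string.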
3.2 books_template.html
```html
<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>书籍排行榜</title>
</head>
<body>
    <h1>书籍排行榜</h1>
    <table border="1">
        <tr>
            <th>排行</th>
            <th>书名</th>
            <th>推荐率</th>
            <th>评论数</th>
            <th>价格</th>
        </tr>
        {% for i in data %}
        <tr>
            <td>{{ i.number }}</td>
            <td><a href="{{ i.link }}">{{ i.name }}</a></td>
            <td>{{ i.support }}</td>
            <td>{{ i.comments }}条</td>
            <td>¥{{ i.price }}元</td>
        </tr>
        {% endfor %}
    </table>
</body>
</html>
```
That's all for this lesson. I'm wangzirui32; if you found this useful, please bookmark and follow. See you next time!



