
Scraping short comments for The Battle at Lake Changjin (长津湖) from Douban


# Changjin Lake film data from douban
# @Time: 20211006
# @Author: heheyang

import requests
from bs4 import BeautifulSoup
import re
import pandas as pd

def singlePage_crawl(url, headers, comments_info):
    """
    Scrape one page of Douban short comments.
    :param url: page URL to scrape
    :param headers: request headers (cookie / User-Agent)
    :param comments_info: dict accumulating the comment fields
    :return: the updated comments_info dict
    """
    # Douban blocks bare requests, so the request headers must be sent
    html = requests.get(url, headers=headers).text
    soup = BeautifulSoup(html, 'html.parser')
    # Locate the comment text and comment-info blocks with BeautifulSoup
    contents_find = soup.find_all(attrs={'class': 'short'})
    contents_info_find = soup.find_all(attrs={'class': 'comment-info'})
    # Extract the short comment text with a regular expression
    for content in contents_find:
        comment = re.findall('<span class="short">(.*?)</span>', str(content))
        if comment:
            comments_info["comments"].append(comment[0])
        else:
            comments_info["comments"].append(None)
    # Extract the username, rating and date
    for contents_info in contents_info_find:
        # Match the username (text inside the profile <a> tag)
        name = re.findall('">(.*?)</a>', str(contents_info))
        comments_info["name"].extend(name)
        # Match rating and date from the spans' title attributes
        lst_tmp = re.findall('title="(.*?)"',str(contents_info))
        if len(lst_tmp) == 2:
            ratetitle,date = lst_tmp[0],lst_tmp[1]
        elif len(lst_tmp) == 1:
            ratetitle = None
            date = lst_tmp[0]
        else:
            ratetitle = None
            date = None
        comments_info["rate"].append(ratetitle)
        comments_info["date"].append(date)
    print(len(comments_info["date"]))
    return comments_info


def main():
    """
    program flow
    :return: 评论信息excel
    """
    headers = {
        "cookie": '自行添加',
        "USER-AGENT": '自行添加'
    }
    comments_info = {
        "name":[],
        "date":[],
        "rate":[],
        "comments":[]
    }
    for i in range(25):
        url = "https://movie.douban.com/subject/25845392/comments?start=%d&limit=20&status=P&sort=new_score" %(20*i)
        comments_info = singlePage_crawl(url, headers, comments_info)
    df = pd.DataFrame(comments_info)
    df.to_excel("douban_comments.xlsx")

if __name__ == '__main__':
    main()

Add your own request headers; the results are saved to an Excel file:
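As a sanity check, the regex extraction used in `singlePage_crawl` can be exercised offline against a small hypothetical HTML snippet that mimics Douban's comment markup (the snippet, username, and timestamps below are invented for illustration, not fetched from Douban):

```python
from bs4 import BeautifulSoup
import re

# Hypothetical markup shaped like one Douban comment block
sample_html = '''
<div class="comment">
  <span class="comment-info">
    <a href="https://www.douban.com/people/u1/" class="">alice</a>
    <span class="rating" title="力荐"></span>
    <span class="comment-time" title="2021-10-02 18:00:00">2021-10-02</span>
  </span>
  <span class="short">A moving war epic.</span>
</div>
'''

soup = BeautifulSoup(sample_html, 'html.parser')
content = soup.find(attrs={'class': 'short'})
info = soup.find(attrs={'class': 'comment-info'})

# Same patterns as in the crawler
comment = re.findall('<span class="short">(.*?)</span>', str(content))[0]
name = re.findall('">(.*?)</a>', str(info))[0]
rate, date = re.findall('title="(.*?)"', str(info))

print(name, rate, date, comment)
```

Running this confirms that the comment text, username, rating title, and timestamp come out of the two `title="..."` attributes and the tag bodies as the crawler expects.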

This and the previous post took a whole afternoon to write together. Feedback is welcome! Message me if you need the data file!

When reprinting, please credit the source: www.mshxw.com
Original article: https://www.mshxw.com/it/300150.html