Python Web Scraping in Practice (14): Crawling a WeChat Official Account's Full Article History from the Web

Contents
    • Method 1: Crawl via the official account backend API (full history)
      • 1.1 Finding and analyzing the API
      • 1.2 Rate limits
      • 1.3 Full code
    • Method 2: Crawl via Sogou Weixin search (not full history)

This post covers two ways to crawl WeChat official account articles from the web.

Method 1: Crawl via the official account backend API (full history)

Preparation: you need to register a WeChat official account of your own in advance.

1.1 Finding and analyzing the API

In the official account backend, go to 内容互动 (content & interaction) → 图文消息 (article message) → 超链接 (hyperlink), then search for the target official account.

On that page, capture the network traffic with Chrome DevTools; the request to the article-list API looks like this:


Comparing the requests for different pages shows that the begin parameter controls paging: adding 5 to it moves to the next page. The first and second page differ only in this field, as sketched below.
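
A minimal sketch of those two parameter sets (fakeid and token are placeholders for the values captured in your own session):

base = {'action': 'list_ex', 'count': '5',
        'fakeid': '<target fakeid>',   # placeholder: copy from the captured request
        'token': '<your token>'}       # placeholder: copy from the captured request
page_1 = dict(base, begin='0')   # first page
page_2 = dict(base, begin='5')   # second page: begin increased by count (5)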

1.2 Rate limits

This endpoint is rate-limited. On the first run you can usually fetch around 80-100 pages before being blocked; each block lasts roughly two hours, and once it is lifted the number of pages you can fetch drops below the first run. After three blocks in one day, the account is blocked for the rest of the day. As a rough estimate, you can crawl about 200-250 pages per day.
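
Given these limits, it helps to save your progress so that after a ban lifts you can resume from the last begin offset instead of starting over. A minimal sketch (the checkpoint file name and helper functions are my own additions, not part of the original code):

import json
import os

OFFSET_FILE = 'begin_offset.json'   # hypothetical checkpoint file

def load_offset():
    # resume from the last saved paging offset, or start at 0
    if os.path.exists(OFFSET_FILE):
        with open(OFFSET_FILE) as f:
            return json.load(f)['begin']
    return 0

def save_offset(begin):
    # call this after each successfully fetched page
    with open(OFFSET_FILE, 'w') as f:
        json.dump({'begin': begin}, f)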

1.3 Full code
import requests
import random
import time
user_agent_list = [
    "Opera/9.80 (X11; Linux i686; Ubuntu/14.10) Presto/2.12.388 Version/12.16",
    "Mozilla/5.0 (Linux; U; Android 2.2; en-gb; GT-P1000 Build/FROYO) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
    "Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14",
    "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0 Opera 12.14",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0) Opera 12.14",
]

headers = {
    'user-agent':random.choice(user_agent_list),
    'cookie':'RK=kc48KXIfWz; ptcz=0815979737421223f2919438eddea861c09b9bef4ae8b26e3adb19613ba33269; _ga=GA1.2.1148006994.1620302104; pgv_pvid=7944856028; luin=o0729757915; lskey=0001000033238e013a04ffbb13331769d0c4aa8e5ec10026993906c13d51c53836afde314eda2c8f992d0855; o_cookie=729757915; pac_uid=1_729757915; rewardsn=; wxtokenkey=777; ua_id=6EtLjvtGfXrPvRXnAAAAAFFEt1IIqOFA_PhDuwZ6aQc=; uuid=dfe8efe0bfae300f1e2aff6193513b09; wxuin=33318412346183; rand_info=CAESILu87p4ipdTay0C9UgWG2fkTDrECmi4Fx7x9lCNDrS2r; slave_bizuin=3934210646; data_bizuin=3934210646; bizuin=3934210646; data_ticket=ijQQlhraK1zEw5aPpzCtPmpZZ48KQn/8wFNRlZTI1ha362m9oF+qN4jQk7MYmJD/; slave_sid=TW1WOHlvQ2VJUjByTXpvRXJBZXlqMDFveTdoU0tQN3NvSlVNUTBmRXpTUGFabHllQU1TMGlWa0w1eTFmbXZYdlRMbERHbFpCN3gybU9zMVNaSDk1UzFvR0JGbG9zWEdjc1NFcTFlS1dodFpJQl83VXZQV2Y0b25SWUJ4MEh5bEE2aDBpZHZvOVlIMGhVTDZs; slave_user=gh_749a4198a611; xid=0f1e601a4c68d8d952fd1e2a3e65f69a; mm_lang=zh_CN'
}
begin = '0'
params = {
    'action': 'list_ex',
    'begin': begin,                  # paging offset, increased by 5 per page
    'count': '5',                    # articles per page
    'fakeid': 'MzA5MjMzOTY4Mw==',    # ID of the target official account (copy from the captured request)
    'type': '9',
    'query': '',
    'token': '1420937137',           # token of your own logged-in backend session (copy from the captured request)
    'lang': 'zh_CN',
    'f': 'json',
    'ajax': '1'
}
url = 'https://mp.weixin.qq.com/cgi-bin/appmsg?'
i = 0
articles = []
while True:
    count = i * 5
    params['begin'] = str(count)
    try:
        r = requests.get(url, headers=headers, params=params)
        resp = r.json()
        # stop once the frequency limit kicks in (ret code 200013)
        if resp['base_resp']['ret'] == 200013:
            print("frequency control, stop at begin={}".format(count))
            break
        article_list = resp['app_msg_list']
        # stop when there are no more articles to fetch
        if len(article_list) == 0:
            print("all articles parsed")
            break
        for article in article_list:
            create_time = time.strftime('%Y-%m-%d', time.localtime(article['create_time']))
            title = article['title']
            link = article['link']
            articles.append([create_time, title, link])
        print('page {} done'.format(i))
    except Exception as e:
        print(e)
        break
    time.sleep(random.randint(2, 4))
    i += 1
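
The loop above only accumulates results in the articles list; to keep them, write them out once the loop finishes, for example (a sketch, the output file name is arbitrary):

import pandas as pd

# persist the collected [create_time, title, link] rows
df = pd.DataFrame(articles, columns=['create_time', 'title', 'link'])
df.to_csv('articles.csv', index=False, encoding='utf-8-sig')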

Summary: this method retrieves the complete article history, but it is not suitable for crawling many official accounts in a short period, since the rate limits make the cost relatively high.

Method 2: Crawl via Sogou Weixin search (not full history)

Site: Sogou Weixin search (https://weixin.sogou.com/)

# crawl Sogou Weixin search results by keyword
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as ec
from selenium.webdriver.support.wait import WebDriverWait
import time
import re
import random
import pandas as pd

opt = webdriver.ChromeOptions()
# hide the "controlled by automated software" infobar / automation switch
opt.add_experimental_option('excludeSwitches', ['enable-automation'])
driver = webdriver.Chrome(options=opt)
driver.get('https://weixin.sogou.com/')
wait = WebDriverWait(driver, 10)
word_input = wait.until(ec.presence_of_element_located((By.NAME, 'query')))
word_input.send_keys('金工于明明预测分值')
driver.find_element_by_xpath("//input[@class='swz']").click()
time.sleep(2)

data = []

def get_scores():
    # each result summary sits in a <p> under the result container div; the class
    # selector was garbled in the original post - "txt-box" matched Sogou's result
    # layout at the time of writing, so adjust it if the page structure differs
    rst = driver.find_elements_by_xpath('//div[@class="txt-box"]/p')
    for title in rst:
        print(title.text)
        try:
            date = re.search(r'\d+', title.text).group(0)            # first run of digits as the date field
            scores = re.findall('预测分值:(.*?)分', title.text)[0]   # score between 预测分值: and 分
            data.append([date, scores])
        except Exception as e:
            print(e)


for i in range(10):
    get_scores()
    if i == 9:
        # stop clicking after the 10th page (login is required to view more results)
        break
    driver.find_element_by_id("sogou_next").click()

    time.sleep(random.randint(3, 5))


driver.find_element_by_name('top_login').click()

# wait until QR-code login completes (the next-page button becomes reachable again)
while True:
    try:
        next_page = driver.find_element_by_id("sogou_next")
        break
    except Exception as e:
        time.sleep(2)
next_page.click()

# after login, continue crawling the remaining result pages
while True:
    get_scores()
    try:
        driver.find_element_by_id("sogou_next").click()
        time.sleep(random.randint(3, 5))
    except Exception as e:
        break

score_data = pd.DataFrame(data, columns=['日期', '预测分值'])
score_data.to_csv('./Desktop/score_data.csv', index=False, encoding='gbk')
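
Note that the find_element_by_* / find_elements_by_* helpers used above were removed in Selenium 4; if you run a recent Selenium, the equivalent calls use the By locators already imported, for example:

# Selenium 4 style equivalents of the lookups used above
driver.find_element(By.XPATH, "//input[@class='swz']").click()
results = driver.find_elements(By.XPATH, '//div[@class="txt-box"]/p')
driver.find_element(By.ID, "sogou_next").click()
driver.find_element(By.NAME, 'top_login').click()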

Summary: no frequency limit, but not the full history; best suited for grabbing articles from a specific column, i.e. articles containing a given keyword.
