
Scraping book cover images from Douban Books (Literature → Classics) with a Python crawler



Goal: fetch the book cover images from at least two pages of Douban Books' Literature → Classics (名著) tag listing and save them to a local folder.
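Douban's tag listings paginate through a `start` query parameter that advances by 25 (one page of books) at a time, which is why the script below requests `start=0` and `start=25`. The URL construction can be sketched on its own (the tag URL is the one used in the script):

```python
# Douban tag pages paginate via ?start=0, ?start=25, ?start=50, ...
BASE = 'https://book.douban.com/tag/%E5%90%8D%E8%91%97?start='

def page_urls(n_pages, per_page=25):
    """Build the listing URL for each of the first n_pages pages."""
    return [BASE + str(per_page * i) for i in range(n_pages)]

print(page_urls(2))
```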

The full code is as follows:

# doubanimage.py
import requests
from bs4 import BeautifulSoup
from urllib.request import unquote

def getHTMLText(url):
    # Douban blocks anonymous requests, so send a logged-in cookie and a
    # browser User-Agent with every request.
    headers={'cookie':'bid=_qmdmSYQXOc; dbcl2="220543507:+d6RaThYFJg"; __utmz=30149280.1632397619.1.1.utmcsr=open.weixin.qq.com|utmccn=(referral)|utmcmd=referral|utmcct=/; __utmz=81379588.1632397619.1.1.utmcsr=open.weixin.qq.com|utmccn=(referral)|utmcmd=referral|utmcct=/; gr_user_id=be1c71e2-9b29-4e83-806e-d0a89b910d61; _vwo_uuid_v2=D41176C23ACA6929AB402B1888C9C63EA|3f47235256a24db9916f2acdbf59b15a; push_noty_num=0; push_doumail_num=0; ck=re1Y; _pk_ref.100001.3ac3=%5B%22%22%2C%22%22%2C1632705896%2C%22https%3A%2F%2Fopen.weixin.qq.com%2F%22%5D; _pk_ses.100001.3ac3=*; __utma=30149280.752614146.1632397619.1632397619.1632705896.2; __utmc=30149280; __utma=81379588.1363756780.1632397619.1632397619.1632705896.2; __utmc=81379588; __gads=ID=5633b581aca0b950-22a74bafefcb00c7:T=1632705894:RT=1632705894:S=ALNI_Mbtv0al0B6qc6SDaGSertyE6-nW6Q; __utmt_douban=1; __utmb=30149280.3.10.1632705896; __utmt=1; __utmb=81379588.3.10.1632705896; _pk_id.100001.3ac3=51f2d584cbefc1de.1632397618.2.1632706827.1632397657.',
       'user-agent':"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.61 Safari/537.36 Edg/94.0.992.3",}
    try:
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except requests.RequestException:
        return ""

def parsePage(ilt, html):
    # On the listing page, the book covers are the only <img> tags
    # with a width attribute of "90".
    try:
        soup = BeautifulSoup(html, "html.parser")
        for img in soup.find_all('img', {"width": "90"}):
            ilt.append(img['src'])
    except (TypeError, KeyError):
        pass

def main():
    # %E5%90%8D%E8%91%97 is the percent-encoded tag 名著 ("classics");
    # unquote() decodes it back to the readable form.
    last_url = unquote('https://book.douban.com/tag/%E5%90%8D%E8%91%97?start=')
    imalist = []
    for i in range(2):  # two pages, 25 books per page
        try:
            url = last_url + str(25 * i)
            html = getHTMLText(url)
            parsePage(imalist, html)
        except Exception:
            continue
    x = 0
    for u in imalist:
        r = requests.get(u)
        x = x + 1
        # Note: the target folder must already exist.
        with open('D://封面/' + str(x) + '.jpg', 'wb') as f:
            f.write(r.content)
        print("Saved {}.jpg".format(x))

main()
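One caveat with the save loop above: it assumes the D:\封面 folder already exists and raises FileNotFoundError otherwise. A more portable sketch (the function name and default folder here are illustrative, not from the original script) creates the directory first with pathlib:

```python
from pathlib import Path

def save_images(blobs, out_dir='covers'):
    """Write each binary blob as 1.jpg, 2.jpg, ... inside out_dir; return the paths."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)  # create the folder if it is missing
    paths = []
    for i, blob in enumerate(blobs, start=1):
        path = out / '{}.jpg'.format(i)
        path.write_bytes(blob)
        paths.append(path)
    return paths
```

With this helper, the download loop in main() could be replaced by something like `save_images([requests.get(u).content for u in imalist])`.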

After running, the cover images are saved as numbered .jpg files (1.jpg, 2.jpg, ...) in the target folder, with a confirmation message printed for each one.

Reprinted from www.mshxw.com; original article: https://www.mshxw.com/it/280512.html