
Python Crawler: Collecting All External Links on a Site (Code Example)



[Figure: flowchart of a site crawler that collects all external links]

The example below uses this site's article "python绘制条形图方法代码详解" (a walkthrough on drawing bar charts in Python) as its starting page; you can adapt it for your own crawls.

Complete code:

#! /usr/bin/env python
# coding=utf-8

import urllib2
from bs4 import BeautifulSoup
import re
import datetime
import random

pages = set()
random.seed(datetime.datetime.now())

# Retrieves a list of all internal links found on a page
def getInternallinks(bsObj, includeUrl):
    internallinks = []
    # Finds all links that begin with a "/" or that contain the current domain
    for link in bsObj.findAll("a", href=re.compile("^(/|.*" + includeUrl + ")")):
        if link.attrs['href'] is not None:
            if link.attrs['href'] not in internallinks:
                internallinks.append(link.attrs['href'])
    return internallinks

# Retrieves a list of all external links found on a page
def getExternallinks(bsObj, excludeUrl):
    externallinks = []
    # Finds all links that start with "http" or "www" and do
    # not contain the current domain
    for link in bsObj.findAll("a",
            href=re.compile("^(http|www)((?!" + excludeUrl + ").)*$")):
        if link.attrs['href'] is not None:
            if link.attrs['href'] not in externallinks:
                externallinks.append(link.attrs['href'])
    return externallinks

# Splits an address into its parts; addressParts[0] is the domain
def splitAddress(address):
    addressParts = address.replace("http://", "").replace("https://", "").split("/")
    return addressParts

def getRandomExternallink(startingPage):
    html = urllib2.urlopen(startingPage)
    bsObj = BeautifulSoup(html, "html.parser")
    externallinks = getExternallinks(bsObj, splitAddress(startingPage)[0])
    if len(externallinks) == 0:
        # No external links on this page: fall back to a random internal link
        internallinks = getInternallinks(bsObj, splitAddress(startingPage)[0])
        return internallinks[random.randint(0, len(internallinks) - 1)]
    else:
        return externallinks[random.randint(0, len(externallinks) - 1)]

# Hops from one random external link to the next, indefinitely
def followExternalonly(startingSite):
    externallink = getRandomExternallink(startingSite)
    print("Random external link is: " + externallink)
    followExternalonly(externallink)

# Collects a list of all external URLs found on the site
allExtlinks = set()
allIntlinks = set()
def getAllExternallinks(siteUrl):
    html = urllib2.urlopen(siteUrl)
    bsObj = BeautifulSoup(html, "html.parser")
    domain = splitAddress(siteUrl)[0]
    internallinks = getInternallinks(bsObj, domain)
    externallinks = getExternallinks(bsObj, domain)
    for link in externallinks:
        if link not in allExtlinks:
            allExtlinks.add(link)
            print(link)
    for link in internallinks:
        if link not in allIntlinks:
            allIntlinks.add(link)
            # Relative links ("/...") need the domain prepended before fetching
            if link.startswith("/"):
                link = "http://" + domain + link
            print("about to get link: " + link)
            getAllExternallinks(link)

getAllExternallinks("https://www.jb51.net/article/130968.htm")

When run, the crawler prints each newly discovered external link as it goes, plus an "about to get link:" line for every internal page it recurses into.
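
The listing above targets Python 2 (urllib2). On Python 3, the same fetch-and-filter step would use urllib.request instead; here is a minimal sketch for one page's external links, assuming BeautifulSoup 4 is installed:

#!/usr/bin/env python3
import re
from urllib.request import urlopen
from bs4 import BeautifulSoup

def get_external_links(url, exclude_domain):
    # Fetch and parse the page with the stdlib HTML parser
    bs = BeautifulSoup(urlopen(url), "html.parser")
    # Absolute links that never mention the current domain count as external
    pattern = re.compile("^(http|www)((?!" + re.escape(exclude_domain) + ").)*$")
    return {a["href"] for a in bs.find_all("a", href=pattern)}

for link in get_external_links("https://www.jb51.net/article/130968.htm", "jb51.net"):
    print(link)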

Summary

That covers this article's code example for collecting all external links on a site with a Python crawler; I hope it helps. Interested readers can browse this site's other related topics. If anything here falls short, please leave a comment pointing it out. Thanks for your support!
