When scraping with a Python crawler, you often need to clean the collected data to filter out unwanted content. When the scraped result is plain text, a regular expression (re.sub()) is the usual cleaning tool. But when the result is HTML, regex-based cleaning tends to be twice the work for half the result. So how should HTML results be cleaned instead?
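For plain-text results, the re.sub() approach is usually enough. A minimal sketch (the sample string and patterns here are hypothetical, not from the original article):

```python
import re

text = "联系电话:010-12345678  来源:某网站  正文内容"
# Drop a "来源:..." fragment, then collapse runs of whitespace (hypothetical patterns)
cleaned = re.sub(r'来源:\S+', '', text)
cleaned = re.sub(r'\s+', ' ', cleaned).strip()
print(cleaned)  # 联系电话:010-12345678 正文内容
```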
Code example:

# -*- coding: utf-8 -*-
import scrapy
from lxml import etree
from lxml import html
from html import unescape


class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['www.gongkaoleida.com']
    start_urls = ['https://www.gongkaoleida.com/article/869186']
    # start_urls = ['https://www.gongkaoleida.com/article/869244']

    def parse(self, response):
        content = response.xpath('//article[@]').getall()[0].replace('\n', '').replace('\r', '')
        # print(content)
        tree = etree.HTML(content)
        # Find the tags whose text contains "公考雷达"
        str1 = tree.xpath('//p[contains(text(), "公考雷达")] | //a[contains(text(), "公考雷达")]/..')
        # Find the tags whose text contains "附件:" / "附件:" or a common Office file extension
        str2 = tree.xpath('//a[contains(text(), "附件:") or contains(text(), "附件:") or contains(text(), ".doc") or contains(text(), ".xls") or contains(text(), ".ppt")]/..')
        str3 = tree.xpath('//p[contains(text(), "附件:") or contains(text(), "附件:") or contains(text(), ".doc") or contains(text(), ".xls") or contains(text(), ".ppt")]')
        # Data cleaning: serialize each matched element and strip it out of the raw HTML
        for i in str1 + str2 + str3:
            p1 = html.tostring(i)
            p2 = unescape(p1.decode('utf-8'))
            content = content.replace(p2, '')
        print(content)
