
Scrapy: crawling the next page


Your rule is never used, because your spider does not subclass CrawlSpider. You therefore have to create the next-page requests manually, like this:

# -*- coding: utf-8 -*-
import scrapy
from lxml import html


class Scrapy1Spider(scrapy.Spider):
    name = "craiglist"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = (
        'http://sfbay.craigslist.org/search/npo',
    )

    def parse(self, response):
        site = html.fromstring(response.body_as_unicode())
        # the attribute predicates below were lost from the original post;
        # fill in the class/id the target page actually uses
        titles = site.xpath('//div[@]/p[@]')
        print(len(titles), 'AAAA')
        # follow next page links
        next_page = response.xpath('.//a[@]/@href').extract()
        if next_page:
            next_href = next_page[0]
            next_page_url = 'http://sfbay.craigslist.org' + next_href
            yield scrapy.Request(url=next_page_url)
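A side note on the URL building above: string concatenation only works when the extracted href is root-relative. The standard-library urljoin (which is also what Scrapy's response.urljoin wraps) resolves relative, root-relative, and absolute hrefs alike. The hrefs below are made-up examples, not values taken from the real page:

```python
from urllib.parse import urljoin

base = 'http://sfbay.craigslist.org/search/npo'

# root-relative href: behaves like the plain concatenation
print(urljoin(base, '/search/npo?s=120'))
# → http://sfbay.craigslist.org/search/npo?s=120

# page-relative href: concatenation would produce a broken URL,
# urljoin resolves it against the base path correctly
print(urljoin(base, '?s=120'))
# → http://sfbay.craigslist.org/search/npo?s=120
```

Inside a spider callback, response.urljoin(next_href) does the same resolution against the response's own URL.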

Or use CrawlSpider, like this:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from lxml import html


class Scrapy1Spider(CrawlSpider):
    name = "craiglist"
    allowed_domains = ["sfbay.craigslist.org"]
    start_urls = (
        'http://sfbay.craigslist.org/search/npo',
    )

    # the attribute predicate below was lost from the original post;
    # restrict it to the "next page" link of the target site
    rules = (
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//a[@]',)),
             callback="parse_page", follow=True),
    )

    def parse_page(self, response):
        site = html.fromstring(response.body_as_unicode())
        titles = site.xpath('//div[@]/p[@]')
        print(len(titles), 'AAAA')

