
Crawling Scrapy from a script always blocks script execution after the crawl



You will need to stop the reactor once the spider finishes. You can accomplish this by listening for the spider_closed signal:

from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher
from testspiders.spiders.followall import FollowAllSpider

def stop_reactor():
    reactor.stop()

# Stop the reactor when the spider_closed signal fires
dispatcher.connect(stop_reactor, signal=signals.spider_closed)

spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Running reactor...')
reactor.run()  # the script will block here until the spider is closed
log.msg('Reactor stopped.')

The command-line log output should look something like this:

stav@maia:/srv/scrapy/testspiders$ ./api
2013-02-10 14:49:38-0600 [scrapy] INFO: Running reactor...
2013-02-10 14:49:47-0600 [followall] INFO: Closing spider (finished)
2013-02-10 14:49:47-0600 [followall] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 23934,
    ...}
2013-02-10 14:49:47-0600 [followall] INFO: Spider closed (finished)
2013-02-10 14:49:47-0600 [scrapy] INFO: Reactor stopped.
stav@maia:/srv/scrapy/testspiders$
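Note that the answer above uses old Scrapy 0.x APIs: scrapy.log and scrapy.xlib.pydispatch were removed in later Scrapy releases, where CrawlerProcess handles reactor startup and shutdown for you. The underlying pattern itself, block the main thread in a loop until a completion callback stops it, can be sketched with the standard library alone. All names here (MiniReactor, spider_job) are illustrative stand-ins, not Scrapy or Twisted APIs:

```python
import queue
import threading

class MiniReactor:
    """A tiny stand-in for Twisted's reactor: run() blocks until stop() is called."""

    def __init__(self):
        self._stopped = queue.Queue()

    def run(self):
        self._stopped.get()  # block the caller until stop() posts a sentinel

    def stop(self):
        self._stopped.put(None)

reactor = MiniReactor()
log = []

def spider_job():
    # Stand-in for the crawl; its final act plays the role of the
    # spider_closed handler and releases the blocked main thread.
    log.append("Closing spider (finished)")
    reactor.stop()

log.append("Running reactor...")
threading.Thread(target=spider_job).start()
reactor.run()  # the script blocks here until spider_job calls stop()
log.append("Reactor stopped.")
```

The main thread cannot append "Reactor stopped." until the worker has logged its completion and called stop(), which mirrors why the Scrapy script above only continues past reactor.run() after the spider_closed signal fires.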


Reprinted from www.mshxw.com. Original URL: https://www.mshxw.com/it/377803.html