Don't override the parse function in a CrawlSpider:

When you use a CrawlSpider, you should not override the parse function. There is a warning about this in the CrawlSpider documentation: http://doc.scrapy.org/en/0.14/topics/spiders.html#scrapy.contrib.spiders.Rule

This is because with a CrawlSpider, parse (the default callback of any request) sends the response to be processed by the Rules.
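The dispatch can be illustrated with a self-contained toy sketch (this is NOT the real Scrapy source; MiniCrawlSpider and extract_links are made-up names used only to show the idea): the base class reserves parse() as the entry point that feeds every response to the rules, so a subclass that overrides parse() silently disables rule-based link following.

```python
class MiniCrawlSpider:
    """Toy stand-in for CrawlSpider; rules are callables: response -> list."""

    def __init__(self, rules):
        self.rules = rules

    def parse(self, response):
        # The reserved entry point: dispatch the response to every rule.
        results = []
        for rule in self.rules:
            results.extend(rule(response))
        return results


class BrokenSpider(MiniCrawlSpider):
    def parse(self, response):
        # Overriding parse() bypasses the dispatch above entirely:
        # the rules are never consulted.
        return ["scraped directly, rules ignored"]


def extract_links(response):
    # Pretend rule: "follow" every link found in the response.
    return ["followed:" + link for link in response]


good = MiniCrawlSpider(rules=[extract_links])
bad = BrokenSpider(rules=[extract_links])

print(good.parse(["a.html", "b.html"]))  # the rule runs on each link
print(bad.parse(["a.html", "b.html"]))   # the rule is silently skipped
```

In real Scrapy the same thing happens: the overridden parse() wins, and the Rule tuple on the class is dead code.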
Logging in before crawling:

To perform some kind of initialization before the spider starts crawling, you can use an InitSpider (which inherits from CrawlSpider) and override the init_request function. This function is called when the spider is initializing, before it starts crawling.

For the spider to begin crawling, you need to call self.initialized.

You can read the code responsible for this here (it has helpful docstrings).

An example:
from scrapy.contrib.spiders.init import InitSpider
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import Rule

class MySpider(InitSpider):
    name = 'myspider'
    allowed_domains = ['example.com']
    login_page = 'http://www.example.com/login'
    start_urls = ['http://www.example.com/useful_page/',
                  'http://www.example.com/another_useful_page/']

    rules = (
        Rule(SgmlLinkExtractor(allow=r'-\w+.html$'),
             callback='parse_item', follow=True),
    )

    def init_request(self):
        """This function is called before crawling starts."""
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """Generate a login request."""
        return FormRequest.from_response(response,
                    formdata={'name': 'herman', 'password': 'password'},
                    callback=self.check_login_response)

    def check_login_response(self, response):
        """Check the response returned by a login request to see if we are
        successfully logged in.
        """
        if "Hi Herman" in response.body:
            self.log("Successfully logged in. Let's start crawling!")
            # Now the crawling can begin..
            return self.initialized()
        else:
            self.log("Bad times :(")
            # Something went wrong, we couldn't log in, so nothing happens.

    def parse_item(self, response):
        # Scrape data from page
        pass


