To crawl an entire website, you should use CrawlSpider rather than scrapy.Spider.
Here is an example.
For your purposes, try something like the following:
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    rules = (
        # Follow every link within the allowed domain and hand each
        # response to parse_item
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Derive a filename from the second-to-last URL segment and
        # save the raw page body to disk
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
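One thing to be aware of: the filename expression `response.url.split("/")[-2]` takes the second-to-last slash-separated segment of the URL, which is the last path component only when the URL ends with a trailing slash. A small standalone sketch (the `filename_for` helper and the URLs are just for illustration, not part of Scrapy):

```python
def filename_for(url):
    # Mirrors the spider's logic: second-to-last segment plus '.html'.
    # Only gives the last path component when the URL ends with "/".
    return url.split("/")[-2] + '.html'

print(filename_for('http://www.example.com/items/widgets/'))  # widgets.html
print(filename_for('http://www.example.com/items/widgets'))   # items.html
```

If your target site does not use trailing slashes consistently, you may want a more robust scheme (e.g. hashing the URL) to avoid collisions and overwrites.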


