Someone suggested that I use the futures package for this. I tried it, and it seems to be working.
http://pypi.python.org/pypi/futures
Here is an example:
```python
"""Download many URLs in parallel."""
import urllib.request

import futures

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    return urllib.request.urlopen(url, timeout=timeout).read()

with futures.ThreadPoolExecutor(max_workers=50) as executor:
    # Each submit() schedules one download and returns a Future
    future_list = [executor.submit(load_url, url, 30) for url in URLS]
```

Note that early releases of the package used `executor.run_to_futures` with `functools.partial`; current versions follow PEP 3148, where you call `executor.submit` instead. On Python 3.2+ the same API ships in the standard library as `concurrent.futures`.
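The example above only creates the futures; you usually also want to collect their results. A minimal sketch of that step, using the standard-library `concurrent.futures` (the Python 3 home of this API) and a local stand-in function instead of `load_url`, so it runs without network access:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical stand-in for load_url: squares a number instead of
# fetching a URL, so the example is self-contained.
def work(n):
    return n * n

with ThreadPoolExecutor(max_workers=4) as executor:
    # Map each future back to its input so results can be identified
    # even though they may complete in any order.
    future_to_n = {executor.submit(work, n): n for n in range(5)}
    results = {}
    for future in as_completed(future_to_n):
        n = future_to_n[future]
        # result() re-raises any exception the worker raised, so a
        # failed download would surface here rather than be lost.
        results[n] = future.result()

print(results)
```

With real downloads, `future.result()` would return the page bytes, and wrapping it in try/except lets you handle URLs that time out (like the made-up domain in the list) without losing the others.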