You might consider a trie or a DAWG or a database. There are several Python implementations of each.
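To make the trie suggestion concrete, here is a minimal dict-of-dicts sketch (the helper names `trie_insert`/`trie_contains` are illustrative, not from any particular library). Lookup cost depends on the word's length, not on how many words are stored:

```python
def trie_insert(root, word):
    # Walk/extend one nested dict per character.
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node['$'] = True  # end-of-word marker

def trie_contains(root, word):
    node = root
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return '$' in node

root = {}
for w in ("cat", "car", "dog"):
    trie_insert(root, w)

print(trie_contains(root, "car"))  # True
print(trie_contains(root, "ca"))   # False: prefix only, no end marker
```

A real DAWG additionally merges shared suffixes, which matters for memory at dictionary scale; for pure membership speed, a plain `set` is already hard to beat.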
Here are some relevant timings to consider, set vs list:
```python
import timeit
import random

with open('/usr/share/dict/words', 'r') as di:  # UNIX 250k unique word list
    all_words_set = {line.strip() for line in di}

all_words_list = list(all_words_set)  # slightly faster if this list is sorted...

test_list = [random.choice(all_words_list) for i in range(10000)]
test_set = set(test_list)

def set_f():
    count = 0
    for word in test_set:
        if word in all_words_set:
            count += 1
    return count

def list_f():
    count = 0
    for word in test_list:
        if word in all_words_list:
            count += 1
    return count

def mix_f():
    # use list for source, set for membership testing
    count = 0
    for word in test_list:
        if word in all_words_set:
            count += 1
    return count

print("list:", timeit.Timer(list_f).timeit(1), "secs")
print("set:", timeit.Timer(set_f).timeit(1), "secs")
print("mixed:", timeit.Timer(mix_f).timeit(1), "secs")
```

Prints:
```
list: 47.4126560688 secs
set: 0.00277495384216 secs
mixed: 0.00166988372803 secs
```
That is, matching a set of 10,000 words against a set of 250,000 words is about 17,085x faster than matching the same 10,000-word list against the same 250,000-word list. Using a list for the source and a set for membership testing is about 28,392x faster than using an unsorted list alone.
For membership testing, a list is O(n), while sets and dicts are O(1) per lookup.
Conclusion: use a better data structure for 600 million lines of text!



