This article explains in detail how to count the occurrences of English words in a plain-text file with Python. It is quite practical, so it is shared here for reference; hopefully you will take something away from it. The specific approach is as follows:
Version 1: inefficient
# -*- coding:utf-8 -*-
#!python3
path = 'test.txt'
with open(path, encoding='utf-8', newline='') as f:
    word = []
    words_dict = {}
    for letter in f.read():
        if letter.isalnum():
            word.append(letter)
        elif letter.isspace():  # whitespace: space, \t, \n
            if word:
                word = ''.join(word).lower()  # lowercase
                if word not in words_dict:
                    words_dict[word] = 1
                else:
                    words_dict[word] += 1
                word = []
    # handle the last word
    if word:
        word = ''.join(word).lower()  # lowercase
        if word not in words_dict:
            words_dict[word] = 1
        else:
            words_dict[word] += 1
        word = []

for k, v in words_dict.items():
    print(k, v)
Output:
we 4
are 1
busy 1
all 1
day 1
like 1
swarms 1
of 6
flies 1
without 1
souls 1
noisy 1
restless 1
unable 1
to 1
hear 1
the 7
voices 1
soul 1
as 1
time 1
goes 1
by 1
childhood 1
away 2
grew 1
up 1
years 1
a 1
lot 1
memories 1
once 1
have 2
also 1
eroded 1
bottom 1
childish 1
innocence 1
regardless 1
shackles 1
mind 1
indulge 1
in 1
world 1
buckish 1
focus 1
on 1
beneficial 1
principle 1
lost 1
themselves 1
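Incidentally, the duplicated "seen before or not" branch in version 1 can be folded away with collections.defaultdict(int) from the standard library. A minimal sketch with the same character-by-character logic (and the same output as above):

# -*- coding:utf-8 -*-
# A sketch of version 1 rewritten with collections.defaultdict(int):
# missing keys default to 0, so the if/else on membership disappears.
import collections

path = 'test.txt'
words_dict = collections.defaultdict(int)

with open(path, encoding='utf-8', newline='') as f:
    word = []
    for letter in f.read():
        if letter.isalnum():
            word.append(letter)
        elif letter.isspace():  # whitespace ends the current word
            if word:
                words_dict[''.join(word).lower()] += 1
                word = []
    if word:  # handle the last word
        words_dict[''.join(word).lower()] += 1

for k, v in words_dict.items():
    print(k, v)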
Version 2:
Drawback: the whole file has to be read into memory at once, so performance suffers on large files.
# -*- coding:utf-8 -*-
#!python3
import re

path = 'test.txt'
with open(path, 'r', encoding='utf-8') as f:
    data = f.read()

word_reg = re.compile(r'\w+')
# word_reg = re.compile(r'\w+\b')
word_list = word_reg.findall(data)
word_list = [word.lower() for word in word_list]  # lowercase
word_set = set(word_list)  # deduplicate to avoid counting the same word twice

# words_dict = {}
# for word in word_set:
#     words_dict[word] = word_list.count(word)

# more concise dict comprehension
words_dict = {word: word_list.count(word) for word in word_set}
for k, v in words_dict.items():
    print(k, v)
Output:
on 1
also 1
souls 1
focus 1
soul 1
time 1
noisy 1
grew 1
lot 1
childish 1
like 1
voices 1
indulge 1
swarms 1
buckish 1
restless 1
we 4
hear 1
childhood 1
as 1
world 1
themselves 1
are 1
bottom 1
memories 1
the 7
of 6
flies 1
without 1
have 2
day 1
busy 1
to 1
eroded 1
regardless 1
unable 1
innocence 1
up 1
a 1
in 1
mind 1
goes 1
by 1
lost 1
principle 1
once 1
away 2
years 1
beneficial 1
all 1
shackles 1
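Because the dictionary in version 2 is built from a set, the printing order above is essentially arbitrary (version 1 printed words in order of first appearance). If the most frequent words should appear first, the items can be sorted by count. A minimal sketch reusing version 2's approach:

# -*- coding:utf-8 -*-
# A sketch: same regex-based counting as version 2, with the result
# printed from most to least frequent instead of in arbitrary order.
import re

path = 'test.txt'
with open(path, 'r', encoding='utf-8') as f:
    data = f.read()

word_list = [w.lower() for w in re.findall(r'\w+', data)]
words_dict = {word: word_list.count(word) for word in set(word_list)}

for word, count in sorted(words_dict.items(), key=lambda kv: kv[1], reverse=True):
    print(word, count)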
Version 3:
# -*- coding:utf-8 -*-
#!python3
import re

path = 'test.txt'
with open(path, 'r', encoding='utf-8') as f:
    word_list = []
    word_reg = re.compile(r'\w+')
    for line in f:
        # line_words = word_reg.findall(line)
        line_words = line.split()  # simpler than the regex above
        word_list.extend(line_words)

word_set = set(word_list)  # deduplicate to avoid counting the same word twice
words_dict = {word: word_list.count(word) for word in word_set}
for k, v in words_dict.items():
    print(k, v)
Output:
childhood 1
innocence, 1
are 1
of 6
also 1
lost 1
We 1
regardless 1
noisy, 1
by, 1
on 1
themselves. 1
grew 1
lot 1
bottom 1
buckish, 1
time 1
childish 1
voices 1
once 1
restless, 1
shackles 1
world 1
eroded 1
As 1
all 1
day, 1
swarms 1
we 3
soul. 1
memories, 1
in 1
without 1
like 1
beneficial 1
up, 1
unable 1
away 1
flies 1
goes 1
a 1
have 2
away, 1
mind, 1
focus 1
principle, 1
hear 1
to 1
the 7
years 1
busy 1
souls, 1
indulge 1
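Note that these counts differ from versions 1 and 2: line.split() only cuts on whitespace, so punctuation stays attached ("day,", "soul.") and case is not unified ("We" and "we" are counted separately). A minimal sketch that keeps the line-by-line reading but applies the compiled \w+ regex and lower() again, restoring the earlier counts:

# -*- coding:utf-8 -*-
# A sketch: line-by-line reading as in version 3, but tokenizing with the
# \w+ regex and lowercasing, so punctuation is stripped and case is unified.
import re

path = 'test.txt'
word_reg = re.compile(r'\w+')
word_list = []

with open(path, 'r', encoding='utf-8') as f:
    for line in f:
        word_list.extend(w.lower() for w in word_reg.findall(line))

words_dict = {word: word_list.count(word) for word in set(word_list)}
for k, v in words_dict.items():
    print(k, v)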
Version 4: count with Counter
# -*- coding:utf-8 -*-
#!python3
import collections
import re

path = 'test.txt'
with open(path, 'r', encoding='utf-8') as f:
    word_list = []
    word_reg = re.compile(r'\w+')
    for line in f:
        line_words = line.split()
        word_list.extend(line_words)

words_dict = dict(collections.Counter(word_list))  # count with Counter
for k, v in words_dict.items():
    print(k, v)
Output:
We 1
are 1
busy 1
all 1
day, 1
like 1
swarms 1
of 6
flies 1
without 1
souls, 1
noisy, 1
restless, 1
unable 1
to 1
hear 1
the 7
voices 1
soul. 1
As 1
time 1
goes 1
by, 1
childhood 1
away, 1
we 3
grew 1
up, 1
years 1
away 1
a 1
lot 1
memories, 1
once 1
have 2
also 1
eroded 1
bottom 1
childish 1
innocence, 1
regardless 1
shackles 1
mind, 1
indulge 1
in 1
world 1
buckish, 1
focus 1
on 1
beneficial 1
principle, 1
lost 1
themselves. 1
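Counter also provides most_common(n), which returns the n most frequent items directly. A minimal sketch that combines it with line-by-line reading plus the \w+ regex and lower() for normalized counts (the normalization is an addition here; version 4 above counts the raw split() tokens):

# -*- coding:utf-8 -*-
# A sketch: Counter.update() accumulates counts line by line, and
# most_common(5) prints the five most frequent normalized words.
import collections
import re

path = 'test.txt'
word_reg = re.compile(r'\w+')
counter = collections.Counter()

with open(path, 'r', encoding='utf-8') as f:
    for line in f:
        counter.update(w.lower() for w in word_reg.findall(line))

for word, count in counter.most_common(5):  # five most frequent words
    print(word, count)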
Note: the test file test.txt used throughout is as follows:
We are busy all day, like swarms of flies without souls, noisy, restless, unable to hear the voices of the soul. As time goes by, childhood away, we grew up, years away a lot of memories, once have also eroded the bottom of the childish innocence, we regardless of the shackles of mind, indulge in the world buckish, focus on the beneficial principle, we have lost themselves.
That wraps up "how to count the occurrences of English words in a plain-text file with Python". Hopefully the content above is of some help and teaches you something new; if you found the article useful, feel free to share it.