I have this script that performs word searches in a text. The search works very well and the results are as expected. What I'm trying to achieve is to extract the n words closest to the match. For example:
The world is a small place, we should try to take care of it.
Let's say I'm looking for place, and I need to extract the 3 words on the right and the 3 words on the left. In this case they would be:
left -> [is, a, small]
right -> [we, should, try]
What is the best way to do this?
Thanks!
import re

def search(text, n):
    '''Searches for text, and retrieves n words either side of the text, which are returned separately.'''
    word = r"\W*([\w]+)"
    groups = re.search(r'{}\W*{}{}'.format(word * n, 'place', word * n), text).groups()
    return groups[:n], groups[n:]
This lets you specify how many words on either side you want to capture. It works by constructing the regular expression dynamically:
t = "The world is a small place, we should try to take care of it."
search(t, 3)
(('is', 'a', 'small'), ('we', 'should', 'try'))
While a regex works, I think it is overkill for this problem. You are better off with two list comprehensions:
sentence = 'The world is a small place, we should try to take care of it.'.split()
indices = (i for i, word in enumerate(sentence) if word == "place")
neighbors = []
for ind in indices:
    neighbors.append(sentence[ind-3:ind] + sentence[ind+1:ind+4])
Note that if the word you are looking for appears several times in a row in the sentence, this algorithm will include the consecutive occurrences as neighbors. For example:
In [29]: neighbors = []

In [30]: sentence = 'The world is a small place place place, we should try to take care of it.'.split()

In [31]: sentence
Out[31]: ['The', 'world', 'is', 'a', 'small', 'place', 'place', 'place,', 'we', 'should', 'try', 'to', 'take', 'care', 'of', 'it.']

In [32]: indices = [i for i, word in enumerate(sentence) if word == 'place']

In [33]: for ind in indices:
   ....:     neighbors.append(sentence[ind-3:ind] + sentence[ind+1:ind+4])

In [34]: neighbors
Out[34]:
[['is', 'a', 'small', 'place', 'place,', 'we'],
 ['a', 'small', 'place', 'place,', 'we', 'should']]
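If the consecutive-occurrence behaviour is unwanted, one option (a sketch of mine, not part of the original answer) is to tokenize with re.findall so punctuation is dropped, and then filter the keyword itself out of each neighbour list:

```python
import re

text = 'The world is a small place place place, we should try to take care of it.'
keyword = 'place'

# tokenize without punctuation so 'place,' also counts as a match
words = re.findall(r'\w+', text)

neighbors = []
for ind in (i for i, word in enumerate(words) if word == keyword):
    # take up to 3 words on either side, then drop any that are the keyword itself
    around = words[max(ind - 3, 0):ind] + words[ind + 1:ind + 4]
    neighbors.append([w for w in around if w != keyword])
```

Each inner list then contains only genuine neighbours, at the cost of sometimes returning fewer than six words.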
import re

s = 'The world is a small place, we should try to take care of it.'
m = re.search(r'((?:\w+\W+){,3})(place)\W+((?:\w+\W+){,3})', s)
if m:
    l = [x.strip().split() for x in m.groups()]
    left, right = l[0], l[2]
    print(left, right)
Output:
['is', 'a', 'small'] ['we', 'should', 'try']
If you search for The, it yields:
[] ['world', 'is', 'a']
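The pattern above hard-codes both the keyword place and the window size 3. A parameterized sketch (the context function and its name are mine, not from the answer) that escapes the keyword and builds the quantifier dynamically might look like:

```python
import re

def context(s, keyword, n=3):
    # same pattern as above, but with a configurable keyword and window size
    pattern = r'((?:\w+\W+){{,{n}}})({kw})\W+((?:\w+\W+){{,{n}}})'.format(
        n=n, kw=re.escape(keyword))
    m = re.search(pattern, s)
    if not m:
        return None
    groups = [x.strip().split() for x in m.groups()]
    return groups[0], groups[2]

s = 'The world is a small place, we should try to take care of it.'
left, right = context(s, 'place')
```

re.escape keeps keywords containing regex metacharacters from breaking the pattern; like the original, this only finds the first occurrence.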
This handles the scenario where the search keyword occurs multiple times. For example, here is an input text in which the search keyword place occurs 3 times:
The world is a small place, we should try to take care of this small place by planting trees in every place wherever is possible
Here is the function:
import re

def extract_surround_words(text, keyword, n):
    '''
    text    : input text
    keyword : the search keyword we are looking for
    n       : number of words around the keyword
    '''
    # extract all the words from the text
    words = re.findall(r'\w+', text)
    # iterate through all the words
    for index, word in enumerate(words):
        # check whether the search keyword matches
        if word == keyword:
            # fetch the words on the left side
            left_side_words = words[index-n : index]
            # fetch the words on the right side
            right_side_words = words[index+1 : index+n+1]
            print(left_side_words, right_side_words)
Calling the function:
text = 'The world is a small place, we should try to take care of this small place by planting trees in every place wherever is possible'
keyword = "place"
n = 3
extract_surround_words(text, keyword, n)
Output:
['is', 'a', 'small'] ['we', 'should', 'try']
['of', 'this', 'small'] ['by', 'planting', 'trees']
['trees', 'in', 'every'] ['wherever', 'is', 'possible']
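If the surrounding words are needed for further processing rather than just printed, a small variant (returning a list of pairs is my addition, not part of the answer) can collect them:

```python
import re

def surround_words(text, keyword, n):
    # same scan as extract_surround_words, but returns (left, right) pairs
    words = re.findall(r'\w+', text)
    results = []
    for index, word in enumerate(words):
        if word == keyword:
            results.append((words[max(index - n, 0):index],
                            words[index + 1:index + n + 1]))
    return results

text = ('The world is a small place, we should try to take care of '
        'this small place by planting trees in every place wherever is possible')
pairs = surround_words(text, 'place', 3)
```

The max() guard keeps the left slice from wrapping around when the keyword sits near the start of the text.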
Find all the words:
import re
sentence = 'The world is a small place, we should try to take care of it.'
words = re.findall(r'\w+', sentence)
Get the index of the word you are looking for:
index = words.index('place')
Then use slicing to find the others:
left = words[index - 3:index]
right = words[index + 1:index + 4]
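Putting the three steps together: note that list.index raises ValueError when the word is absent, and a plain index - 3 can go negative near the start of the sentence, so a minimal sketch with both guards could be:

```python
import re

sentence = 'The world is a small place, we should try to take care of it.'

# 1. find all the words
words = re.findall(r'\w+', sentence)

# 2. get the index, guarding against a missing keyword
if 'place' in words:                       # list.index raises ValueError otherwise
    index = words.index('place')
    # 3. slice out the neighbours on each side
    left = words[max(index - 3, 0):index]  # max() avoids a negative slice start
    right = words[index + 1:index + 4]
```

Like the regex answer, this only handles the first occurrence; the loop-based answers above cover repeated keywords.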