I want to adapt this TextRank code to extract keywords from my text, with the resulting values normalized between 0 and 1. Here is a short snippet:
# Parse text with spaCy
doc = nlp(text)
# Filter sentences
sentences = self.sentence_segment(doc, candidate_pos, lower) # list of list of words
# Build vocabulary
vocab = self.get_vocab(sentences)
# Get token_pairs from windows
token_pairs = self.get_token_pairs(window_size, sentences)
# Get normalized matrix
g = self.get_matrix(vocab, token_pairs)
# Initialization for weight (PageRank value)
pr = np.array([1] * len(vocab))
# Iteration
previous_pr = 0
for epoch in range(self.steps):
    pr = (1-self.d) + self.d * np.dot(g, pr)
    if abs(previous_pr - sum(pr)) < self.min_diff:
        break
    else:
        previous_pr = sum(pr)
# Get weight for each node
node_weight = dict()
for word, index in vocab.items():
    node_weight[word] = pr[index]
self.node_weight = node_weight
The output I see looks something like this:
# Output
# science - 1.717603106506989
# fiction - 1.6952610926181002
# filmmaking - 1.4388798751402918
# China - 1.4259793786986021
# Earth - 1.3088154732297723
# tone - 1.1145002295684114
# Chinese - 1.0996896235078055
# Wandering - 1.0071059904601571
# weekend - 1.002449354657688
# America - 0.9976329264870932
# budget - 0.9857269586649321
# North - 0.9711240881032547
I want to normalize the TextRank values to between 0 and 1, relative to the maximum value.
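One straightforward way to get that range is to divide every weight by the largest one, so that the maximum becomes exactly 1.0. A minimal sketch, using a few hypothetical entries from the raw output above:

```python
# Hypothetical raw TextRank weights taken from the output above
node_weight = {
    "science": 1.717603106506989,
    "fiction": 1.6952610926181002,
    "budget": 0.9857269586649321,
}

# Divide by the maximum so every value lands in [0, 1]
max_w = max(node_weight.values())
normalized = {word: w / max_w for word, w in node_weight.items()}
# "science" (the largest weight) now maps to exactly 1.0
```

This is a post-hoc rescaling; it does not change the ranking of the keywords, only the scale of the scores.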
I found these two formulas here on Wikipedia, but if I add (1-self.d)/g.shape[0] to the formula above, i.e.:
pr = (1-self.d)/g.shape[0] + self.d * np.dot(g, pr)
I still get some values above 1. What am I doing wrong?
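For what it's worth, the (1-d)/N variant only keeps every entry within [0, 1] when pr starts as a probability distribution (summing to 1) and the matrix is column-stochastic; initializing with np.array([1] * len(vocab)) makes the vector sum to N, so entries above 1 can survive the iteration. A minimal sketch with a hypothetical 3-node column-stochastic matrix:

```python
import numpy as np

d = 0.85
# Hypothetical 3-node column-stochastic matrix (each column sums to 1)
g = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
n = g.shape[0]

# Start from a probability distribution instead of a vector of ones
pr = np.ones(n) / n
for _ in range(30):
    pr = (1 - d) / n + d * np.dot(g, pr)

# sum(pr) is preserved at 1 each step, so every entry stays in [0, 1]
```

Under these assumptions the update preserves sum(pr) == 1, since sum of the new vector is (1-d) + d * sum(g @ pr) = (1-d) + d = 1.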