This question is twofold; an answer to either part would be an acceptable solution. Suggestions shown as R code would be much appreciated.
1) The NRC lexicon in the syuzhet package produces the widest range of emotions, but it does not appear to handle negators. After reading the documentation I am still not sure how to overcome this. Perhaps by multiplying the positive/negative codings of the words in each sentence, e.g. I(0) AM(0) NOT(-1) ANGRY(-1) = (-1 * -1) = 1. However, I do not know how to write this in working code.
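A minimal sketch of that sign-multiplication idea in base R (toy polarity scores, not the NRC lexicon; the function name and negator list are my own invention):

```r
# Toy sign-multiplication: a negator flips the sign of every word after it,
# so "not angry" becomes (-1) * (-1) = 1, i.e. positive.
flip_negations <- function(tokens, polarity, negators = c("not", "never", "no")) {
  signs <- ifelse(tokens %in% negators, -1, 1)  # negators contribute -1
  sum(polarity * cumprod(signs))                # flip all words after a negator
}

flip_negations(c("i", "am", "not", "angry"), c(0, 0, 0, -1))  # 1 (positive)
flip_negations(c("i", "am", "angry"),        c(0, 0, -1))     # -1 (negative)
```

This is only a per-sentence toy; it ignores clause boundaries and double negation scope, which the sentimentr valence shifters handle properly.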
2) After a lot of research and testing, I found that the jockers_rinker lexicon in sentimentr handles negators and modifiers much better (https://github.com/trinker/sentimentr#comparing-sentimentr-syuzhet-meanr-and-stanford). I could use sentimentr to "quality test" the syuzhet/NRC results by comparing the binary sentiment output of the two packages: if they diverge too much, NRC is not accurate enough for that particular body of text. However, I only know how to obtain the individual scores, not the totals per sentiment (a positive sum and a negative sum).
Below you can see how my test results compare on a concatenated vector of strings, with and without modifiers and negators.
# syuzhet:
library("syuzhet")
MySentiments = c("I am happy", "I am very happy", "I am not happy",
                 "It was bad", "It is never bad", "I love it", "I hate it")
get_nrc_sentiment(MySentiments, cl = NULL, language = "english")
# Result:
anger anticipation disgust fear joy sadness surprise trust negative positive
    0            1       0    0   1       0        0     1        0        1
    0            1       0    0   1       0        0     1        0        1
    0            1       0    0   1       0        0     1        0        1
    1            0       1    1   0       1        0     0        1        0
    1            0       1    1   0       1        0     0        1        0
    0            0       0    0   1       0        0     0        0        1
    1            0       1    1   0       1        0     0        1        0
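For the totals question in point 2, colSums() over the data frame that get_nrc_sentiment() returns would give one sum per emotion column. A sketch on a stand-in data frame with the same two polarity columns (made-up scores, not the output above):

```r
# Stand-in for get_nrc_sentiment() output; only the two polarity columns shown.
nrc <- data.frame(negative = c(0, 1, 0), positive = c(1, 0, 1))

colSums(nrc)                       # one total per column: negative 1, positive 2
sign(nrc$positive - nrc$negative)  # per-sentence binary polarity: 1, -1, 1
```

The same colSums() call on the full ten-column NRC output would give a total for every emotion at once.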
# sentimentr:
library("sentimentr")
MySentiments = c("I am happy", "I am very happy", "I am not happy",
                 "It was bad", "It is never bad", "I love it", "I hate it")
sentiment(MySentiments,
          polarity_dt = lexicon::hash_sentiment_jockers_rinker,
          valence_shifters_dt = lexicon::hash_valence_shifters,
          hyphen = "", amplifier.weight = 0.8, n.before = 5, n.after = 2,
          question.weight = 1, adversative.weight = 0.25,
          neutral.nonverb.like = FALSE, missing_value = NULL)
# Results:
element_id sentence_id word_count  sentiment
         1           1          3  0.4330127
         2           1          4  0.6750000
         3           1          4 -0.3750000
         4           1          3 -0.4330127
         5           1          4  0.3750000
         6           1          3  0.4330127
         7           1          3 -0.4330127
The first output does not seem to recognize the significance of "very", "not", and "never".
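The binary quality test described in point 2 could then collapse both outputs with sign() before comparing. A sketch using the sentiment values from the sentimentr output above and the positive/negative columns of the NRC table:

```r
sentimentr_scores <- c(0.4330127, 0.6750000, -0.3750000, -0.4330127,
                       0.3750000, 0.4330127, -0.4330127)
nrc_binary <- c(1, 1, 1, -1, -1, 1, -1)  # positive minus negative, per NRC row

agreement <- sign(sentimentr_scores) == nrc_binary
mean(agreement)  # share of sentences on which the two packages agree
```

On these seven test sentences the packages disagree exactly where the negators sit ("I am not happy", "It is never bad"), which is the divergence the quality test is meant to flag.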