
Apologies if this is a very specific question that may not generalize well to other people's problems.

Background

I want to do some sentiment analysis, starting with basic binary matching of words against a lexicon, and later moving on to more sophisticated forms of sentiment analysis that exploit grammatical rules and the like.

Problem

To do the binary matching (which will form the first phase of the sentiment analysis), I have produced two tables: one containing the words and another containing the parts of speech of those words.

    V1     V2        V3          V4   V5
1    R     is fantastic    language <NA>
2 Java     is       far        from good
3 Data mining        is fascinating <NA>


   V1  V2  V3 V4   V5
1  NN VBZ  JJ NN <NA>
2 NNP VBZ  RB IN   JJ
3 NNP  NN VBZ JJ <NA>

I want to do some basic sentiment analysis along the following lines: I want to apply a function that takes two arguments, a word (from the first data frame) and its corresponding POS tag (from the second), and uses the tag to decide which word list to consult when determining the positive/negative orientation of that word. For example, the word "fantastic" would be picked up together with the POS tag "JJ", so only the adjective list would be checked for it.

Ultimately, I want to end up with a data frame showing the results of the matching:

   V1  V2  V3 V4   V5
1  0   0   1   0   <NA>
2  0   0  -1   0   1
3  0   0   0   1   <NA>

I tried to write my own code, but I kept getting errors, and eventually I concluded that this approach was not going to work.

#test sentences
sentences<- as.list(c("R is fantastic language", "Java is far from good", "Data mining is fascinating"))

#using the OpenNLP package
require(openNLP)

#perform tagging
taggedSentences<- tagPOS(sentences)

#split to words
individualWords<- unname(sapply(taggedSentences, function(x){strsplit(x,split=" ")}))

#Strip Tags
individualWordsClean<- unname(sapply(individualWords, function(x){gsub("/.+","",x)}))

#Strip words
individualTags<- unname(sapply(individualWords, function(x){gsub(".+/","",x)}))
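
#illustration only: the two gsub() calls above split a tagged token in the
#word/TAG format produced by tagPOS, e.g. "fantastic/JJ":
#  gsub("/.+","","fantastic/JJ")   keeps the word "fantastic"
#  gsub(".+/","","fantastic/JJ")   keeps the tag "JJ"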

#create a dataframe for words; courtesy @trinker
numberRow<- length(individualWords)
numberCol<- unname(sapply(individualWords, length))
df1<- as.data.frame(matrix(nrow=numberRow, ncol=max(numberCol)))
for (i in 1:numberRow){
  df1[i,1:numberCol[i]]<- individualWordsClean[[i]]
}


#create a dataframe for tags; courtesy @trinker
numberRow<- length(individualWords)
numberCol<- unname(sapply(individualTags, length))
df2<- as.data.frame(matrix(nrow=numberRow, ncol=max(numberCol)))
for (i in 1:numberRow){
  df2[i,1:numberCol[i]]<- individualTags[[i]]
}

#Create negative/positive words' lists
posAdj<- c("fantastic","fascinating","good")
negAdj<- c("bad","poor")
posNoun<- "R"
negNoun<- "Java"

#Function to match words and assign sentiment score
checkLexicon<- function(word,tag){
  if (grep("JJ|JJR|JJS",tag)){
    ifelse(word %in% posAdj, +1, ifelse(word %in% negAdj, -1, 0))
  }
  else if(grep("NN|NNP|NNPS|NNS",tag)){
    ifelse(word %in% posNoun, +1, ifelse(word %in% negNoun, -1, 0))
  }
  else if(grep("VBZ",tag)){
    ifelse(word %in% "is","ok","none")
  }
  else if(grep("RB",tag)){
    ifelse(word %in% "not",-1,0)
  }
  else if(grep("IN",tag)){
    ifelse(word %in% "far",-1,0)
  }
}

#Method to output a single value when used in conjunction with apply
justShow<- function(x){
  x
}

#Main call: extract each word/POS tag pair and determine its sentiment score
mapply(FUN=checkLexicon, word=apply(df1,2,justShow),tag=apply(df2,2,justShow))

Unfortunately, I had no success with this approach; the error I received was

Error in if (grep("JJ|JJR|JJS", tag)) { : argument is of length zero

I am fairly new to R, but it seems that I cannot use the apply function here, because it does not return any arguments to the mapply function. Also, I am not sure whether mapply will actually produce another data frame.
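
As a side note, here is a tiny standalone example (separate from the pipeline above) of what mapply() returns: by default it simplifies the result to a vector or matrix, not a data frame.

#standalone illustration: mapply() pairs up corresponding elements of its arguments
mapply(function(w, t) paste(w, t, sep="/"),
       w=c("R", "is", "fantastic"),
       t=c("NN", "VBZ", "JJ"),
       USE.NAMES=FALSE)
#[1] "R/NN"          "is/VBZ"        "fantastic/JJ"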

Please critique/suggest. Thanks.

PS: Link to trinker's notes, for those interested in R.


1 Answer


The error was caused by using grep as if it were grepl. This was corrected after Joran pointed it out. The working function is below.
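
To see why the original version failed: grep() returns the indices of matching elements, so a non-match yields integer(0), which if() cannot evaluate, while grepl() always returns TRUE or FALSE. A minimal illustration:

grep("JJ|JJR|JJS", "VBZ")    #integer(0): nothing matched, so there is no index to return
grepl("JJ|JJR|JJS", "VBZ")   #FALSE
#if (grep("JJ|JJR|JJS", "VBZ")) 1 else 0   reproduces the error:
#Error in if (...) : argument is of length zero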

>df1

    V1     V2        V3          V4   V5
1    R     is fantastic    language <NA>
2 Java     is       far        from good
3 Data mining        is fascinating <NA>

>df2

   V1  V2  V3 V4   V5
1  NN VBZ  JJ NN <NA>
2 NNP VBZ  RB IN   JJ
3 NNP  NN VBZ JJ <NA>

#Function to match words and assign sentiment score
checkLexicon<- function(word,tag){
  if (grepl("JJ|JJR|JJS",tag)){
    ifelse(word %in% posAdj, +1, ifelse(word %in% negAdj, -1, 0))
  }
  else if(grepl("NN|NNP|NNPS|NNS",tag)){
    ifelse(word %in% posNoun, +1, ifelse(word %in% negNoun, -1, 0))
  }
  else if(grepl("VBZ",tag)){
    ifelse(word %in% "is","ok","none")
  }
  else if(grepl("RB",tag)){
    ifelse(word %in% "not",-1,0)
  }
  else if(grepl("IN",tag)){
    ifelse(word %in% "far",-1,0)
  }
}

#Method to output a single value when used in conjunction with apply
justShow<- function(x){
  x
}

#Main call: extract each word/POS tag pair and determine its sentiment score
myObject<- mapply(FUN=checkLexicon, word=apply(df1,2,justShow),tag=apply(df2,2,justShow))

#Shaping the final dataframe
scoredDF<- as.data.frame(matrix(myObject,nrow=3))

  V1 V2 V3 V4   V5
1  1 ok  1  0 NULL
2 -1 ok  0  0    1
3  0  0 ok  1 NULL
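
The NULL entries correspond to the <NA> word/tag cells: none of the grepl() branches are TRUE for them, so checkLexicon() returns NULL and mapply() falls back to returning a list rather than a simple vector. If plain NA values are preferred, an optional clean-up (a sketch, not part of the original answer) would be:

#replace the NULLs produced for the <NA> cells with NA, then rebuild the data frame
myObject[sapply(myObject, is.null)]<- NA
scoredDF<- as.data.frame(matrix(unlist(myObject), nrow=3))

Note that the "ok" strings from the VBZ branch coerce the unlisted values to character; returning 0 there instead would keep the scores numeric.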
Answered 2013-07-20T09:30:12.420