I want to create a term network analysis plot in R based on certain word associations, but I don't know how to go beyond plotting the entire term-document matrix:
# Network analysis
library(igraph)
# load tdm data
# create matrix
Neg.m <- as.matrix(Ntdm_nonsparse)
# to boolean matrix
Neg.m[Neg.m>=1] <- 1
# to term adjacency matrix
# %*% is product of 2 matrices
Neg.m2 <- Neg.m %*% t(Neg.m)
Neg.m2[5:10,5:10]
# build graph with igraph ####
# build adjacency graph
Neg.g <- graph.adjacency(Neg.m2, weighted=TRUE, mode="undirected")
# remove loops
Neg.g <- simplify(Neg.g)
# set labels and degrees of vertices
V(Neg.g)$label <- V(Neg.g)$name
V(Neg.g)$degree <- degree(Neg.g)
# plot layout fruchterman.reingold
layout1 <- layout.fruchterman.reingold(Neg.g)
plot(Neg.g, layout=layout1, vertex.size=20,
vertex.label.color="darkred")
Anyway, is it possible to apply a word-association network analysis plot (and word-association bar charts in general) to the following findAssocs
output? For example:
findAssocs(Ntdm, "verizon", .06)
$verizon
att switched switch wireless basket 09mbps 16mbps
0.16 0.13 0.11 0.11 0.10 0.09 0.09
32mbps 4gbs 5gbs cheaper ima landry nudge
0.09 0.09 0.09 0.09 0.09 0.09 0.09
sears wink collapsed expensive sprint -fine -law
0.09 0.09 0.08 0.08 0.08 0.07 0.07
11yrs 380 980 alltel callled candle cdma
0.07 0.07 0.07 0.07 0.07 0.07 0.07
concert consequence de-evolving dimas doria fluke left
0.07 0.07 0.07 0.07 0.07 0.07 0.07
london lulz lyingly niet outfits pocketbook puny
0.07 0.07 0.07 0.07 0.07 0.07 0.07
recentely redraw reinvesting reservoir satellite's shrimp stratosphere
0.07 0.07 0.07 0.07 0.07 0.07 0.07
strighten switchig switching undergo wheelchair wireless-never worth
0.07 0.07 0.07 0.07 0.07 0.07 0.07
yeap 1994 299 cheapest com' comin crushes
0.07 0.06 0.06 0.06 0.06 0.06 0.06
hhahahahahah mache metro metro-nyc must've rising sabotage
0.06 0.06 0.06 0.06 0.06 0.06 0.06
wholeheartedly
0.06
In other words, I want to visualize how a specific keyword is linked to other keywords in R, but I don't know how.
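Here is a rough, untested sketch of the kind of thing I have in mind, assuming Ntdm is the term-document matrix used above: pull the findAssocs() scores for one keyword, draw a bar chart of the correlations, and build a star-shaped graph linking the keyword to each associated term, with the correlation as the edge weight:

library(tm)
library(igraph)
keyword <- "verizon"
# named vector of correlations for the chosen keyword
assocs <- findAssocs(Ntdm, keyword, 0.06)[[keyword]]
assoc.df <- data.frame(term = names(assocs),
                       corr = as.numeric(assocs),
                       stringsAsFactors = FALSE)
# bar chart of the association scores
barplot(rev(assoc.df$corr), names.arg = rev(assoc.df$term),
        horiz = TRUE, las = 1, cex.names = 0.6,
        main = paste("Terms associated with", keyword))
# star network: one edge from the keyword to each associated term
edges <- cbind(keyword, assoc.df$term)
assoc.g <- graph.edgelist(edges, directed = FALSE)
E(assoc.g)$weight <- assoc.df$corr
V(assoc.g)$label <- V(assoc.g)$name
plot(assoc.g, layout = layout.fruchterman.reingold(assoc.g),
     vertex.size = 5, vertex.label.color = "darkred",
     edge.width = E(assoc.g)$weight * 20)

Would a star graph like this be a reasonable approach, or is there a more standard way to plot findAssocs() results?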