The accepted answer to this question is good as far as it goes, but it doesn't actually address how to estimate perplexity on a validation dataset or how to use cross-validation.
Using perplexity for simple validation
Perplexity is a measure of how well a probability model fits a new set of data. In the topicmodels R package it is simple to compute with the perplexity function, which takes as arguments a previously fitted topic model and a new set of data, and returns a single number. The lower the better.
For example, splitting the AssociatedPress data into a training set (75% of rows) and a validation set (25% of rows):
# load up some R packages including a few we'll need later
library(topicmodels)
library(doParallel)
library(ggplot2)
library(scales)
data("AssociatedPress", package = "topicmodels")
burnin = 1000
iter = 1000
keep = 50
full_data <- AssociatedPress
n <- nrow(full_data)
#-----------validation--------
k <- 5
splitter <- sample(1:n, round(n * 0.75))
train_set <- full_data[splitter, ]
valid_set <- full_data[-splitter, ]
fitted <- LDA(train_set, k = k, method = "Gibbs",
              control = list(burnin = burnin, iter = iter, keep = keep) )
perplexity(fitted, newdata = train_set) # about 2700
perplexity(fitted, newdata = valid_set) # about 4300
The perplexity is higher for the validation set than for the training set, because the topics have been optimised based on the training set.
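As an aside, because the model was fitted with keep = 50, the Gibbs sampler saved the log-likelihood every 50 iterations. If I recall the topicmodels object structure correctly these values sit in the logLiks slot of the fitted model, so a rough convergence check for the fit above is something like:

# rough convergence check -- log-likelihood saved every `keep` iterations of the Gibbs sampler
# (relies on the logLiks slot of the fitted LDA object; check your topicmodels version if this errors)
plot(fitted@logLiks, type = "l", xlab = "Stored iteration", ylab = "Log-likelihood")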
Using perplexity and cross-validation to determine a good number of topics
The extension of this idea to cross-validation is straightforward. Divide the data into different subsets (say 5), and each subset gets one turn as the validation set and four turns as part of the training set. However, it is computationally intensive, particularly when trying out larger numbers of topics.
You might be able to use caret to do this, but I suspect it doesn't handle topic modelling yet. In any case, it's the sort of thing I prefer to do myself, to be sure I understand what's going on.
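Stripped of the parallel machinery, the basic shape of the cross-validation loop is something like the sketch below (illustration only, with the number of topics fixed at 5; the parallelised version that I actually ran follows):

# serial sketch of 5-fold cross-validation for a single value of k (illustration only)
# assumes full_data, n, burnin, iter and keep as defined earlier
folds <- 5
splitfolds <- sample(1:folds, n, replace = TRUE)   # assign each document to a fold
cv_perplexity <- numeric(folds)
for (i in 1:folds){
   train_set <- full_data[splitfolds != i, ]
   valid_set <- full_data[splitfolds == i, ]
   fitted <- LDA(train_set, k = 5, method = "Gibbs",
                 control = list(burnin = burnin, iter = iter, keep = keep))
   cv_perplexity[i] <- perplexity(fitted, newdata = valid_set)
}
mean(cv_perplexity)   # average hold-out perplexity for k = 5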
The code below, even with parallel processing on 7 logical CPUs, took 3.5 hours to run on my laptop:
#----------------5-fold cross-validation, different numbers of topics----------------
# set up a cluster for parallel processing
cluster <- makeCluster(detectCores(logical = TRUE) - 1) # leave one CPU spare...
registerDoParallel(cluster)
# load up the needed R package on all the parallel sessions
clusterEvalQ(cluster, {
   library(topicmodels)
})
folds <- 5
splitfolds <- sample(1:folds, n, replace = TRUE)
candidate_k <- c(2, 3, 4, 5, 10, 20, 30, 40, 50, 75, 100, 200, 300) # candidates for how many topics
# export all the needed R objects to the parallel sessions
clusterExport(cluster, c("full_data", "burnin", "iter", "keep", "splitfolds", "folds", "candidate_k"))
# we parallelize by the different number of topics. A processor is allocated a value
# of k, and does the cross-validation serially. This is because it is assumed there
# are more candidate values of k than there are cross-validation folds, hence it
# will be more efficient to parallelise
system.time({
results <- foreach(j = 1:length(candidate_k), .combine = rbind) %dopar%{
   k <- candidate_k[j]
   results_1k <- matrix(0, nrow = folds, ncol = 2)
   colnames(results_1k) <- c("k", "perplexity")
   for(i in 1:folds){
      train_set <- full_data[splitfolds != i , ]
      valid_set <- full_data[splitfolds == i, ]

      fitted <- LDA(train_set, k = k, method = "Gibbs",
                    control = list(burnin = burnin, iter = iter, keep = keep) )
      results_1k[i,] <- c(k, perplexity(fitted, newdata = valid_set))
   }
   return(results_1k)
}
})
stopCluster(cluster)
results_df <- as.data.frame(results)
ggplot(results_df, aes(x = k, y = perplexity)) +
   geom_point() +
   geom_smooth(se = FALSE) +
   ggtitle("5-fold cross-validation of topic modelling with the 'Associated Press' dataset",
           "(ie five different models fit for each candidate number of topics)") +
   labs(x = "Candidate number of topics", y = "Perplexity when fitting the trained model to the hold-out set")
We see in the results that 200 topics is too many and shows some over-fitting, while 50 is too few. Of the numbers of topics tried, 100 is the best, with the lowest average perplexity on the five different hold-out sets.
![Cross-validated perplexity against candidate number of topics](https://i.stack.imgur.com/44O07.png)
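To read the same conclusion off the numbers rather than the smoothed plot, a short follow-up sketch (using the results_df data frame created above) averages the hold-out perplexity for each candidate number of topics:

# average hold-out perplexity across the five folds, lowest (best) first
mean_perplexity <- aggregate(perplexity ~ k, data = results_df, FUN = mean)
mean_perplexity[order(mean_perplexity$perplexity), ]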