With hashing, you set the size of the output matrix in advance, e.g., via hash_size = 2 ^ 14. The matrix dimensions then stay the same regardless of the ngram window specified for the model; only the counts in the output matrix change.
(In response to the comment below:) Below you find a minimal example with two very simple strings to demonstrate the different output for two different ngram windows used in a hash_vectorizer. For the bigram case I have added the output matrix of a vocab_vectorizer for comparison. You will see that you have to set a hash size large enough to account for all terms. If it is too small, the hash values of individual terms may collide.
Your comment that you would always have to compare the output of a vocab_vectorizer approach and a hash_vectorizer approach points in the wrong direction, because you would then lose the efficiency/memory advantage that the hashing approach can offer by avoiding the generation of a vocabulary. Depending on your data and the desired output, hashing trades accuracy (and interpretability of the terms in the dtm) against efficiency. Hence, whether hashing is reasonable depends on your use case (it is especially useful for document-level classification tasks on large collections).
I hope this gives you a rough idea of hashing and what you can or cannot expect from it. You might also check out some posts on hashing on Quora, Wikipedia (or also here). Or also see the detailed original sources listed on text2vec.org.
library(text2vec)
txt <- c("a string string", "and another string")
it = itoken(txt, progressbar = F)
#the following four examples demonstrate the effect of the size of the hash
#and the use of signed hashes (i.e. the use of a secondary hash function to reduce risk of collisions)
vectorizer_small = hash_vectorizer(2 ^ 2, c(1L, 1L)) #unigrams only
hash_dtm_small = create_dtm(it, vectorizer_small)
as.matrix(hash_dtm_small)
# [,1] [,2] [,3] [,4]
# 1 2 0 0 1
# 2 1 2 0 0 #collision of the hash values of and / another
vectorizer_small_signed = hash_vectorizer(2 ^ 2, c(1L, 1L), signed_hash = TRUE) #unigrams only
hash_dtm_small = create_dtm(it, vectorizer_small_signed)
as.matrix(hash_dtm_small)
# [,1] [,2] [,3] [,4]
# 1 2 0 0 1
# 2    1    0    0    0 #the hash values of and / another still collide, but their signs differ, so the counts cancel to 0 and these terms are not represented
vectorizer_medium = hash_vectorizer(2 ^ 3, c(1L, 1L)) #unigrams only
hash_dtm_medium = create_dtm(it, vectorizer_medium)
as.matrix(hash_dtm_medium)
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
# 1 0 0 0 1 2 0 0 0
# 2 0 1 0 0 1 1 0 0 #no collision, all terms represented by hash values
vectorizer_medium_signed = hash_vectorizer(2 ^ 3, c(1L, 1L), signed_hash = TRUE) #unigrams only
hash_dtm_medium = create_dtm(it, vectorizer_medium_signed)
as.matrix(hash_dtm_medium)
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
# 1 0 0 0 1 2 0 0 0
# 2 0 -1 0 0 1 1 0 0 #no collision, all terms represented as hash values
#in addition second hash function generated a negative hash value
#the following three examples demonstrate the difference between
#two hash vectorizers - one with unigrams only, one allowing for bigrams -
#and one vocab vectorizer with bigrams
vectorizer = hash_vectorizer(2 ^ 4, c(1L, 1L)) #unigrams only
hash_dtm = create_dtm(it, vectorizer)
as.matrix(hash_dtm)
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16]
# 1 0 0 0 0 0 0 0 0 0 0 0 1 2 0 0 0
# 2 0 0 0 0 0 0 0 0 0 1 0 0 1 1 0 0
vectorizer2 = hash_vectorizer(2 ^ 4, c(1L, 2L)) #unigrams + bigrams
hash_dtm2 = create_dtm(it, vectorizer2)
as.matrix(hash_dtm2)
# [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14] [,15] [,16]
# 1 1 0 0 1 0 0 0 0 0 0 0 1 2 0 0 0
# 2 0 0 0 0 0 1 1 0 0 1 0 0 1 1 0 0
v <- create_vocabulary(it, c(1L, 2L))
vectorizer_v = vocab_vectorizer(v) #unigrams + bigrams
v_dtm = create_dtm(it, vectorizer_v)
as.matrix(v_dtm)
# a_string and_another a another and string_string another_string string
# 1 1 0 1 0 0 1 0 2
# 2 0 1 0 1 1 0 1 1
sum(Matrix::colSums(hash_dtm) > 0)
#[1] 4 - these are the four unigrams a, string, and, another
sum(Matrix::colSums(hash_dtm2) > 0)
#[1] 8 - these are the four unigrams as above plus the 4 bigrams string_string, a_string, and_another, another_string
sum(Matrix::colSums(v_dtm) > 0)
#[1] 8 - same as hash_dtm2
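To make the mechanism behind the R output above more concrete, here is a minimal, language-agnostic sketch of the (signed) hashing trick. The hash functions and tokenization below are my own assumptions for illustration (text2vec itself uses MurmurHash3 internally); only the idea matters: each term is mapped to a column via hash(term) mod hash_size, and with a signed hash a second hash function picks +1/-1 so that colliding terms tend to cancel instead of inflating a shared count.

```python
# Illustrative sketch of the (signed) hashing trick, NOT text2vec's internals.
# crc32/adler32 stand in for the real hash functions purely for determinism.
import zlib

def hashed_counts(tokens, hash_size, signed=False):
    """Return one document row of a hashed document-term matrix."""
    row = [0] * hash_size
    for tok in tokens:
        # primary hash decides the column index
        idx = zlib.crc32(tok.encode()) % hash_size
        # secondary hash decides the sign, so two colliding terms
        # may get opposite signs and cancel to 0 instead of adding up
        sign = (1 if zlib.adler32(tok.encode()) % 2 == 0 else -1) if signed else 1
        row[idx] += sign
    return row

docs = ["a string string", "and another string"]
for d in docs:
    print(hashed_counts(d.split(), hash_size=4))
    print(hashed_counts(d.split(), hash_size=4, signed=True))
```

With a tiny hash_size like 4, collisions are likely, mirroring the 2 ^ 2 case above; increasing hash_size spreads the terms over more columns until every term gets its own one.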