
So I'm trying to break a corpus of 40,000 articles down into tf-idf weights for each word they contain. I have about 300MB of reviews, but when I try to analyze even a small subset of them (~1000 reviews), memory consumption grows absurdly: tf-idf-izing 1000 reviews takes roughly 600MB. That's unacceptable.
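
For reference, the per-word weight I'm after is ordinary tf-idf with a log-scaled idf; a minimal sketch of the arithmetic (the counts themselves would come from the ngram tables below):

-- tf-idf weight for one word in one review.
tfIdf :: Double  -- term frequency of the word within the review
      -> Double  -- number of reviews containing the word
      -> Double  -- total number of reviews
      -> Double
tfIdf tf df nDocs = tf * log (nDocs / df)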

As expected, heap profiling shows that nearly all of that memory (~550MB) is allocated to ByteStrings. That seems high, given that the first 1000 reviews together are only 50MB, and I'm not even retaining the full text of the reviews. I've tried adding strictness (which usually solves this kind of problem), but the code benefits very little from the annotations. I also tried a linear hash table instead of a basic one, and performance was identical.
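
One possibility I can't rule out is substring sharing: as I understand it, the slices that BL.split produces point into the parent chunks, so keeping a word as a hash-table key could pin the whole chunk of the file it was cut from. A sketch of the workaround I have in mind (copyWord is hypothetical, not part of my code):

import qualified Data.ByteString.Lazy as BL

-- Hypothetical helper: BL.copy gives each word its own storage, so a
-- key in the table no longer retains the file chunk it was sliced from.
copyWord :: BL.ByteString -> BL.ByteString
copyWord = BL.copy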

I suspect there is some problem with the foldM reduction. Most of the time/alloc is spent in the extractReview logic, but I can't see any obvious culprit.
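
In case it helps, this is roughly the pure alternative to the foldM reduction I've been considering, using a strict HashMap and foldl' instead of a mutable table (a sketch only, with a stand-in stop-word list, not what I'm currently running):

import qualified Data.ByteString.Lazy as BL
import qualified Data.HashMap.Strict as HM
import Data.List (foldl')

-- Sketch of a pure, strict count_ngrams: insertWith from the Strict
-- module forces the combined count, so no thunks pile up per word.
countNgramsPure :: [BL.ByteString] -> HM.HashMap BL.ByteString Int
countNgramsPure = foldl' step HM.empty
  where
    step h w
      | w `elem` ignoreWords = h
      | otherwise            = HM.insertWith (+) w 1 h
    ignoreWords = []  -- stand-in for my real ignore_words list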

Any help would be greatly appreciated.

The relevant code (some helper functions omitted):

-- Read one review file and cons the extracted Review onto the stack.
processReview :: Int -> [Review] -> String -> IO [Review]
processReview n stack file = do !raw <- B.readFile file
                                !newr <- extractReview n raw
                                return $ newr : stack

-- Parse one review's XML and count the ngrams in its body; only the
-- url, isbns and ngram counts are kept, not the full text.
extractReview :: Int -> B.ByteString -> IO Review
extractReview n  r = do  !new_ngrams <- count_ngrams n body
                         return $ Review {ngrams = new_ngrams, url = safeNode url, isbns = map strContent isbns}
                     where (Just !elem) = parseXMLDoc r
                           !body = cleanUTF8 $ B8.pack $ safeNode $ findElement (QName "body" Nothing Nothing) elem
                           !isbns = findElements (QName "isbn" Nothing Nothing) elem
                           !url = findElement (QName "url" Nothing Nothing) elem
                           safeNode = maybe "" (\m -> strContent m)

-- Keep only spaces and ASCII letters, split on spaces, and fold the
-- resulting words into a fresh mutable hash table of counts.
count_ngrams :: Int -> BL.ByteString -> IO Ngrams
count_ngrams n rbody = do !new_list <- H.new
                          !ngrams <- foldM (\h w -> let !w' = lowercase w
                                                    in if elem w' ignore_words
                                                          then return h
                                                          else increment_ngram 1 h w')
                                           new_list word_list
                          return ngrams
                        where !just_words = BL.filter (\c -> c == 32 || (c >= 65 && c <= 90) || (c >= 97 && c <= 122)) rbody
                              !word_list = BL.split 32 just_words

-- Bump the count for one word, inserting it if absent.
increment_ngram :: Int -> Ngrams -> BL.ByteString -> IO Ngrams
increment_ngram amount ns word = do count <- H.lookup ns word
                                    case count of
                                         (Just i) -> H.insert ns word (i + amount)
                                         Nothing -> H.insert ns word amount
                                    return ns

-- Combine the per-review tables: each review contributes 1 per distinct
-- word, i.e. this yields document frequencies.
sumNgrams :: [Review] -> IO Ngrams
sumNgrams reviews = do dict <- H.new
                       mapM_ (\r -> H.mapM_ (\(k,v) -> increment_ngram 1 dict k) (ngrams r)) reviews
                       return dict


main = do
       [n] <- getArgs
       ngrams <- H.new :: IO (H.BasicHashTable Review Ngrams)
       reviews <- fmap (map (\c -> "./reviews/" ++ c) . filter (isInfixOf "xml") . take 500) $ getDirectoryContents "./reviews"
       analyzed_reviews <- foldM (\stack r -> processReview (read n) stack r) [] reviews