
I want to write my own naive Bayes classifier, and I have a file like this:

(This is a database of spam and ham messages: the first word of each line marks the message as spam or ham, and the rest of the line, up to the end of line, is the message text. The data (about 0.5 MB) comes from http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/ )

ham     Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...
ham     Ok lar... Joking wif u oni...
spam    Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's
ham     U dun say so early hor... U c already then say...
ham     Nah I don't think he goes to usf, he lives around here though
spam    FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for it still? Tb ok! XxX std chgs to send, £1.50 to rcv

I want to build a hash map like this: {"spam" {"go" 1, "until" 100, ...}, "ham" {......}} — a map in which each value is a word-frequency map (one for ham and one for spam).

I know how to do this in Python or C++. I implemented it in Clojure, but my solution fails on large data with a StackOverflowError.

My solution:

(defn read_data_from_file [fname]
    (map #(split % #"\s")(map lower-case (with-open [rdr (reader fname)] 
        (doall (line-seq rdr))))))

(defn do-to-map [amap keyseq f]
    (reduce #(assoc %1 %2 (f (%1 %2))) amap keyseq))

(defn dicts_from_data [raw_data]
    (let [data (group-by #(first %) raw_data)]
        (do-to-map
            data (keys data) 
                (fn [x] (frequencies (reduce concat (map #(rest %) x)))))))

I tried to find out where it goes wrong and wrote this:

(def raw_data (read_data_from_file (first args)))
(def d (group-by #(first %) raw_data))
(def f (map frequencies raw_data))
(def d1 (reduce concat (d "spam")))
(println (reduce concat (d "ham")))

The error:

Exception in thread "main" java.lang.RuntimeException: java.lang.StackOverflowError
    at clojure.lang.Util.runtimeException(Util.java:165)
    at clojure.lang.Compiler.eval(Compiler.java:6476)
    at clojure.lang.Compiler.eval(Compiler.java:6455)
    at clojure.lang.Compiler.eval(Compiler.java:6431)
    at clojure.core$eval.invoke(core.clj:2795)
    at clojure.main$eval_opt.invoke(main.clj:296)
    at clojure.main$initialize.invoke(main.clj:315)
.....

Can anyone help me make it better/more efficient? P.S. Sorry for my writing mistakes; English is not my native language.


2 Answers


Use apply instead of reduce in the anonymous function to avoid the StackOverflow exception. Instead of (fn [x] (frequencies (reduce concat (map #(rest %) x)))) use (fn [x] (frequencies (apply concat (map #(rest %) x)))).
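To see why this matters, here is a minimal REPL sketch (my own illustration, not from the original code): each step of the reduce wraps the previous lazy seq in another concat, so realizing the first element has to unwind thousands of nested frames, whereas apply hands all the collections to a single concat call, which walks them lazily without deep nesting.

```clojure
;; reduce builds ~100000 nested lazy concats; forcing the first
;; element has to unwind them all and typically blows the stack.
(first (reduce concat (repeat 100000 [1])))   ; throws StackOverflowError

;; apply passes the whole seq of collections to one concat call,
;; producing a single flat lazy sequence instead.
(first (apply concat (repeat 100000 [1])))    ; => 1
```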

Below is the same code, slightly refactored but with exactly the same logic. read-data-from-file has been changed to avoid mapping over the line seq twice.

(use 'clojure.string)
(use 'clojure.java.io)

(defn read-data-from-file [fname]
  (let [lines (with-open [rdr (reader fname)] 
                (doall (line-seq rdr)))]
    (map #(-> % lower-case (split #"\s")) lines)))

(defn do-to-map [m keyseq f]
    (reduce #(assoc %1 %2 (f (%1 %2))) m keyseq))

(defn process-words [x]
  (->> x 
    (map #(rest %)) 
    (apply concat) ; This is the only real change from the 
                   ; original code, it used to be (reduce concat).
    frequencies))

(defn dicts-from-data [raw_data]
  (let [data (group-by first raw_data)]
    (do-to-map data
               (keys data) 
               process-words)))

(-> "SMSSpamCollection.txt" read-data-from-file dicts-from-data keys)
Answered 2013-06-26T19:49:37.923

Another thing to consider is your use of (doall (line-seq ...)), which reads the entire list of lines into memory. If the list is very large, this can cause problems. A handy trick for accumulating data like this is to use reduce. In your case we need to reduce twice: once over the lines, and then over the words in each line. Something like this:

(require '[clojure.string :as str]
         '[clojure.java.io :as io])

(defn parse-line
  [line]
  (str/split (str/lower-case line) #"\s+"))

(defn build-word-freq
  [file]
  (with-open [rdr (io/reader file)]
    (reduce (fn [accum line]
              (let [[spam-or-ham & words] (parse-line line)]
                (reduce #(update-in %1 [spam-or-ham %2] (fnil inc 0)) accum words)))
            {}
            (line-seq rdr))))
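A quick sketch of what the inner reduce does (the example data and the name freq are mine): (fnil inc 0) substitutes 0 for the nil that update-in finds at a not-yet-seen path, so every new word starts at 0 and is bumped to 1, and repeated words keep incrementing.

```clojure
;; fnil replaces a nil argument with 0 before inc runs, so unseen
;; words start at 0; update-in creates the nested map as needed.
(def freq (reduce #(update-in %1 ["spam" %2] (fnil inc 0))
                  {}
                  ["free" "win" "free"]))
;; freq => {"spam" {"free" 2, "win" 1}}
```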
Answered 2013-06-29T14:09:06.887