
The following code hits a stack overflow on large inputs:

{-# LANGUAGE DeriveDataTypeable, OverloadedStrings #-}
import qualified Data.ByteString.Lazy.Char8 as L


genTweets :: L.ByteString -> L.ByteString
genTweets text | L.null text = ""
               | otherwise = L.intercalate "\n\n" $ genTweets' $ L.words text
  where genTweets' txt = foldr p [] txt
          where p word [] = [word]
                p word words@(w:ws) | L.length word + L.length w <= 139 =
                                        (word `L.append` " " `L.append` w):ws
                                    | otherwise = word:words

I assume my predicate is building up a list of thunks, but I'm not sure why, or how to fix it.

The equivalent version using foldl' runs fine, but it takes a long time, since it keeps appending, and it uses a lot of memory.

import Data.List (foldl')

genTweetsStrict :: L.ByteString -> L.ByteString
genTweetsStrict text | L.null text = "" 
                     | otherwise = L.intercalate "\n\n" $ genTweetsStrict' $ L.words text
  where genTweetsStrict' txt = foldl' p [] txt
          where p [] word = [word]
                p words word | L.length word + L.length (last words) <= 139 =
                                init words ++ [last words `L.append` " " `L.append` word]
                             | otherwise = words ++ [word]

What is causing the first snippet to build up thunks, and can it be avoided? And is it possible to write the second snippet so that it doesn't rely on (++)?


2 Answers

L.length word + L.length (last words) <= 139

This is the problem. On each iteration you are traversing the whole accumulator list, and then

init words ++ [last words `L.append` " " `L.append` word]

appending at the end. Obviously this takes a long time (proportional to the length of the accumulator). A better solution is to produce the output list lazily, interleaving the processing with reading the input stream (you don't need to read the whole input in order to output the first 140-character tweet).

The following version of the program processes a relatively large file (/usr/share/dict/words) in under 1 second, while using O(1) space:

{-# LANGUAGE OverloadedStrings, BangPatterns #-}

module Main where

import qualified Data.ByteString.Lazy.Char8 as L
import Data.Int (Int64)

genTweets :: L.ByteString -> L.ByteString
genTweets text | L.null text = ""
               | otherwise   = L.intercalate "\n\n" $ toTweets $ L.words text
  where

    -- Concatenate words into 139-character tweets.
    toTweets :: [L.ByteString] -> [L.ByteString]
    toTweets []     = []
    toTweets [w]    = [w]
    toTweets (w:ws) = go (L.length w, w) ws

    -- Main loop. Notice how the output tweet (cur_str) is generated as soon as
    -- possible, thus enabling L.writeFile to consume it before the whole
    -- input is processed.
    go :: (Int64, L.ByteString) -> [L.ByteString] -> [L.ByteString]
    go (_cur_len, !cur_str) []     = [cur_str]
    go (!cur_len, !cur_str) (w:ws)
      | lw + cur_len <= 139        = go (cur_len + lw + 1,
                                         cur_str `L.append` " " `L.append` w) ws
      | otherwise                  = cur_str : go (lw, w) ws
      where
        lw = L.length w

-- Notice the use of lazy I/O.
main :: IO ()
main = do dict <- L.readFile "/usr/share/dict/words"
          L.writeFile "tweets" (genTweets dict)
Answered 2013-09-03T22:05:37.227

p word words@(w:ws)

This pattern match forces the evaluation of the accumulator, which of course is the result of foldr p [] ws, i.e. the result of p w' (foldr p [] ws'), which in turn pattern matches on its own accumulator, and so on all the way down the input.
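
A minimal illustration (not from the original post) of the difference between a combining function that is lazy in its accumulator and one that pattern matches on it:

```haskell
-- A combining function that produces a constructor before inspecting
-- its second argument lets foldr stream:
streams :: Int
streams = head (foldr (:) [] [1 ..])   -- 1, even on an infinite list

-- One that pattern matches on the accumulator first, like p above,
-- must force the entire chain of recursive foldr calls before it can
-- return anything:
doesNotStream :: Int
doesNotStream = head (foldr p [] [1 ..])
  where p x (y:ys) = x : y : ys
        p x []     = [x]
-- doesNotStream never finishes here, and on a large finite list the
-- chain of pending pattern matches is what overflows the stack.
```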

Note also that foldr and foldl' will split the text differently: foldr will put the shortest tweet first, foldl' will put it last.
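
For example (a hypothetical miniature of the two snippets, using plain Strings and a 5-character limit to keep it readable):

```haskell
import Data.List (foldl')

splitR, splitL :: Int -> [String] -> [String]

-- the foldr grouping from the first snippet
splitR limit = foldr p []
  where p word []          = [word]
        p word acc@(w:ws)
          | length word + length w <= limit = (word ++ " " ++ w) : ws
          | otherwise                       = word : acc

-- the foldl' grouping from the second snippet
splitL limit = foldl' p []
  where p [] word = [word]
        p acc word
          | length word + length (last acc) <= limit
              = init acc ++ [last acc ++ " " ++ word]
          | otherwise = acc ++ [word]

-- splitR 5 ["aa","bb","cc"]  ==  ["aa","bb cc"]   (shortest tweet first)
-- splitL 5 ["aa","bb","cc"]  ==  ["aa bb","cc"]   (shortest tweet last)
```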


I would do it like this:

import Data.List (unfoldr)

genTweets' = unfoldr f where
  f []     = Nothing
  f (w:ws) = Just $ g w ws (L.length w)
  g w [] _ = (w, [])
  g w ws@(w':_) len | len + 1 + L.length w' > 139 = (w, ws)
  g w (w':ws') len = g (w `L.append` " " `L.append` w') ws' (len + 1 + L.length w')
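
As for the question's second part: the foldl' version can drop (++), init and last entirely by consing new tweets onto the front of the accumulator and reversing once at the end. A hedged sketch (plain Strings, illustrative names, configurable limit), not from the original post:

```haskell
import Data.List (foldl')

splitNoAppend :: Int -> [String] -> [String]
splitNoAppend limit = reverse . foldl' p []
  where
    -- the most recent tweet lives at the head, so extending it is cheap
    p [] word = [word]
    p acc@(w:ws) word
      | length w + length word <= limit = (w ++ " " ++ word) : ws
      | otherwise                       = word : acc
```

This does a single O(n) reverse at the end instead of an O(n) (++) per word.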
Answered 2013-09-03T21:20:35.237