
I have a bunch of large HTML files and I want to run a Hadoop MapReduce job over them to find the most frequently used words. I wrote my mapper and reducer in Python and run them with Hadoop Streaming.
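I launch the job with the streaming jar, roughly like this (the jar location and the HDFS paths below are placeholders and depend on the Hadoop installation):

hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming.jar \
    -input /user/rohanbk/input \
    -output /user/rohanbk/output \
    -mapper mapper.py \
    -reducer reducer.py \
    -file /home/rohanbk/mapper.py \
    -file /home/rohanbk/reducer.py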

Here is my mapper:

#!/usr/bin/env python

import sys
import re
import string

def remove_html_tags(in_text):
    '''
    Remove any HTML tags that are found.
    '''
    global flag
    in_text=in_text.lstrip()
    in_text=in_text.rstrip()
    in_text=in_text+"\n"

    # if the previous line opened a tag that was not closed, prepend "<"
    # so the leftover part of that tag gets stripped as well
    if flag==True:
        in_text="<"+in_text
        flag=False
    # if this line starts a tag that does not close here, append ">" and
    # remember that the tag continues on the next line
    if re.search('^<',in_text)!=None and re.search('(>\n+)$', in_text)==None:
        in_text=in_text+">"
        flag=True
    p = re.compile(r'<[^<]*?>')
    in_text=p.sub('', in_text)
    return in_text

# input comes from STDIN (standard input)
global flag
flag=False
for line in sys.stdin:
    # remove leading and trailing whitespace, convert to lowercase and remove HTML tags
    line = line.strip().lower()
    line = remove_html_tags(line)
    # split the line into words
    words = line.split()
    # increase counters
    for word in words:
       # write the results to STDOUT (standard output);
       # what we output here will be the input for the
       # Reduce step, i.e. the input for reducer.py
       #
       # tab-delimited; the trivial word count is 1
       if word =='': continue
       for c in string.punctuation:
           word= word.replace(c,'')

       print '%s\t%s' % (word, 1)

Here is my reducer:

#!/usr/bin/env python

from operator import itemgetter
import sys

# maps words to their counts
word2count = {}

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()

    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)
    # convert count (currently a string) to int
    try:
        count = int(count)
        word2count[word] = word2count.get(word, 0) + count
    except ValueError:
        pass

sorted_word2count = sorted(word2count.iteritems(),
                           key=lambda (k, v): (v, k), reverse=True)

# write the results to STDOUT (standard output)
for word, count in sorted_word2count:
    print '%s\t%s'% (word, count)

Whenever I just pipe in a small sample string such as "hello world hello hello world ...", I get the correct output: a ranked list of words. However, when I try a small HTML file and use cat to pipe the HTML into my mapper, I get the following error (input2 contains some HTML code):

rohanbk@hadoop:~$ cat input2 | /home/rohanbk/mapper.py | sort | /home/rohanbk/reducer.py
Traceback (most recent call last):
  File "/home/rohanbk/reducer.py", line 15, in <module>
    word, count = line.split('\t', 1)
ValueError: need more than 1 value to unpack

Can anyone explain why I'm getting this? Also, what is a good way to debug MapReduce job programs?


1 Answer


You can reproduce the error with even just the following command:

echo "hello - world" | ./mapper.py  | sort | ./reducer.py

The problem is here:

if word =='': continue
for c in string.punctuation:
    word= word.replace(c,'')

If word is a single punctuation character, as happens with the input above (after it is split), it gets converted into an empty string, so the mapper emits a key-less line. Therefore, simply move the empty-string check to after the replacement.
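A minimal sketch of the reordered inner loop, in the same Python 2 style as the mapper above (only the order of the two steps changes):

for word in words:
    # strip punctuation first ...
    for c in string.punctuation:
        word = word.replace(c, '')
    # ... then skip anything that is now empty, so no key-less
    # "\t1" lines are emitted
    if word == '':
        continue
    print '%s\t%s' % (word, 1)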

Answered 2009-12-03T21:53:22.877