
I have a log file like the following:

Begin ... 12-07-2008 02:00:05         ----> record1
incidentID: inc001
description: blah blah blah 
owner: abc 
status: resolved 
end .... 13-07-2008 02:00:05 
Begin ... 12-07-2008 03:00:05         ----> record2 
incidentID: inc002 
description: blah blah blahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblahblah
owner: abc 
status: resolved 
end .... 13-07-2008 03:00:05

I want to process this with MapReduce, extracting the incident ID, the status, and the time each incident took.

How do I handle these two records, given that they have variable record lengths, and what happens if an input split boundary falls before the end of a record?
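Once a whole record (everything from a `Begin ...` line through its matching `end ...` line) reaches a mapper as one value, extracting the three fields is plain string work. A minimal sketch, independent of the Hadoop APIs (the class and method names here are illustrative, not from any library):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: given one record's text, pull out the incident ID,
// the status, and the elapsed time between the Begin and end timestamps.
public class IncidentParser {
    // Timestamps in the log look like "12-07-2008 02:00:05".
    private static final SimpleDateFormat FMT =
            new SimpleDateFormat("dd-MM-yyyy HH:mm:ss");
    private static final Pattern TS =
            Pattern.compile("\\d{2}-\\d{2}-\\d{4} \\d{2}:\\d{2}:\\d{2}");

    public static String parse(String record) throws Exception {
        String id = null, status = null;
        Date begin = null, end = null;
        for (String line : record.split("\n")) {
            line = line.trim();
            if (line.startsWith("Begin")) {
                begin = FMT.parse(extractTimestamp(line));
            } else if (line.startsWith("end")) {
                end = FMT.parse(extractTimestamp(line));
            } else if (line.startsWith("incidentID:")) {
                id = line.substring("incidentID:".length()).trim();
            } else if (line.startsWith("status:")) {
                status = line.substring("status:".length()).trim();
            }
        }
        long elapsedSeconds = (end.getTime() - begin.getTime()) / 1000;
        return id + "\t" + status + "\t" + elapsedSeconds;
    }

    private static String extractTimestamp(String line) {
        Matcher m = TS.matcher(line);
        m.find();
        return m.group();
    }

    public static void main(String[] args) throws Exception {
        String record = "Begin ... 12-07-2008 02:00:05         ----> record1\n"
                + "incidentID: inc001\n"
                + "description: blah blah blah\n"
                + "owner: abc\n"
                + "status: resolved\n"
                + "end .... 13-07-2008 02:00:05";
        System.out.println(parse(record));
    }
}
```

For record1 this yields `inc001`, `resolved`, and 86400 seconds (exactly one day between the two timestamps). In a real job this logic would live in the mapper's `map()` method.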


2 Answers


You'll need to write your own input format and record reader to make sure the file is split correctly around the record delimiters.

Basically, your record reader will need to seek to its split's byte offset, then scan forward (reading lines) until either:

  • it finds a Begin ... line:
    • read lines up to the next end ... line, and provide the lines between Begin and end as the input for the next record
  • it scans past the end of its split, or hits EOF
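Stripped of the Hadoop APIs, the scan above can be sketched in plain Java (split boundaries are expressed as line indexes here for brevity; a real RecordReader works with byte offsets). The key property it demonstrates: a record whose Begin line falls inside this split is read to completion even when its end line lies in the next split, while a record begun in a previous split is skipped, so no record is duplicated or dropped:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: group lines into records, honoring a split
// boundary the way a custom RecordReader would.
public class SplitScanner {
    public static List<List<String>> records(String[] lines,
                                             int splitStart, int splitEnd) {
        List<List<String>> out = new ArrayList<>();
        int i = splitStart;
        while (i < lines.length && i < splitEnd) {
            // Seek forward to the next record start inside this split;
            // a record that began before splitStart belongs to the
            // previous split and is skipped here.
            while (i < splitEnd && !lines[i].startsWith("Begin")) i++;
            if (i >= splitEnd) break;
            // Collect the whole record, reading past the split boundary
            // if necessary, until the matching "end" line.
            List<String> rec = new ArrayList<>();
            while (i < lines.length) {
                rec.add(lines[i]);
                if (lines[i].startsWith("end")) { i++; break; }
                i++;
            }
            out.add(rec);
        }
        return out;
    }

    public static void main(String[] args) {
        String[] lines = {
            "Begin ... 12-07-2008 02:00:05", "incidentID: inc001",
            "end .... 13-07-2008 02:00:05",
            "Begin ... 12-07-2008 03:00:05", "incidentID: inc002",
            "end .... 13-07-2008 03:00:05"
        };
        // Split the six lines at index 4, i.e. mid-way through record 2.
        System.out.println(records(lines, 0, 4).size()
                + " " + records(lines, 4, 6).size());
    }
}
```

The first split emits both records (record 2's Begin lies inside it), and the second split emits none, which is exactly the behavior you want at a split boundary.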

This is algorithmically similar to how Mahout's XmlInputFormat handles multi-line XML as input — in fact, you may be able to modify that source directly to handle your situation.

As noted in @irW's answer, NLineInputFormat is another option if your records have a fixed number of lines each, but it is really inefficient for larger files, since it has to open and read the entire file to discover the line offsets in the input format's getSplits() method.

answered 2013-07-18T10:36:22.953

In your example, each record has the same number of lines. If that is the case, you could use NLineInputFormat; if the number of lines cannot be known in advance, it may be more difficult. (More on NLineInputFormat: http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/lib/NLineInputFormat.html)
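If you do go this route, the number of lines handed to each mapper is configured on the job. A sketch using the newer `mapreduce` API (the sample records are 6 lines each; check the class against your Hadoop version, as this is an assumption on my part):

```java
// Fragment, not a complete driver: conf is an existing Configuration.
Job job = Job.getInstance(conf);
job.setInputFormatClass(
        org.apache.hadoop.mapreduce.lib.input.NLineInputFormat.class);
// Each record in the sample log spans exactly 6 lines.
org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
        .setNumLinesPerSplit(job, 6);
```

The mapper then still has to parse the 6 lines it receives into the individual fields.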

answered 2013-07-18T10:25:50.933