
I need to load data from a file in HDFS into an HBase table using an HBase MapReduce job. I have a CSV file that contains only the values for the column qualifiers, as shown below:

How do I load these values into my HBase table from the MapReduce program, and how do I auto-generate the row ids?

    Class:


    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;

    public class SampleExample {

        private static final String NAME = "SampleExample"; // class name

        static class Uploader extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

            private long statuspoint = 100;
            private long count = 0;

            @Override
            public void map(LongWritable key, Text line, Context context)
                    throws IOException {
                String[] values = line.toString().split(",");
                /* How to read values into columnQualifier and how to generate row id */
                // put function -------------------
                try {
                    context.write(new ImmutableBytesWritable(row), put);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                if (++count % statuspoint == 0) {
                    context.setStatus("Emitting Put " + count);
                }
            }
        }

        public static Job configureJob(Configuration conf, String[] args)
                throws IOException {

        }
    }

Error:

12/09/17 05:23:30 INFO mapred.JobClient: Task Id : attempt_201209041554_0071_m_000000_0, Status : FAILED
java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.io.Writable, recieved org.apache.hadoop.hbase.client.Put
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1019)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)
        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
        at com.hbase.Administration$Uploader.map(HealthAdministration.java:51)
        at com.hbase.Administration$Uploader.map(HealthAdministration.java:1)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)

Can anyone help me? I can't figure out how we read the values into the qualifiers.


3 Answers

String stringLine = line.toString();
StringTokenizer stringTokenizer = new StringTokenizer(stringLine, "\t");

// Auto-generated row id: the byte offset of the line (the map key)
byte[] row = Bytes.toBytes(key.get());

Put put = new Put(row);   // Put wants a byte[] row key
put.add(family, column1, stringTokenizer.nextToken().getBytes());
put.add(family, column2, stringTokenizer.nextToken().getBytes());
put.add(family, column3, stringTokenizer.nextToken().getBytes());
put.add(family, column4, stringTokenizer.nextToken().getBytes());

try {
    context.write(new ImmutableBytesWritable(row), put);
} catch (InterruptedException e) {
    e.printStackTrace();
}
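
For this map-side write to reach the table, the job also has to be configured as map-only with TableOutputFormat. A minimal driver sketch, assuming 2012-era Hadoop/HBase APIs; the table name "sampletable" and the class name UploaderDriver are placeholders, not names from the question:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class UploaderDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "csv-to-hbase");
        job.setJarByClass(SampleExample.class);
        job.setMapperClass(SampleExample.Uploader.class);
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        // Sets TableOutputFormat and the output key/value classes on the job;
        // a null reducer class leaves the write on the map side
        TableMapReduceUtil.initTableReducerJob("sampletable", null, job);
        job.setNumReduceTasks(0);   // map-only: the mapper's Puts go straight to the table
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

With zero reduce tasks, the type mismatch in the stack trace above also disappears, because the mapper's Put output goes straight to the output format instead of through the shuffle.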
answered 2012-09-14T06:22:22.717

Please change your map and reduce as below. In the map, handle only the row id, and pass the row id and the line (as-is) on to the reducer, which does the rest of the work.

@Override
public void map(LongWritable key, Text line, Context context)
        throws IOException {
    // Emit only the auto-generated row id (the line's byte offset) and the raw line
    byte[] row = Bytes.toBytes(key.get());
    try {
        context.write(new ImmutableBytesWritable(row), line);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}

Reducer changes:

@Override
public void reduce(ImmutableBytesWritable row, Iterable<Text> lines, Context context)
        throws IOException {
    for (Text line : lines) {
        String stringLine = line.toString();
        StringTokenizer stringTokenizer = new StringTokenizer(stringLine, "\t");

        Put put = new Put(row.get());   // reuse the row id generated in the map
        put.add(family, column1, stringTokenizer.nextToken().getBytes());
        put.add(family, column2, stringTokenizer.nextToken().getBytes());
        put.add(family, column3, stringTokenizer.nextToken().getBytes());
        put.add(family, column4, stringTokenizer.nextToken().getBytes());

        try {
            context.write(row, put);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Make the corresponding changes to your own code based on the code above. The exception occurs because, when the job has a positive number of reducers, the map function cannot write to the table (or use the Put object); the context.write(writable, put) therefore has to move into the reducer, which is bound to the table name where the final output must be written. Hopefully this solves it; otherwise I will write working code for the same input file and paste it here.
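
A sketch of the job wiring this describes, assuming the map above emits (ImmutableBytesWritable, Text); the class names ReduceSideWiring and PutReducer and the table name passed in are hypothetical:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class ReduceSideWiring {

    // Holds the reduce() shown above; extending TableReducer lets
    // initTableReducerJob bind it to the output table
    static class PutReducer
            extends TableReducer<ImmutableBytesWritable, Text, ImmutableBytesWritable> {
        // reduce() as in this answer
    }

    public static Job configureJob(Configuration conf, String table, Job job)
            throws IOException {
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);  // map emits (row id, line)
        job.setMapOutputValueClass(Text.class);
        // Sets TableOutputFormat on the job and points it at the table,
        // so the reducer's context.write(row, put) lands in HBase
        TableMapReduceUtil.initTableReducerJob(table, PutReducer.class, job);
        return job;
    }
}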

answered 2012-09-18T05:13:31.603

Hi, just remove the +1 in the Put command, like this: Put put = new Put(key.get()); and uncomment job.setNumReduceTasks(0); then it definitely works.
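
A minimal sketch of those two changes; since Put takes a byte[] row key, the long offset has to go through Bytes.toBytes (variable names follow the question's code):

// Inside Uploader.map(): row key taken straight from the line offset, no +1;
// Put needs a byte[], hence Bytes.toBytes on the long offset
Put put = new Put(Bytes.toBytes(key.get()));

// In the driver: uncommented, so the job is map-only and the mapper's
// Puts are written directly to the table
job.setNumReduceTasks(0);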

answered 2012-09-18T05:21:15.030