
I am trying to load data from a Mongo collection that has a field whose data type is UUID stored in binary form (e.g. BinData(3, "/qHWF5hGQU+w6unYcTQxWw==")). The job fails with

org.apache.pig.backend.executionengine.ExecException: ERROR 2108: \
  Could not determine data type of field: 1423ed53-5064-0000-784b-7bf2e2dd837b

I built mongo-hadoop version 1.1 (from the master branch at https://github.com/mongodb/mongo-hadoop). It works fine except when a UUID is present. Below are my script and the error. Any ideas?

register '/pig/lib/mongo-java-driver-2.9.3.jar';
register '/pig/lib/mongo-hadoop-core_cdh4.3.0-1.1.0.jar';
register '/pig/lib/mongo-hadoop-pig_cdh4.3.0-1.1.0.jar';
a = LOAD 'mongodb://localhost/TestDb.SocialUser'
      USING com.mongodb.hadoop.pig.MongoLoader();
store a INTO 'a';

2013-07-10 15:03:35,630 [Thread-6] INFO  org.apache.hadoop.mapred.LocalJobRunner - Map task executor complete.
2013-07-10 15:03:35,632 [Thread-6] WARN  org.apache.hadoop.mapred.LocalJobRunner - job_local402930066_0001
java.lang.Exception: org.apache.pig.backend.executionengine.ExecException: ERROR 2108: Could not determine data type of field: 1423ed53-5064-0000-784b-7bf2e2dd837b
  at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:404)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2108: \
    Could not determine data type of field: 1423ed53-5064-0000-784b-7bf2e2dd837b
  at org.apache.pig.impl.util.StorageUtil.putField(StorageUtil.java:208)
  at org.apache.pig.impl.util.StorageUtil.putField(StorageUtil.java:166)
  at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextOutputFormat$PigLineRecordWriter.write(PigTextOutputFormat.java:68)
  at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextOutputFormat$PigLineRecordWriter.write(PigTextOutputFormat.java:44)
  at org.apache.pig.builtin.PigStorage.putNext(PigStorage.java:296)
  at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
  at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
  at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:558)
  at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
  at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:106)
  at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
  at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:264)
  at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
  at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:266)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
  at java.util.concurrent.FutureTask.run(FutureTask.java:166)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:724)
2013-07-10 15:03:39,235 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.

2 Answers


MongoLoader has a method, convertBSONtoPigType, that converts the types returned by the record reader into Pig-compatible types. If a value's type is not one of the recognized ones (Number, String, java.util.Date, ObjectId, and so on), the method falls back to returning the raw object, which breaks Pig.

If you pass MongoLoader a schema that gives the UUID field the Pig data type chararray, e.g.

LOAD 'mongodb://mongoserver/db.collection' USING MongoLoader('myguid:chararray'), then the underlying Java code calls .toString() on the object (in this case a java.util.UUID) and outputs a normal UUID string.
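
A minimal sketch of that workaround, reusing the jar names and connection string from the question. The field name myguid is hypothetical; replace it with the actual name of your UUID field, and note that when given a schema MongoLoader generally returns only the declared fields, so list every field you want to load:

register '/pig/lib/mongo-java-driver-2.9.3.jar';
register '/pig/lib/mongo-hadoop-core_cdh4.3.0-1.1.0.jar';
register '/pig/lib/mongo-hadoop-pig_cdh4.3.0-1.1.0.jar';
-- Declaring the UUID field as chararray makes MongoLoader call toString()
-- on the java.util.UUID value instead of handing Pig the raw object.
a = LOAD 'mongodb://localhost/TestDb.SocialUser'
      USING com.mongodb.hadoop.pig.MongoLoader('myguid:chararray');
store a INTO 'a';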

Alternatively, you can change the convertBSONtoPigType method itself to do the same thing, e.g.

// Patched convertBSONtoPigType in MongoLoader (requires java.util.UUID to be imported).
public static Object convertBSONtoPigType(final Object o) throws ExecException {
    if (o == null) {
        return null;
    } else if (o instanceof Number || o instanceof String) {
        return o;
    } else if (o instanceof Date) {
        return ((Date) o).getTime();
    } else if (o instanceof ObjectId) {
        return o.toString();
    } else if (o instanceof UUID) {
        // Added: emit the UUID as its string form so Pig can treat it as a chararray.
        return o.toString();
    } else if (o instanceof BasicBSONList) {
        BasicBSONList bl = (BasicBSONList) o;
        Tuple t = tupleFactory.newTuple(bl.size());
        for (int i = 0; i < bl.size(); i++) {
            t.set(i, convertBSONtoPigType(bl.get(i)));
        }
        return t;
    } else if (o instanceof Map) {
        //TODO make this more efficient for lazy objects?
        Map<String, Object> fieldsMap = (Map<String, Object>) o;
        HashMap<String, Object> pigMap = new HashMap<String, Object>(fieldsMap.size());
        for (Map.Entry<String, Object> field : fieldsMap.entrySet()) {
            pigMap.put(field.getKey(), convertBSONtoPigType(field.getValue()));
        }
        return pigMap;
    } else {
        return o;
    }
}

What puzzles me is why MongoLoader does not support UUIDs when no schema is given, since UUID / BinData is part of Mongo and is widely used.

Perhaps that is something they could fix.

Anyway, hope this helps.

Regards

Answered 2014-01-27T21:09:32.717

That is because UUID is not a built-in type in Pig. This post gives a solution:

https://groups.google.com/forum/#!topic/mongodb-user/jAijeRtOG0o

Answered 2014-01-27T23:33:59.467