
I want to schedule an HBase map-reduce job through Oozie, and I am facing the following problem.

How/where do I specify these properties in the Oozie workflow?
(    i> the table name for the mapper/reducer
     ii> the Scan object for the mapper         )


    Scan scan = new Scan();  // scan the whole table; new Scan(new Get()) would restrict it to a single row
    scan.setMaxVersions();
    scan.addColumn(Bytes.toBytes(FAMILY), Bytes.toBytes(VALUE));
    scan.addColumn(Bytes.toBytes(FAMILY), Bytes.toBytes(DATE));

    Job job = new Job(conf, JOB_NAME + "_" + TABLE_USER);
    // These two calls set the properties in question:
    TableMapReduceUtil.initTableMapperJob(TABLE_USER, scan,
            Mapper.class, Text.class, Text.class, job);   // Mapper must extend TableMapper
    TableMapReduceUtil.initTableReducerJob(DETAILS_TABLE,
            Reducer.class, job);                          // Reducer must extend TableReducer

Or

please let me know the best way to schedule an HBase map-reduce job with Oozie.

Thanks :) :)


2 Answers


The best way (in my opinion) to schedule an HBase map-reduce job is to schedule it as a plain Java main class. It works well, and there is no need to write extra code to convert the Scan into a string, etc. So I am scheduling my jobs as Java actions until I find a better option.

<workflow-app xmlns="uri:oozie:workflow:0.1" name="java-main-wf">
<start to="java-node"/>
<action name="java-node">
    <java>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>${queueName}</value>
            </property>
        </configuration>
        <main-class>org.apache.oozie.example.DemoJavaMain</main-class>
        <arg>Hello</arg>
        <arg>Oozie!</arg>
        <arg>This</arg>
        <arg>is</arg>
        <arg>Demo</arg>
        <arg>Oozie!</arg>

    </java>
    <ok to="end"/>
    <error to="fail"/>
</action>
<kill name="fail">
    <message>Java failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name="end"/>
</workflow-app>
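For the HBase case, the <main-class> of the java action would point at a driver that builds the Scan and submits the job directly. A minimal sketch, assuming placeholder class, family, and table names, and that the HBase/Hadoop jars are available on the action's classpath (so no <test> block can exercise it without a cluster):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class HBaseJobDriver {

    // Placeholder stubs: real map()/reduce() logic goes here.
    static class MyTableMapper extends TableMapper<Text, Text> { }
    static class MyTableReducer extends TableReducer<Text, Text, ImmutableBytesWritable> { }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // The Scan stays plain Java -- no string conversion needed.
        Scan scan = new Scan();
        scan.setMaxVersions();
        scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"));

        Job job = new Job(conf, "hbase-mr-job");
        job.setJarByClass(HBaseJobDriver.class);

        // Table name and Scan are specified here, not in workflow.xml.
        TableMapReduceUtil.initTableMapperJob("TABLE_USER", scan,
                MyTableMapper.class, Text.class, Text.class, job);
        TableMapReduceUtil.initTableReducerJob("DETAILS_TABLE",
                MyTableReducer.class, job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```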

answered 2012-09-25T09:47:56.773

You can also schedule the job with the <map-reduce> action, but it is not as easy as scheduling it as a Java action. It takes considerably more effort, but can be considered as an alternative.

     <action name='jobSample'>
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <!-- This is required for new api usage -->
                <property>
                    <name>mapred.mapper.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.reducer.new-api</name>
                    <value>true</value>
                </property>
                <!-- HBASE CONFIGURATIONS -->
                <property>
                    <name>hbase.mapreduce.inputtable</name>
                    <value>TABLE_USER</value>
                </property>
                <property>
                    <name>hbase.mapreduce.scan</name>
                    <value>${wf:actionData('get-scanner')['scan']}</value>
                </property>
                <property>
                    <name>hbase.zookeeper.property.clientPort</name>
                    <value>${hbaseZookeeperClientPort}</value>
                </property>
                <property>
                    <name>hbase.zookeeper.quorum</name>
                    <value>${hbaseZookeeperQuorum}</value>
                </property>
                <!-- MAPPER CONFIGURATIONS -->
                <property>
                    <name>mapreduce.inputformat.class</name>
                    <value>org.apache.hadoop.hbase.mapreduce.TableInputFormat</value>
                </property>
                <property>
                    <name>mapred.mapoutput.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapred.mapoutput.value.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapreduce.map.class</name>
                    <value>com.hbase.mapper.MyTableMapper</value>
                </property>
                <!-- REDUCER CONFIGURATIONS -->
                <property>
                    <name>mapreduce.reduce.class</name>
                    <value>com.hbase.reducer.MyTableReducer</value>
                </property>
                <property>
                    <name>hbase.mapred.outputtable</name>
                    <value>DETAILS_TABLE</value>
                </property>
                <property>
                    <name>mapreduce.outputformat.class</name>
                    <value>org.apache.hadoop.hbase.mapreduce.TableOutputFormat</value>
                </property>

                <property>
                    <name>mapred.map.tasks</name>
                    <value>${mapperCount}</value>
                </property>
                <property>
                    <name>mapred.reduce.tasks</name>
                    <value>${reducerCount}</value>
                </property>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end" />
        <error to="fail" />
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error
            message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name='end' />
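The ${wf:actionData('get-scanner')['scan']} expression above assumes an earlier action named get-scanner that declares <capture-output/>. A hypothetical sketch of that action (the main class name is a placeholder): its main class would write a line such as scan=BASE64STRING to the properties file named by the oozie.action.output.properties system property, which Oozie then exposes as action data.

```xml
<action name='get-scanner'>
    <java>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <!-- hypothetical class that serializes the Scan and writes
             "scan=<base64>" to the Oozie action-output properties file -->
        <main-class>com.hbase.util.ScanStringGenerator</main-class>
        <capture-output/>
    </java>
    <ok to='jobSample'/>
    <error to='fail'/>
</action>
```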

To learn more about the property names and values, dump the configuration parameters. Also, the scan property is a serialized (Base64-encoded) version of the Scan object, so I am not sure how to specify something like this there -

scan.addColumn(Bytes.toBytes(FAMILY),
            Bytes.toBytes(VALUE));
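The value of hbase.mapreduce.scan is produced by serializing the Scan and Base64-encoding the resulting bytes; with HBase on the classpath, TableMapReduceUtil has a convertScanToString helper along these lines (the serialization format differs across HBase versions). The encoding step itself is plain Base64, illustrated here with only the JDK; the byte array is a stand-in, not a real serialized Scan:

```java
import java.util.Base64;

public class ScanStringDemo {
    // Mirrors only the final step HBase performs: the Scan is first
    // serialized to bytes (Writable in old versions, protobuf in newer
    // ones), and those bytes are Base64-encoded into the property value.
    public static String encode(byte[] serializedScan) {
        return Base64.getEncoder().encodeToString(serializedScan);
    }

    public static byte[] decode(String scanProperty) {
        return Base64.getDecoder().decode(scanProperty);
    }

    public static void main(String[] args) {
        byte[] fakeSerializedScan = {10, 20, 30};   // stand-in bytes
        String value = encode(fakeSerializedScan);  // what hbase.mapreduce.scan would hold
        System.out.println(value);                  // prints "ChQe"
    }
}
```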
answered 2015-06-12T09:29:41.690