
I am facing a problem with Hive on Tez.

I can select from an existing Hive table without any problem:

SELECT * FROM Transactions;

But when I try to use an aggregate function or COUNT(*) on this table, for example:

SELECT COUNT(*) FROM Transactions;

I get the following in the hive.log file. (A plain SELECT * is answered by a simple fetch task, while COUNT(*) has to launch a Tez DAG, which is why only the aggregate triggers the failure.)

2017-08-13T10:04:27,892 INFO [4a5b6a0c-9edb-45ea-8d49-b2f4b0d2b636 main] conf.HiveConf: Using the default value passed in for log id: 4a5b6a0c-9edb-45ea-8d49-b2f4b0d2b636
2017-08-13T10:04:27,910 INFO [4a5b6a0c-9edb-45ea-8d49-b2f4b0d2b636 main] session.SessionState: Error closing tez session
java.lang.RuntimeException: java.util.concurrent.ExecutionException: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1498057873641_0017 failed 2 times due to AM Container for appattempt_1498057873641_0017_000002 exited with exitCode: -1000
Failing this attempt. Diagnostics: java.io.FileNotFoundException: File /tmp/hadoop-hadoop/nm-local-dir/filecache does not exist
For more detailed output, check the application tracking page: http://hadoop-master:8090/cluster/app/application_1498057873641_0017 Then click on links to logs of each attempt. Failing the application.
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.isOpen(TezSessionState.java:173) ~[hive-exec-2.1.1.jar:2.1.1]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.toString(TezSessionState.java:135) ~[hive-exec-2.1.1.jar:2.1.1]
    at java.lang.String.valueOf(String.java:2994) ~[?:1.8.0_131]
    at java.lang.StringBuilder.append(StringBuilder.java:131) ~[?:1.8.0_131]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.closeIfNotDefault(TezSessionPoolManager.java:346) ~[hive-exec-2.1.1.jar:2.1.1]
    at org.apache.hadoop.hive.ql.session.SessionState.close(SessionState.java:1524) [hive-exec-2.1.1.jar:2.1.1]
    at org.apache.hadoop.hive.cli.CliSessionState.close(CliSessionState.java:66) [hive-cli-2.1.1.jar:2.1.1]
    ...
    at org.apache.hadoop.util.RunJar.run(RunJar.java:234) [hadoop-common-2.8.0.jar:?]
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148) [hadoop-common-2.8.0.jar:?]
Caused by: java.util.concurrent.ExecutionException: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1498057873641_0017 failed 2 times due to AM Container for appattempt_1498057873641_0017_000002 exited with exitCode: -1000
Failing this attempt. Diagnostics: java.io.FileNotFoundException: File /tmp/hadoop-hadoop/nm-local-dir/filecache does not exist
For more detailed output, check the application tracking page: http://hadoop-master:8090/cluster/app/application_1498057873641_0017 Then click on links to logs of each attempt. Failing the application.
    at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_131]
    at java.util.concurrent.FutureTask.get(FutureTask.java:206) ~[?:1.8.0_131]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.isOpen(TezSessionState.java:168) ~[hive-exec-2.1.1.jar:2.1.1]
    ... 17 more
Caused by: org.apache.tez.dag.api.SessionNotRunning: TezSession has already shutdown. Application application_1498057873641_0017 failed 2 times due to AM Container for appattempt_1498057873641_0017_000002 exited with exitCode: -1000
Failing this attempt. Diagnostics: java.io.FileNotFoundException: File /tmp/hadoop-hadoop/nm-local-dir/filecache does not exist
For more detailed output, check the application tracking page: http://hadoop-master:8090/cluster/app/application_1498057873641_0017 Then click on links to logs of each attempt. Failing the application.
    at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:914) ~[tez-api-0.8.4.jar:0.8.4]
    at org.apache.tez.client.TezClient.waitTillReady(TezClient.java:883) ~[tez-api-0.8.4.jar:0.8.4]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.startSessionAndContainers(TezSessionState.java:416) ~[hive-exec-2.1.1.jar:2.1.1]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.access$000(TezSessionState.java:97) ~[hive-exec-2.1.1.jar:2.1.1]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState$1.call(TezSessionState.java:333) ~[hive-exec-2.1.1.jar:2.1.1]
    at org.apache.hadoop.hive.ql.exec.tez.TezSessionState$1.call(TezSessionState.java:329) ~[hive-exec-2.1.1.jar:2.1.1]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:

I solved that issue by creating the missing directory "/tmp/hadoop-hadoop/nm-local-dir/filecache" on all the cluster nodes.
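For reference, a minimal shell sketch of that step, assuming passwordless SSH and the node names listed in the hive-site.xml below:

# Create the NodeManager local filecache dir on every node in the cluster.
for node in hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3 hadoop-slave4 hadoop-slave5; do
    ssh "$node" 'mkdir -p /tmp/hadoop-hadoop/nm-local-dir/filecache'
done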

Then I got a different error in hive.log when trying to execute SELECT COUNT(*) FROM Transactions; again, as follows:

2017-08-13T10:06:35,567 INFO [main] optimizer.ColumnPrunerProcFactory: RS 3 oldColExprMap: {VALUE._col0=Column[_col0]}
2017-08-13T10:06:35,568 INFO [main] optimizer.ColumnPrunerProcFactory: RS 3 newColExprMap: {VALUE._col0=Column[_col0]}
2017-08-13T10:06:35,604 INFO [213ea036-8245-4042-a5a1-ccd686ea2465 main] Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
2017-08-13T10:06:35,658 INFO [main] annotation.StatsRulesProcFactory: STATS-GBY[2]: Equals 0 in number of rows. 0 rows will be set to 1
2017-08-13T10:06:35,679 INFO [main] optimizer.SetReducerParallelism: Number of reducers determined to be: 1
2017-08-13T10:06:35,680 INFO [main] parse.TezCompiler: Cycle free: true
2017-08-13T10:06:35,689 INFO [213ea036-8245-4042-a5a1-ccd686ea2465 main] Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
2017-08-13T10:06:35,741 INFO [main] parse.CalcitePlanner: Completed plan generation
2017-08-13T10:06:35,742 INFO [main] ql.Driver: Semantic Analysis Completed
2017-08-13T10:06:35,742 INFO [main] ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:c0, type:bigint, comment:null)], properties:null)
2017-08-13T10:06:35,744 INFO [main] exec.ListSinkOperator: Initializing operator LIST_SINK[7]
2017-08-13T10:06:35,745 INFO [main] ql.Driver: Completed compiling command(queryId=hadoop_20170813100633_31ca0425-6aca-434c-8039-48bc07); Time taken: 2.131 seconds
2017-08-13T10:06:35,768 INFO [main] ... is deprecated. Instead, use mapreduce.job.committer.setup.cleanup.needed
2017-08-13T10:06:35,840 INFO [main] ql.Context: New scratch dir is hdfs://hadoop-master:8020/tmp/hive/hadoop/213ea036-8245-4042-a5a1-ccd686ea2465/hive_2017-08-13_10-06-33_614_5648783469307420794-1
2017-08-13T10:06:35,845 INFO [main] exec.Task: Session is already open
2017-08-13T10:06:35,847 INFO [main] tez.DagUtils: Localizing resource because it does not exist: file:/opt/apache-tez-0.8.4-bin to dest: hdfs://hadoop-master:8020/tmp/hive/hadoop/_tez_session_dir/213ea036-8245-4042-a5a1-ccd686ea2465/apache-tez-0.8.4-bin
2017-08-13T10:06:35,850 INFO [main] tez.DagUtils: Looks like another thread or process is writing the same file
2017-08-13T10:06:35,851 INFO [main] tez.DagUtils: Waiting for the file hdfs://hadoop-master:8020/tmp/hive/hadoop/_tez_session_dir/213ea036-8245-4042-a5a1-ccd686ea2465/apache-tez-0.8.4-bin (5 attempts, with 5000ms interval)
2017-08-13T10:07:00,860 ERROR [main] tez.DagUtils: Could not find the jar that was being uploaded
2017-08-13T10:07:00,861 ERROR [main] exec.Task: Failed to execute tez graph.
java.io.IOException: Previous writer likely failed to write hdfs://hadoop-master:8020/tmp/hive/hadoop/_tez_session_dir/213ea036-8245-4042-a5a1-ccd686ea2465/apache-tez-0.8.4-bin. Failing because I am unlikely to write too.
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1022)
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902)
    at org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845)

I checked this Jira issue, "https://issues.apache.org/jira/browse/AMBARI-9821", but I still face this error when trying to execute COUNT(*) on this table.
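The "Previous writer likely failed to write" message above suggests a stale upload may be left behind in the Tez session directory. One way to check, and if needed clean up, is to inspect that HDFS path directly (my suggestion, not something from the original post; the <stale-session-id> placeholder is hypothetical):

hdfs dfs -ls hdfs://hadoop-master:8020/tmp/hive/hadoop/_tez_session_dir/
# If a leftover directory from a failed run shows up, remove it before retrying, e.g.:
# hdfs dfs -rm -r hdfs://hadoop-master:8020/tmp/hive/hadoop/_tez_session_dir/<stale-session-id>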

Tez configuration file:

<configuration>
    <property>
        <name>tez.lib.uris</name>
        <value>hdfs://hadoop-master:8020/user/tez/apache-tez-0.8.4-bin/share/tez.tar.gz</value>
        <type>string</type>
    </property>
</configuration>
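A quick sanity check that tez.lib.uris points at an archive that actually exists in HDFS (using the exact URI from the file above):

hdfs dfs -ls hdfs://hadoop-master:8020/user/tez/apache-tez-0.8.4-bin/share/tez.tar.gz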

Hive configuration file:

<configuration>
    <property>
        <name>hive.server2.thrift.http.port</name>
        <value>10001</value>
    </property>
    <property>
        <name>hive.server2.thrift.http.min.worker.threads</name>
        <value>5</value>
    </property>
    <property>
        <name>hive.server2.thrift.http.max.worker.threads</name>
        <value>500</value>
    </property>
    <property>
        <name>hive.server2.thrift.http.path</name>
        <value>cliservice</value>
    </property>
    <property>
        <name>hive.server2.thrift.min.worker.threads</name>
        <value>5</value>
    </property>
    <property>
        <name>hive.server2.thrift.max.worker.threads</name>
        <value>500</value>
    </property>
    <property>
        <name>hive.server2.transport.mode</name>
        <value>http</value>
        <description>Server transport mode. "binary" or "http".</description>
    </property>
    <property>
        <name>hive.server2.allow.user.substitution</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.server2.authentication</name>
        <value>NONE</value>
    </property>
    <property>
        <name>hive.server2.thrift.bind.host</name>
        <value>10.100.38.136</value>
    </property>
    <property>
        <name>hive.support.concurrency</name>
        <description>Enable Hive's Table Lock Manager Service</description>
        <value>true</value>
    </property>
    <property>
        <name>hive.zookeeper.quorum</name>
        <description>Zookeeper quorum used by Hive's Table Lock Manager</description>
        <value>hadoop-master,hadoop-slave1,hadoop-slave2,hadoop-slave3,hadoop-slave4,hadoop-slave5</value>
    </property>
    <property>
        <name>hive.zookeeper.client.port</name>
        <value>2181</value>
        <description>The port at which the clients will connect.</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:derby://hadoop-master:1527/metastore_db2</value>
        <description>
            JDBC connect string for a JDBC metastore.
            To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
            For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
        </description>
    </property>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
        <description>location of default database for the warehouse</description>
    </property>
    <property>
        <name>hive.server2.webui.host</name>
        <value>10.100.38.136</value>
    </property>
    <property>
        <name>hive.server2.webui.port</name>
        <value>10010</value>
    </property>
    <!--<property>
        <name>hive.metastore.local</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.metastore.uris</name>
        <value/>
        <value>thrift://hadoop-master:9083</value>
        <value>file:///source/apache-hive-2.1.1-bin/bin/metastore_db/</value>
        <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
    </property>-->
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>org.apache.derby.jdbc.ClientDriver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.PersistenceManagerFactoryClass</name>
        <value>org.datanucleus.api.jdo.JDOPersistenceManagerFactory</value>
        <description>class implementing the jdo persistence</description>
    </property>
    <property>
        <name>datanucleus.autoStartMechanism</name>
        <value>SchemaTable</value>
    </property>
    <property>
        <name>hive.execution.engine</name>
        <value>tez</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>APP</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>mine</value>
    </property>
    <!--<property>
        <name>datanucleus.autoCreateSchema</name>
            <value>false</value>
            <description>Creates necessary schema on a startup if one doesn't exist</description>
    </property> -->
</configuration>
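One way to isolate the problem (my suggestion, not part of the original post): since hive.execution.engine is set to tez above, temporarily switching a single session back to MapReduce shows whether the table and metastore are healthy and only the Tez path is broken:

hive -e "SET hive.execution.engine=mr; SELECT COUNT(*) FROM Transactions;"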

Here are also the YARN diagnostics:

Application application_1498057873641_0018 failed 2 times due to AM Container for appattempt_1498057873641_0018_000002 exited with exitCode: -103
Failing this attempt. Diagnostics: Container [pid=31779,containerID=container_1498057873641_0018_02_000001] is running beyond virtual memory limits. Current usage: 169.3 MB of 1 GB physical memory used; 2.6 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1498057873641_0018_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- ... .../jdk1.8.0_131/bin/java -Xmx819m -Djava.io... .../logs/userlogs/application_1498057873641_0018/container_1498057873641_0018_02_000001/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
For more detailed output, check the application tracking page: http://hadoop-master:8090/cluster/app/application_1498057873641_0018 Then click on links to logs of each attempt. Failing the application.
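These numbers line up with YARN's default virtual-memory check: 1 GB of physical memory times the default yarn.nodemanager.vmem-pmem-ratio of 2.1 gives exactly the 2.1 GB cap that the 2.6 GB JVM exceeded. A hedged sketch of the usual yarn-site.xml workaround (relax the ratio, or disable the check entirely; the NodeManagers need a restart afterwards):

<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
</property>
<!-- or, more bluntly: -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>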


1 Answer


You are most likely hitting https://issues.apache.org/jira/browse/HIVE-16398. As a workaround, you have to add the following to /usr/hdp//hive/conf/hive-env.sh:

# Folder containing extra libraries required for hive compilation/execution can be controlled by:
if [ "${HIVE_AUX_JARS_PATH}" != "" ]; then
  if [ -f "${HIVE_AUX_JARS_PATH}" ]; then
    export HIVE_AUX_JARS_PATH=${HIVE_AUX_JARS_PATH}
  elif [ -d "/usr/hdp/current/hive-webhcat/share/hcatalog" ]; then
    export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
  fi
elif [ -d "/usr/hdp/current/hive-webhcat/share/hcatalog" ]; then
  export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
fi
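After adding this, restart HiveServer2 (or at least open a fresh Hive CLI session) so the new HIVE_AUX_JARS_PATH export is picked up before re-running the COUNT(*) query.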
answered on 2017-08-14T07:14:01.937