
Problem -

I am running a query on AWS EMR. It fails with this exception -

java.io.FileNotFoundException: File s3://xxx/yyy/internal_test_automation/2016/09/17/17156/data/feed/commerce_feed_redshift_dedup/.hive-staging_hive_2016-09-17_10-24-20_998_2833938482542362802-639 does not exist.

I have included all the information related to this problem below. Please take a look.

Query -

INSERT OVERWRITE TABLE base_performance_order_dedup_20160917
SELECT *
FROM
(
    select
        commerce_feed_redshift_dedup.sku AS sku,
        commerce_feed_redshift_dedup.revenue AS revenue,
        commerce_feed_redshift_dedup.orders AS orders,
        commerce_feed_redshift_dedup.units AS units,
        commerce_feed_redshift_dedup.feed_date AS feed_date
    from commerce_feed_redshift_dedup
) tb

Exception -

ERROR Error while executing queries
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1474097800415_0311_2_00, diagnostics=[Vertex vertex_1474097800415_0311_2_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: commerce_feed_redshift_dedup initializer failed, vertex=vertex_1474097800415_0311_2_00 [Map 1], java.io.FileNotFoundException: File s3://xxx/yyy/internal_test_automation/2016/09/17/17156/data/feed/commerce_feed_redshift_dedup/.hive-staging_hive_2016-09-17_10-24-20_998_2833938482542362802-639 does not exist.
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.listStatus(S3NativeFileSystem.java:987)
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.listStatus(S3NativeFileSystem.java:929)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.listStatus(EmrFileSystem.java:339)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1530)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1537)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1556)
    at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1601)
    at org.apache.hadoop.fs.FileSystem$4.<init>(FileSystem.java:1778)
    at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1777)
    at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1755)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:239)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:201)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:281)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:363)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:486)
    at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:200)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1474097800415_0311_2_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1474097800415_0311_2_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
    at org.apache.hive.jdbc.HiveStatement.waitForOperationToComplete(HiveStatement.java:348)
    at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:251)
    at com.XXX.YYY.executors.HiveQueryExecutor.executeQueriesInternal(HiveQueryExecutor.java:234)
    at com.XXX.YYY.executors.HiveQueryExecutor.executeQueriesMetricsEnabled(HiveQueryExecutor.java:184)
    at com.XXX.YYY.azkaban.jobexecutors.impl.AzkabanHiveQueryExecutor.run(AzkabanHiveQueryExecutor.java:68)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at azkaban.jobtype.JavaJobRunnerMain.runMethod(JavaJobRunnerMain.java:192)
    at azkaban.jobtype.JavaJobRunnerMain.<init>(JavaJobRunnerMain.java:132)
    at azkaban.jobtype.JavaJobRunnerMain.main(JavaJobRunnerMain.java:76)

Hive configuration properties that I set before executing the above query -

set hivevar:hive.mapjoin.smalltable.filesize=2000000000
set hivevar:mapreduce.map.speculative=false
set hivevar:mapreduce.output.fileoutputformat.compress=true
set hivevar:hive.exec.compress.output=true
set hivevar:mapreduce.task.timeout=6000000
set hivevar:hive.optimize.bucketmapjoin.sortedmerge=true
set hivevar:io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec
set hivevar:hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat
set hivevar:hive.auto.convert.sortmerge.join.noconditionaltask=false
set hivevar:FEED_DATE=20160917
set hivevar:hive.optimize.bucketmapjoin=true
set hivevar:hive.exec.compress.intermediate=true
set hivevar:hive.enforce.bucketmapjoin=true
set hivevar:mapred.output.compress=true
set hivevar:mapreduce.map.output.compress=true
set hivevar:hive.auto.convert.sortmerge.join=false
set hivevar:hive.auto.convert.join=false
set hivevar:mapreduce.reduce.speculative=false
set hivevar:PD_KEY=vijay-test-mail@XXX.pagerduty.com
set hivevar:mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
set hive.mapjoin.smalltable.filesize=2000000000
set mapreduce.map.speculative=false
set mapreduce.output.fileoutputformat.compress=true
set hive.exec.compress.output=true
set mapreduce.task.timeout=6000000
set hive.optimize.bucketmapjoin.sortedmerge=true
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec
set hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat
set hive.auto.convert.sortmerge.join.noconditionaltask=false
set FEED_DATE=20160917
set hive.optimize.bucketmapjoin=true
set hive.exec.compress.intermediate=true
set hive.enforce.bucketmapjoin=true 
set mapred.output.compress=true 
set mapreduce.map.output.compress=true 
set hive.auto.convert.sortmerge.join=false 
set hive.auto.convert.join=false 
set mapreduce.reduce.speculative=false 
set PD_KEY=vijay-test-mail@XXX.pagerduty.com 
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
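
A note on the duplication above: the hivevar: prefixed entries only define substitution variables; they do not change Hive configuration, which is presumably why every property is also set a second time without the prefix. A minimal illustration (the WHERE clause is hypothetical):

    set hivevar:FEED_DATE=20160917;
    -- the variable is only usable via substitution, e.g.:
    select * from commerce_feed_redshift_dedup where feed_date = '${hivevar:FEED_DATE}';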

/etc/hive/conf/hive-site.xml

<configuration>

<!-- Hive Configuration can either be stored in this file or in the hadoop configuration files  -->
<!-- that are implied by Hadoop setup variables.                                                -->
<!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive    -->
<!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->
<!-- resource).                                                                                 -->

<!-- Hive Execution Parameters -->


<property>
  <name>hbase.zookeeper.quorum</name>
  <value>ip-172-30-2-16.us-west-2.compute.internal</value>
  <description>http://wiki.apache.org/hadoop/Hive/HBaseIntegration</description>
</property>

<property>
  <name>hive.execution.engine</name>
  <value>tez</value>
</property>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ip-172-30-2-16.us-west-2.compute.internal:8020</value>
  </property>


  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://ip-172-30-2-16.us-west-2.compute.internal:9083</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://ip-172-30-2-16.us-west-2.compute.internal:3306/hive?createDatabaseIfNotExist=true</value>
    <description>username to use against metastore database</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.mariadb.jdbc.Driver</value>
    <description>username to use against metastore database</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>mrN949zY9P2riCeY</value>
    <description>password to use against metastore database</description>
  </property>

  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>

  <property>
    <name>mapred.reduce.tasks</name>
    <value>-1</value>
  </property>

  <property>
    <name>mapred.max.split.size</name>
    <value>256000000</value>
  </property>

  <property>
    <name>hive.metastore.connect.retries</name>
    <value>15</value>
  </property>

  <property>
    <name>hive.optimize.sort.dynamic.partition</name>
    <value>true</value>
  </property>

  <property>
    <name>hive.async.log.enabled</name>
    <value>false</value>
  </property>

</configuration>

/etc/tez/conf/tez-site.xml

<configuration>
    <property>
    <name>tez.lib.uris</name>
    <value>hdfs:///apps/tez/tez.tar.gz</value>
  </property>

  <property>
    <name>tez.use.cluster.hadoop-libs</name>
    <value>true</value>
  </property>

  <property>
    <name>tez.am.grouping.max-size</name>
    <value>134217728</value>
  </property>

  <property>
    <name>tez.runtime.intermediate-output.should-compress</name>
    <value>true</value>
  </property>

  <property>
    <name>tez.runtime.intermediate-input.is-compressed</name>
    <value>true</value>
  </property>

  <property>
    <name>tez.runtime.intermediate-output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.LzoCodec</value>
  </property>

  <property>
    <name>tez.runtime.intermediate-input.compress.codec</name>
    <value>org.apache.hadoop.io.compress.LzoCodec</value>
  </property>

  <property>
    <name>tez.history.logging.service.class</name>
    <value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
  </property>

  <property>
    <name>tez.tez-ui.history-url.base</name>
    <value>http://ip-172-30-2-16.us-west-2.compute.internal:8080/tez-ui/</value>
  </property>
</configuration>

Questions -

  1. Which process deleted this file? As far as Hive is concerned, this file should still exist. (Also, this file is not created by the application code.)
  2. When I ran the failed query a few more times, it passed. Why is the behavior so erratic?
  3. I just upgraded the hive-exec and hive-jdbc versions to 2.1.0, so it seems that some Hive configuration property is set wrongly or some property is missing. Can you help me find the wrongly set / missing Hive properties?

Note - I upgraded the hive-exec version from 0.13.0 to 2.1.0. All queries were working fine on the previous version.

Update 1

When I launched another cluster, it worked fine. I tested the same ETL three times on it.

When I did the same thing again on a new cluster, it showed the same exception. I cannot understand why this erratic behavior happens.

Help me understand this inconsistency.

I am new to Hive, so I have only a limited conceptual understanding of it.

Update 2 -

HDFS logs under the cluster public DNS name:50070 -

2016-09-20 11:31:55,155 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy (IPC Server handler 11 on 8020): Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-09-20 11:31:55,155 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy (IPC Server handler 11 on 8020): Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2016-09-20 11:31:55,155 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy (IPC Server handler 11 on 8020): Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2016-09-20 11:31:55,155 INFO org.apache.hadoop.ipc.Server (IPC Server handler 11 on 8020): IPC Server handler 11 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 172.30.2.207:56462 Call#7497 Retry#0
java.io.IOException: File /user/hive/warehouse/bc_kmart_3813.db/dp_internal_temp_full_load_offer_flexibility_20160920/.hive-staging_hive_2016-09-20_11-17-51_558_1222354063413369813-58/_task_tmp.-ext-10000/_tmp.000079_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

When I searched for this exception, I found this page - https://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo

In my cluster, there is one datanode with 32 GB of disk space.

/etc/hive/conf/hive-default.xml.template

<property>
    <name>hive.exec.stagingdir</name>
    <value>.hive-staging</value>
    <description>Directory name that will be created inside table locations in order to support HDFS encryption. This is replaces ${hive.exec.scratchdir} for query results with the exception of read-only tables. In all cases ${hive.exec.scratchdir} is still used for other temporary files, such as job plans.</description>
  </property>

Question -

  1. According to the logs, the hive-staging folder is created on the cluster machine (per /var/log/hadoop-hdfs/hadoop-hdfs-datanode-ip-172-30-2-189.log), so why does it create the same folder in S3 as well?

Update 3 -

Some of the exceptions are of type LeaseExpiredException -

2016-09-21 08:53:17,995 INFO org.apache.hadoop.ipc.Server (IPC Server handler 13 on 8020): IPC Server handler 13 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 172.30.2.189:42958 Call#726 Retry#0: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive/hadoop/_tez_session_dir/6ebd2d18-f5b9-4176-ab8f-d6c78124b636/.tez/application_1474442135017_0022/recovery/1/summary (inode 20326): File does not exist. Holder DFSClient_NONMAPREDUCE_1375788009_1 does not have any open files.


2 Answers


I solved the problem. Let me explain it in detail.

The exceptions that were coming -

  1. LeaseExpiredException - from the HDFS side.
  2. FileNotFoundException - from the Hive side (when the Tez execution engine executes the DAG)

Problem scenario -

  1. We had just upgraded the Hive version from 0.13.0 to 2.1.0, and everything was working fine on the previous version. Zero runtime exceptions.

Different ideas for solving the problem -

  1. The first thought was that two threads were working on the same piece of data because of NameNode intelligence. But according to the following settings

    set mapreduce.map.speculative=false
    set mapreduce.reduce.speculative=false

that was not possible.

  2. Then I increased the count from 1000 to 100000 for the following settings -

    set hive.exec.max.dynamic.partitions=100000;
    set hive.exec.max.dynamic.partitions.pernode=100000;

That did not work either.

  3. The third idea was that, within the same process, what mapper-1 created was being deleted by another mapper/reducer. But we did not find any such entries in the HiveServer2 or Tez logs.

  4. Finally, the root cause was in the application-layer code itself. In the hive-exec-2.1.0 release, they introduced a new configuration property -

    "hive.exec.stagingdir" : ".hive-staging"

Description of the above property -

Directory name that will be created inside table locations in order to support HDFS encryption. This replaces ${hive.exec.scratchdir} for query results with the exception of read-only tables. In all cases ${hive.exec.scratchdir} is still used for other temporary files, such as job plans.
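
To make that concrete, here is a sketch (the table is the one from the question; the staging-directory suffix is illustrative): because ".hive-staging" is a relative name, Hive creates it inside the table's own location, so for an S3-backed table the staging directory lands in S3 as well.

    -- Look up the table's location; in our case it is an S3 path:
    DESCRIBE FORMATTED commerce_feed_redshift_dedup;
    -- Location: s3://xxx/yyy/internal_test_automation/2016/09/17/17156/data/feed/commerce_feed_redshift_dedup
    -- Any INSERT OVERWRITE touching this table then stages under that location, e.g.:
    --   .../commerce_feed_redshift_dedup/.hive-staging_hive_<timestamp>_<id>/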

So, if there are any concurrent jobs in the application-layer code (ETL) performing operations (rename/delete/move) on the same table, it can lead to this problem.

And in our case, 2 concurrent jobs were performing an "INSERT OVERWRITE" on the same table, which caused one mapper's staging (metadata) file to be deleted and led to this issue.
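
A sketch of that race (raw_commerce_feed is a hypothetical source table; this illustrates the pattern, not our actual ETL code):

    -- Job A: rebuilds the table; with hive.exec.stagingdir=.hive-staging it stages
    -- inside the table location and deletes the .hive-staging_* directory on completion.
    INSERT OVERWRITE TABLE commerce_feed_redshift_dedup
    SELECT sku, revenue, orders, units, feed_date FROM raw_commerce_feed;

    -- Job B: runs concurrently and reads the same table.
    INSERT OVERWRITE TABLE base_performance_order_dedup_20160917
    SELECT * FROM commerce_feed_redshift_dedup;
    -- While Job B generates input splits, it lists the table directory and sees
    -- Job A's .hive-staging_* entry; when Job A removes it moments later, split
    -- generation fails with java.io.FileNotFoundException, as in the trace above.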

Resolution -

  1. Move the staging-file location outside the table (the table lives in S3); see the sketch after this list.
  2. Disable HDFS encryption (as mentioned in the description of the stagingdir property).
  3. Change your application-layer code to avoid the concurrency issue.
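
For resolution 1, a minimal sketch (the HDFS path is an arbitrary example; verify how your Hive version resolves an absolute hive.exec.stagingdir before relying on it): point the staging directory at a location outside the S3 table directories, either per session as below or once in hive-site.xml.

    -- per session, before running the INSERT OVERWRITE:
    set hive.exec.stagingdir=/tmp/hive/.hive-staging;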
answered 2016-09-27T04:13:07.900

The root cause is that Hadoop was still trying to write to the file while it was being deleted from a different session.

answered 2020-12-23T03:00:35.327