
We are running Hadoop on GCE, with HDFS as the default file system and data input/output to/from GCS.

Hadoop version: 1.2.1. Connector version: com.google.cloud.bigdataoss:gcs-connector:1.3.0-hadoop1

Observed behavior: the JobTracker (JT) accumulates threads in a waiting state, eventually leading to an OOM:

2015-02-06 14:15:51,206 ERROR org.apache.hadoop.mapred.JobTracker: Job initialization failed:
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:714)
        at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1371)
        at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.initialize(AbstractGoogleAsyncWriteChannel.java:318)
        at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.create(GoogleCloudStorageImpl.java:275)
        at com.google.cloud.hadoop.gcsio.CacheSupplementedGoogleCloudStorage.create(CacheSupplementedGoogleCloudStorage.java:145)
        at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.createInternal(GoogleCloudStorageFileSystem.java:184)
        at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.create(GoogleCloudStorageFileSystem.java:168)
        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopOutputStream.<init>(GoogleHadoopOutputStream.java:77)
        at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.create(GoogleHadoopFileSystemBase.java:655)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:564)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:545)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:452)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:444)
        at org.apache.hadoop.mapred.JobHistory$JobInfo.logSubmitted(JobHistory.java:1860)
        at org.apache.hadoop.mapred.JobInProgress$3.run(JobInProgress.java:709)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:706)
        at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:3890)
        at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Looking through the JT logs, I found the following warnings:

2015-02-06 14:30:17,442 WARN org.apache.hadoop.hdfs.DFSClient: Failed recovery attempt #0 from primary datanode xx.xxx.xxx.xxx:50010
java.io.IOException: Call to /xx.xxx.xxx.xxx:50020 failed on local exception: java.io.IOException: Couldn't set up IO streams
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
        at org.apache.hadoop.ipc.Client.call(Client.java:1118)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at com.sun.proxy.$Proxy10.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:414)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392)
        at org.apache.hadoop.hdfs.DFSClient.createClientDatanodeProtocolProxy(DFSClient.java:201)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3317)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2783)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2987)
Caused by: java.io.IOException: Couldn't set up IO streams
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
        at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
        at org.apache.hadoop.ipc.Client.call(Client.java:1093)
        ... 9 more
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method)
        at java.lang.Thread.start(Thread.java:714)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:635)
        ... 12 more

This looks similar to the Hadoop bug reported here: https://issues.apache.org/jira/browse/MAPREDUCE-5606

I tried the solution proposed there by disabling the saving of job logs to the output path, and it solved the problem at the expense of losing the logs :)
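
For anyone else hitting this, a minimal sketch of the change I made; the property and the special value "none" are the standard Hadoop 1.x knob for turning off per-job history output, though your setup may differ, so treat the exact mechanics as an assumption to verify:

    import org.apache.hadoop.mapred.JobConf;

    public class DisableJobHistoryExample {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // "none" is the Hadoop 1.x convention for suppressing the
            // per-job history files that would otherwise be written to
            // the job output path (and hence to GCS in our setup).
            conf.set("hadoop.job.history.user.location", "none");
        }
    }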

I also ran jstack on the JT, and it showed hundreds of WAITING or TIMED_WAITING threads like the following:

pool-52-thread-1" prio=10 tid=0x00007feaec581000 nid=0x524f in Object.wait() [0x00007fead39b3000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x000000074d86ba60> (a java.io.PipedInputStream)
        at java.io.PipedInputStream.read(PipedInputStream.java:327)
        - locked <0x000000074d86ba60> (a java.io.PipedInputStream)
        at java.io.PipedInputStream.read(PipedInputStream.java:378)
        - locked <0x000000074d86ba60> (a java.io.PipedInputStream)
        at com.google.api.client.util.ByteStreams.read(ByteStreams.java:181)
        at com.google.api.client.googleapis.media.MediaHttpUploader.setContentAndHeadersOnCurrentRequest(MediaHttpUploader.java:629)
        at com.google.api.client.googleapis.media.MediaHttpUploader.resumableUpload(MediaHttpUploader.java:409)
        at com.google.api.client.googleapis.media.MediaHttpUploader.upload(MediaHttpUploader.java:336)
        at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
        at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:343)
        at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:460)
        at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.run(AbstractGoogleAsyncWriteChannel.java:354)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
   Locked ownable synchronizers:
        - <0x000000074d864918> (a java.util.concurrent.ThreadPoolExecutor$Worker)

The JT seems to be having a hard time keeping up with its communication to GCS via the GCS connector.

Please advise,

Thanks


1 Answer


At the moment, every open FSDataOutputStream in the GCS connector for Hadoop consumes a thread until it's closed, because a separate thread needs to run the "resumable" HttpRequests while the user of the OutputStream writes bytes intermittently. In most cases (such as in individual Hadoop tasks), there's only one long-lived output stream, and possibly a few short-lived ones for writing small metadata/marker files, etc.
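
To make the thread-per-stream behavior concrete, here's a minimal sketch (the bucket and path are placeholders): the connector starts a background upload thread when the stream is created and only releases it on close(), which is why a stream that is never closed pins a thread for good:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class GcsStreamExample {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            // Placeholder bucket/path; any gs:// URI routed to the GCS
            // connector behaves the same way.
            Path path = new Path("gs://my-bucket/logs/marker");
            FileSystem fs = path.getFileSystem(conf);

            // create() spawns a background upload thread that lives
            // until the stream is closed; leak the stream and you leak
            // the thread.
            FSDataOutputStream out = fs.create(path);
            try {
                out.writeBytes("job marker\n");
            } finally {
                out.close(); // releases the connector's upload thread
            }
        }
    }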

In general, there are two possible causes for the OOM you're running into:

  1. You have lots of queued-up jobs; each submitted job holds an unclosed OutputStream and thus consumes a "waiting" thread. However, since you mention you only need to queue up ~10 jobs, this shouldn't be the root cause.
  2. Something is causing a "leak" of the PrintWriter objects that are initially created in logSubmitted and added to fileManager. Normally, terminal events (like logFinished) will correctly close() all the PrintWriters before removing them from the map via markCompleted, but in theory there may be bugs here or there which could cause one of those output streams to leak without being close()'d. For example, while I haven't had a chance to verify this assertion, it looks like an IOException while trying to do something like logMetaInfo will "removeWriter" without closing it (see the sketch after this list).
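
To illustrate the suspected leak pattern in item 2, here is a hypothetical sketch; the class and method names are invented for the example and are not the actual JobHistory code. A writer that is removed from the tracking map on an error path without being closed keeps its upload thread alive:

    import java.io.PrintWriter;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.Map;

    // Invented names for illustration only; this mirrors the suspected
    // JobHistory pattern, not the actual Hadoop source.
    class WriterRegistry {
        private final Map<String, ArrayList<PrintWriter>> writers =
                new HashMap<String, ArrayList<PrintWriter>>();

        // Leaky variant: dropping the reference does NOT close the
        // writers, so the GCS connector's upload threads stay alive.
        void removeWriterLeaky(String jobId) {
            writers.remove(jobId);
        }

        // Safe variant: close() each writer before forgetting it, which
        // lets the connector finish the upload and release its thread.
        void removeWriterSafely(String jobId) {
            ArrayList<PrintWriter> ws = writers.remove(jobId);
            if (ws != null) {
                for (PrintWriter w : ws) {
                    w.close();
                }
            }
        }
    }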

I've verified that, at least under normal circumstances, the OutputStream seems to get closed correctly, and my sample JobTracker showed a clean jstack after successfully running a lot of jobs.

TL;DR: there are some working theories as to why some resource may leak and ultimately prevent necessary threads from being created; in the meantime, you should consider changing hadoop.job.history.user.location to some HDFS location, as a way to preserve the job logs in the absence of placing them on GCS.
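
A minimal sketch of that interim change; it's the same property as the disable workaround above, but pointed at HDFS so the logs are preserved (the HDFS URI is a placeholder for your own namenode and path):

    import org.apache.hadoop.mapred.JobConf;

    public class HistoryLocationExample {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Placeholder URI; point this at any HDFS directory the
            // submitting user can write to. Job history then lands on
            // HDFS instead of opening long-lived GCS output streams.
            conf.set("hadoop.job.history.user.location",
                     "hdfs://namenode:8020/user/history");
        }
    }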

answered 2015-02-12T00:13:18.387