I have a Flink v1.2 setup with 3 JobManagers and 2 TaskManagers. I want to use HDFS for the state backend and checkpoints, as well as for the ZooKeeper storageDir:
state.backend: filesystem
state.backend.fs.checkpointdir: hdfs:///[ip:port]/flink-checkpoints
state.checkpoints.dir: hdfs:///[ip:port]/external-checkpoints
high-availability: zookeeper
high-availability.zookeeper.storageDir: hdfs:///[ip:port]/recovery
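
For comparison, my understanding is that a fully qualified HDFS URI carries the NameNode address as the authority right after the scheme, i.e. two slashes rather than three; a sketch with a hypothetical NameNode at namenode-host:8020:

state.backend.fs.checkpointdir: hdfs://namenode-host:8020/flink-checkpoints
high-availability.zookeeper.storageDir: hdfs://namenode-host:8020/recovery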
On the JobManager I logged into, the log shows:
2017-03-22 17:41:43,559 INFO org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: high-availability.zookeeper.client.acl, open
2017-03-22 17:41:43,680 ERROR org.apache.flink.runtime.jobmanager.JobManager - Error while starting up JobManager
java.io.IOException: The given HDFS file URI (hdfs:///ip:port/recovery/blob) did not describe the HDFS NameNode. The attempt to use a default HDFS configuration, as specified in the 'fs.hdfs.hdfsdefault' or 'fs.hdfs.hdfssite' config parameter failed due to the following problem: Either no default file system was registered, or the provided configuration contains no valid authority component (fs.default.name or fs.defaultFS) describing the (hdfs namenode) host and port.
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:298)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:288)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:310)
at org.apache.flink.runtime.blob.FileSystemBlobStore.<init>(FileSystemBlobStore.java:67)
at org.apache.flink.runtime.blob.BlobServer.<init>(BlobServer.java:114)
at org.apache.flink.runtime.jobmanager.JobManager$.createJobManagerComponents(JobManager.scala:2488)
at org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(JobManager.scala:2643)
at org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(JobManager.scala:2595)
at org.apache.flink.runtime.jobmanager.JobManager$.startActorSystemAndJobManagerActors(JobManager.scala:2242)
at org.apache.flink.runtime.jobmanager.JobManager$.liftedTree3$1(JobManager.scala:2020)
at org.apache.flink.runtime.jobmanager.JobManager$.runJobManager(JobManager.scala:2019)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply$mcV$sp(JobManager.scala:2098)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply(JobManager.scala:2076)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply(JobManager.scala:2076)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.flink.runtime.jobmanager.JobManager$.retryOnBindException(JobManager.scala:2131)
at org.apache.flink.runtime.jobmanager.JobManager$.runJobManager(JobManager.scala:2076)
at org.apache.flink.runtime.jobmanager.JobManager$$anon$9.call(JobManager.scala:1971)
at org.apache.flink.runtime.jobmanager.JobManager$$anon$9.call(JobManager.scala:1969)
at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
at org.apache.flink.runtime.jobmanager.JobManager$.main(JobManager.scala:1969)
at org.apache.flink.runtime.jobmanager.JobManager.main(JobManager.scala)
2017-03-22 17:41:43,694 WARN org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: The given HDFS file URI (hdfs:///ip:port/recovery/blob) did not describe the HDFS NameNode. The attempt to use a default HDFS configuration, as specified in the 'fs.hdfs.hdfsdefault' or 'fs.hdfs.hdfssite' config parameter failed due to the following problem: Either no default file system was registered, or the provided configuration contains no valid authority component (fs.default.name or fs.defaultFS) describing the (hdfs namenode) host and port.
2017-03-22 17:41:43,694 ERROR org.apache.flink.runtime.jobmanager.JobManager - Failed to run JobManager.
java.io.IOException: The given HDFS file URI (hdfs:///ip:port/recovery/blob) did not describe the HDFS NameNode. The attempt to use a default HDFS configuration, as specified in the 'fs.hdfs.hdfsdefault' or 'fs.hdfs.hdfssite' config parameter failed due to the following problem: Either no default file system was registered, or the provided configuration contains no valid authority component (fs.default.name or fs.defaultFS) describing the (hdfs namenode) host and port.
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:298)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:288)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:310)
at org.apache.flink.runtime.blob.FileSystemBlobStore.<init>(FileSystemBlobStore.java:67)
at org.apache.flink.runtime.blob.BlobServer.<init>(BlobServer.java:114)
at org.apache.flink.runtime.jobmanager.JobManager$.createJobManagerComponents(JobManager.scala:2488)
at org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(JobManager.scala:2643)
at org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(JobManager.scala:2595)
at org.apache.flink.runtime.jobmanager.JobManager$.startActorSystemAndJobManagerActors(JobManager.scala:2242)
at org.apache.flink.runtime.jobmanager.JobManager$.liftedTree3$1(JobManager.scala:2020)
at org.apache.flink.runtime.jobmanager.JobManager$.runJobManager(JobManager.scala:2019)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply$mcV$sp(JobManager.scala:2098)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply(JobManager.scala:2076)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply(JobManager.scala:2076)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.flink.runtime.jobmanager.JobManager$.retryOnBindException(JobManager.scala:2131)
at org.apache.flink.runtime.jobmanager.JobManager$.runJobManager(JobManager.scala:2076)
at org.apache.flink.runtime.jobmanager.JobManager$$anon$9.call(JobManager.scala:1971)
at org.apache.flink.runtime.jobmanager.JobManager$$anon$9.call(JobManager.scala:1969)
at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
at org.apache.flink.runtime.jobmanager.JobManager$.main(JobManager.scala:1969)
at org.apache.flink.runtime.jobmanager.JobManager.main(JobManager.scala)
2017-03-22 17:41:43,697 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
2017-03-22 17:41:43,704 INFO akka.remote.RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
Hadoop is installed as a single-node cluster on the VM I use for this setup. Why does Flink require these extra parameters to be configured? (By the way, they are not in the official documentation.)
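
For reference, the error mentions fs.defaultFS; on a stock single-node Hadoop install that is set in core-site.xml, presumably along these lines (namenode-host:8020 is a placeholder for my VM's address):

<property>
  <!-- placeholder host/port for my single-node NameNode -->
  <name>fs.defaultFS</name>
  <value>hdfs://namenode-host:8020</value>
</property>

And if Flink is supposed to pick that file up, I assume it has to be pointed at the Hadoop configuration directory, e.g. via fs.hdfs.hadoopconf in flink-conf.yaml (the path is a placeholder):

fs.hdfs.hadoopconf: /path/to/hadoop/etc/hadoop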