I'd like to know whether Apache Beam supports IO on Windows Azure Storage Blob files (wasb). Is there any support for it at the moment?
I'm asking because I've deployed an Apache Beam application to run a job on an Azure Spark cluster, and it turns out to be essentially impossible to do IO on wasb files from the storage container associated with that Spark cluster. Is there an alternative solution?
Context: I'm trying to run the WordCount example on my Azure Spark cluster. I have already set up some of the components as described here, which I believe will help me. Below is the part of my code where I set up the Hadoop configuration:
import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.spark.api.java.JavaSparkContext;

// Configure the Beam pipeline to run on the Spark runner via YARN.
final SparkPipelineOptions options =
    PipelineOptionsFactory.create().as(SparkPipelineOptions.class);
options.setAppName("WordCountExample");
options.setRunner(SparkRunner.class);
options.setSparkMaster("yarn");

// Register the wasb filesystem and the storage account key on the
// Hadoop configuration of the Spark context handed to the runner.
JavaSparkContext context = new JavaSparkContext();
Configuration conf = context.hadoopConfiguration();
conf.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
conf.set("fs.azure.account.key.<storage-account>.blob.core.windows.net", "<key>");
options.setProvidedSparkContext(context);

Pipeline pipeline = Pipeline.create(options);
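As a sanity check (my own addition, not something from the setup guide): the same wasb settings can be exercised directly through Hadoop's FileSystem API, outside of Beam. If the snippet below can read the file, the Hadoop-side configuration is fine and the failure is confined to Beam's URI handling. The <storage-account>, <key>, and kinglear.txt placeholders are the same ones used above.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WasbSmokeTest {
  public static void main(String[] args) throws Exception {
    // Same wasb settings as in the pipeline setup above.
    Configuration conf = new Configuration();
    conf.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
    conf.set("fs.azure.account.key.<storage-account>.blob.core.windows.net", "<key>");

    String uri =
        "wasb://<storage-container>@<storage-account>.blob.core.windows.net/user/spark/kinglear.txt";
    // FileSystem.get resolves the wasb scheme to NativeAzureFileSystem.
    FileSystem fs = FileSystem.get(URI.create(uri), conf);
    try (BufferedReader reader =
        new BufferedReader(new InputStreamReader(fs.open(new Path(uri))))) {
      System.out.println(reader.readLine()); // printing the first line proves read access
    }
  }
}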
Unfortunately, I keep ending up with the following error:
java.lang.IllegalStateException: Failed to validate wasb://<storage-container>@<storage-account>.blob.core.windows.net/user/spark/kinglear.txt
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:288)
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:195)
at org.apache.beam.sdk.runners.PipelineRunner.apply(PipelineRunner.java:76)
at org.apache.beam.runners.spark.SparkRunner.apply(SparkRunner.java:129)
at org.apache.beam.sdk.Pipeline.applyInternal(Pipeline.java:400)
at org.apache.beam.sdk.Pipeline.applyTransform(Pipeline.java:323)
at org.apache.beam.sdk.values.PBegin.apply(PBegin.java:58)
at org.apache.beam.sdk.Pipeline.apply(Pipeline.java:173)
at spark.example.WordCount.main(WordCount.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
Caused by: java.io.IOException: Unable to find handler for wasb://<storage-container>@<storage-account>.blob.core.windows.net/user/spark/kinglear.txt
at org.apache.beam.sdk.util.IOChannelUtils.getFactory(IOChannelUtils.java:187)
at org.apache.beam.sdk.io.TextIO$Read$Bound.apply(TextIO.java:283)
... 13 more
In this situation I'm considering implementing a custom IO for Azure Storage Blob, and if that is the way to go, I'd like to check with the community whether it is a viable alternative solution.
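For what it's worth, here is a rough sketch of where such a custom IO would plug in. The "Unable to find handler" message comes from IOChannelUtils.getFactory, which dispatches on the URI scheme and apparently has no factory registered for wasb, so in principle one would register one before constructing the pipeline. This is untested and assumes the IOChannelFactory/IOChannelUtils surface of the Beam version in the stack trace (the exact method set differs between releases, and later versions add copy/remove/toPath); WasbChannelFactory is a hypothetical class of mine, only the read path is sketched, and writes are left unimplemented.

import java.io.IOException;
import java.net.URI;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;
import java.util.Collection;
import java.util.Collections;
import org.apache.beam.sdk.util.IOChannelFactory;
import org.apache.beam.sdk.util.IOChannelUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical read-only channel factory for wasb:// specs that
// delegates to Hadoop's NativeAzureFileSystem.
class WasbChannelFactory implements IOChannelFactory {
  private final Configuration conf;

  WasbChannelFactory(Configuration conf) {
    this.conf = conf;
  }

  private FileSystem fs(String spec) throws IOException {
    return FileSystem.get(URI.create(spec), conf);
  }

  @Override
  public ReadableByteChannel open(String spec) throws IOException {
    return Channels.newChannel(fs(spec).open(new Path(spec)));
  }

  @Override
  public Collection<String> match(String spec) throws IOException {
    // Simplification: treat the spec as a single literal file, no glob expansion.
    return Collections.singletonList(spec);
  }

  @Override
  public long getSizeBytes(String spec) throws IOException {
    return fs(spec).getFileStatus(new Path(spec)).getLen();
  }

  @Override
  public boolean isReadSeekEfficient(String spec) {
    return true;
  }

  @Override
  public String resolve(String path, String other) {
    return new Path(path, other).toString();
  }

  @Override
  public WritableByteChannel create(String spec, String mimeType) {
    throw new UnsupportedOperationException("write path not sketched");
  }
}

The registration would then happen once, before Pipeline.create, reusing the Hadoop configuration from the provided Spark context:

IOChannelUtils.setIOFactory("wasb", new WasbChannelFactory(context.hadoopConfiguration()));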