Currently, I am using the azure-storage-blob and hadoop-azure packages to download files from Blob Storage to the local file system.
...
String url = "https://blob_storage_url";
// Strip everything up to the last '/' to keep only the blob name
String filename = url.replaceFirst("https.*/", "");
// Setup the cloud storage account
String storageConnectionString = "...";
CloudStorageAccount account = CloudStorageAccount.parse(storageConnectionString);
// Create a blob service client
CloudBlobClient blobClient = account.createCloudBlobClient();
// Get a reference to a container
CloudBlobContainer container = blobClient.getContainerReference(containerName);
for (ListBlobItem blobItem : container.listBlobs(filename)) {
    // If the item is a blob, not a virtual directory
    if (blobItem instanceof CloudBlockBlob) {
        // Download the file
        CloudBlockBlob retrievedBlob = (CloudBlockBlob) blobItem;
        retrievedBlob.downloadToFile(filename);
    }
}
...
These downloaded files are actually XML files, and I then have to process the content of each one. For that, I use the spark-xml_2.11 (com.databricks.spark.xml) package.
StructType schema = new StructType()
        .add("attr1", DataTypes.StringType, false)
        .add("attr2", DataTypes.IntegerType, false)
        ... other_structFields_or_structTypes;
Dataset<Row> dataset = sparkSession.read()
        .format("com.databricks.spark.xml")
        .schema(schema)
        .load(filename);
The load() method expects a path (data backed by a local or distributed file system). So, is it possible to load the files directly from Blob Storage?
I found this guide, https://docs.databricks.com/spark/latest/data-sources/azure/azure-storage.html, but the first option, mounting an Azure Blob storage container to DBFS, requires a Databricks cluster.
As for the second option, accessing Azure Blob storage directly, I have already tested setting up the account access key:
sparkSession.sparkContext().hadoopConfiguration().set(
        "fs.azure.account.key.<my-storage-account-name>.blob.core.windows.net",
        "<my-storage-account-access-key>"
);
StructType schema = new StructType()
        .add("attr1", DataTypes.StringType, false)
        .add("attr2", DataTypes.IntegerType, false)
        ... other_structFields_or_structTypes;
Dataset<Row> dataset = sparkSession.read()
        .format("com.databricks.spark.xml")
        .schema(schema)
        .load(filename); // I also tried with the full URL
However, the following exception was thrown:
"java.io.IOException: No FileSystem for scheme: https".
I also tried changing the protocol to wasbs, but a similar exception was thrown again:
"java.io.IOException: No FileSystem for scheme: wasbs".
Any suggestions or comments, please?