I have successfully deployed an Azure Function that I can pass data inputs to and operate on. My goal is to capture those inputs (the table names of the datasets the user needs) and then download those datasets from Blob Storage. For the download part, I have code that successfully downloads a given file from Azure Data Lake when I run the Python locally, but when I put the same code inside the Azure Function, no download is started - I suspect this is because the Azure Function has no reference to a local sink to download the files to.

Is there any way to save the data to a local disk when the SAS URL is constructed and triggered from an Azure Function?
```python
import logging
import os
import webbrowser

import azure.functions as func
from azure.storage.blob import BlobClient


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('The API has initialized and will execute now.')

    # Open the payload sent to this function
    req_body = req.get_json()

    # Save the data request format type
    # dataset_format = req.params.get('dataset_format')
    dataset_format = req_body.get('dataset_format')
    logging.info("******** - Checkpoint 1 - **********")

    # Dataset list passed in as a parameter
    datasets = req_body.get('datasets')
    dataset_1 = datasets[0]
    dataset_2 = datasets[1]
    dataset_3 = datasets[2]

    # Download Option 1 (preference - when executing the SAS URL from a
    # browser, it shows the downloads tab at the bottom of the browser
    # with the downloaded file/s)
    sas_url = "https://apptestdatalake.blob.core.windows.net/**filesystem_name**/**blob_name**/Iris.csv?**sas token**"
    webbrowser.open(sas_url)

    # Download Option 2: open the full file path (directory + filename),
    # not the directory itself
    download_dir = "C:/Users/**user**/Downloads/Requested Downloads/"
    download_file_path = os.path.join(download_dir, "Iris.csv")
    print("\nDownloading blob data to \n\t" + download_file_path)
    os.makedirs(download_dir, exist_ok=True)

    blob_client = BlobClient.from_blob_url(sas_url)
    with open(download_file_path, "wb") as download_file:
        download_stream = blob_client.download_blob().readall()
        download_file.write(download_stream)
    print("Download Complete!")

    logging.info("******** - Checkpoint 2 - **********")
    return func.HttpResponse(
        f"Hello! You've requested the {dataset_1}, {dataset_2}, {dataset_3} "
        f"in {dataset_format}. This script has run successfully and your "
        f"download(s) are complete!"
    )
```