I have the following code for Key Vault to retrieve secrets so that I can use them in a storage account backup. The Key Vault code is shown below:
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

keyvault_name = 'keyvault-link'  # the vault URL
KeyVaultName = "name"
credential = DefaultAzureCredential()
client = SecretClient(vault_url=keyvault_name, credential=credential)
On the other hand, I have the storage account code, which looks like this:
# Imports used by the storage/backup part of the script
from datetime import datetime, timedelta
from azure.storage.blob import BlobServiceClient, BlobClient, generate_account_sas, ResourceTypes, AccountSasPermissions
from azure.cosmosdb.table.tableservice import TableService
from azure.cosmosdb.table.common.models import ListGenerator  # assuming this path for the vendored ListGenerator used in the type hint below
# 'today' (a date suffix used for the backup names) is defined elsewhere in the full script

connection_string = client.get_secret("GATEWAY-Connection-String") # The connection string for the source container
account_key = client.get_secret("GATEWAY-Account-Key") # The account key for the source container
# source_container_name = 'newblob' # Name of container which has blob to be copied
# Table service clients for the source (out) and target (in) storage accounts
table_service_out = TableService(account_name=client.get_secret("GATEWAY-Account-Name-out"), account_key=client.get_secret("GATEWAY-Account-Key-out"))
table_service_in = TableService(account_name=client.get_secret("GATEWAY-Account-Name-in"), account_key=client.get_secret("GATEWAY-Account-Key-in"))
# Create the BlobServiceClient for the source account
client = BlobServiceClient.from_connection_string(connection_string)
all_containers = client.list_containers(include_metadata=True)
for container in all_containers:
    # Create an account SAS token used to read and copy blobs from this container
    sas_token = generate_account_sas(
        account_name = client.account_name,
        account_key = account_key,
        resource_types = ResourceTypes(object=True, container=True),
        permission = AccountSasPermissions(read=True, list=True),
        # start = datetime.now(),
        expiry = datetime.utcnow() + timedelta(hours=24)  # Token valid for 24 hours
    )
print("==========================")
print(container['name'], container['metadata'])
# print("==========================")
container_client = client.get_container_client(container.name)
# print(container_client)
blobs_list = container_client.list_blobs()
for blob in blobs_list:
# Create blob client for source blob
source_blob = BlobClient(
client.url,
container_name = container['name'],
blob_name = blob.name,
credential = sas_token
)
target_connection_string = client.get_secret("GATEWAY-Target-Connection-String")
target_account_key = client.get_secret("GATEWAY-Target-Account-Key")
source_container_name = container['name']
target_blob_name = blob.name
target_destination_blob = container['name'] + today
print(target_blob_name)
# print(blob.name)
target_client = BlobServiceClient.from_connection_string(target_connection_string)
container_client = target_client.get_container_client(target_destination_blob)
if not container_client.exists():
container_client.create_container()
new_blob = target_client.get_blob_client(target_destination_blob, target_blob_name)
new_blob.start_copy_from_url(source_blob.url)
print("COPY TO: " + target_connection_string)
print(f"TRY: saving blob {target_blob_name} into {target_destination_blob} ")
# except:
# # Create new blob and start copy operation.
# new_blob = target_client.get_blob_client(target_destination_blob, target_blob_name)
# new_blob.start_copy_from_url(source_blob.url)
# print("COPY TO: " + target_connection_string)
# print(f"EXCEPT: saving blob {target_blob_name} into {target_destination_blob} ")
# Query items in batches of query_size so we do not load all the data into memory at once
query_size = 1000

# Save the data to storage2 and check whether there is data left in the current table; if so, recurse
def queryAndSaveAllDataBySize(source_table_name, target_table_name, resp_data: ListGenerator, table_out: TableService, table_in: TableService, query_size: int):
    for item in resp_data:
        tb_name = source_table_name + today
        # remove etag and Timestamp appended by table service
        del item.etag
        del item.Timestamp
        print("INSERT data:" + str(item) + " into TABLE: " + tb_name)
        table_in.insert_or_replace_entity(target_table_name, item)
    if resp_data.next_marker:
        data = table_out.query_entities(table_name=source_table_name, num_results=query_size, marker=resp_data.next_marker)
        queryAndSaveAllDataBySize(source_table_name, target_table_name, data, table_out, table_in, query_size)
tbs_out = table_service_out.list_tables()
print(tbs_out)
for tb in tbs_out:
    table = tb.name + today
    # create table with same name in storage2
    table_service_in.create_table(table_name=table, fail_on_exist=False)
    # first query
    data = table_service_out.query_entities(tb.name, num_results=query_size)
    queryAndSaveAllDataBySize(tb.name, table, data, table_service_out, table_service_in, query_size)
It usually fails at this line:
table_service_out = TableService(account_name=client.get_secret("GATEWAY-Account-Name-out"), account_key=client.get_secret("GATEWAY-Account-Key-out"))
The parameters take their values as strings, and retrieving the secret returns it as a string, so I thought this would work, but when I run the code I get the following error (a quick check of what get_secret() actually returns is sketched after the traceback):
Traceback (most recent call last):
  File "/Users/user/Desktop/AzCopy/blob.py", line 1581, in <module>
    table_service_out = TableService(account_name=table_out, account_key=table_out_key)
  File "/Users/user/miniforge3/lib/python3.9/site-packages/azure/cosmosdb/table/tableservice.py", line 173, in __init__
    service_params = _TableServiceParameters.get_service_parameters(
  File "/Users/user/miniforge3/lib/python3.9/site-packages/azure/cosmosdb/table/common/_connection.py", line 116, in get_service_parameters
    params = _ServiceParameters(service,
  File "/Users/user/miniforge3/lib/python3.9/site-packages/azure/cosmosdb/table/common/_connection.py", line 70, in __init__
    self.account_key = self.account_key.strip()
AttributeError: 'KeyVaultSecret' object has no attribute 'strip'
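For reference, this is the quick check I can run to see what the secret lookup actually returns before it is handed to TableService. It is only a minimal sketch: it assumes the same vault URL (keyvault_name) and DefaultAzureCredential as in the first snippet, and secret_client is a fresh name used just for this check so it does not collide with the BlobServiceClient that is also called client above.

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# keyvault_name is the same vault URL as in the first snippet;
# secret_client is a separate client used only for this diagnostic
secret_client = SecretClient(vault_url=keyvault_name, credential=DefaultAzureCredential())
retrieved = secret_client.get_secret("GATEWAY-Account-Key-out")
print(type(retrieved))        # the object that ends up being passed to TableService
print(type(retrieved.value))  # the type of the secret's value attribute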