
I've successfully authenticated with AWS and can upload files using the put_object method on a Bucket object. Now I'd like to use the multipart API to do the same for large files. I found the accepted answer in this question: How to save S3 object to a file using boto3

But when I try to implement it, I get an "unknown method" error. What am I doing wrong? My code is below. Thanks!

from boto3.session import Session

# Get an AWS session
self.awsSession = Session(aws_access_key_id=accessKey,
                          aws_secret_access_key=secretKey,
                          aws_session_token=session_token,
                          region_name=region_type)

...

# Upload the file to S3
s3 = self.awsSession.resource('s3')
s3.Bucket('prodbucket').put_object(Key=fileToUpload, Body=data) # WORKS
#s3.Bucket('prodbucket').upload_file(dataFileName, 'prodbucket', fileToUpload) # DOESN'T WORK
#s3.upload_file(dataFileName, 'prodbucket', fileToUpload) # DOESN'T WORK

2 Answers


The upload_file method hasn't been ported over to the bucket resources yet. For now, you'll need to use the client object directly to do this:

client = self.awsSession.client('s3')
client.upload_file(...)
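
Spelled out, here is a minimal sketch using the question's own variables (assuming dataFileName holds the local path and fileToUpload the target object key). upload_file takes the local filename first, then the bucket name, then the key, and switches to the multipart API automatically for large files; the thresholds can be tuned through TransferConfig (the 8 MB values below are boto3's defaults, shown only for illustration):

from boto3.s3.transfer import TransferConfig

client = self.awsSession.client('s3')

# Files larger than multipart_threshold are uploaded via the
# multipart API in chunks of multipart_chunksize.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024,
                        multipart_chunksize=8 * 1024 * 1024)

# Argument order: local path, bucket name, object key.
client.upload_file(dataFileName, 'prodbucket', fileToUpload, Config=config)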
answered 2015-06-19T16:58:53.293

The Libcloud S3 wrapper transparently handles splitting and uploading all the parts for you.

Use the upload_object_via_stream method to do so:

from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

# Path to a very large file you want to upload
FILE_PATH = '/home/user/myfile.tar.gz'

cls = get_driver(Provider.S3)
driver = cls('api key', 'api secret key')

container = driver.get_container(container_name='my-backups-12345')

# This method blocks until all the parts have been uploaded.
extra = {'content_type': 'application/octet-stream'}

with open(FILE_PATH, 'rb') as iterator:
    obj = driver.upload_object_via_stream(iterator=iterator,
                                          container=container,
                                          object_name='backup.tar.gz',
                                          extra=extra)

For the official documentation on the S3 Multipart feature, please refer to the official AWS blog.
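
If you'd rather drive the multipart API yourself with boto3 instead of going through a wrapper, a minimal sketch looks like the following. The bucket, key, and path names are hypothetical (reused from the example above), and S3 requires every part except the last to be at least 5 MB:

import boto3

client = boto3.client('s3')
# Hypothetical names, for illustration only.
bucket, key, path = 'my-backups-12345', 'backup.tar.gz', '/home/user/myfile.tar.gz'

# 1. Start the multipart upload and remember its UploadId.
mpu = client.create_multipart_upload(Bucket=bucket, Key=key)
parts = []

try:
    # 2. Upload the file in 5 MB chunks (the minimum part size
    #    for every part except the last).
    with open(path, 'rb') as f:
        part_number = 1
        while True:
            chunk = f.read(5 * 1024 * 1024)
            if not chunk:
                break
            response = client.upload_part(Bucket=bucket, Key=key,
                                          PartNumber=part_number,
                                          UploadId=mpu['UploadId'],
                                          Body=chunk)
            parts.append({'ETag': response['ETag'],
                          'PartNumber': part_number})
            part_number += 1

    # 3. Ask S3 to assemble the uploaded parts into the final object.
    client.complete_multipart_upload(Bucket=bucket, Key=key,
                                     UploadId=mpu['UploadId'],
                                     MultipartUpload={'Parts': parts})
except Exception:
    # Abort so the uploaded parts don't keep accruing storage charges.
    client.abort_multipart_upload(Bucket=bucket, Key=key,
                                  UploadId=mpu['UploadId'])
    raise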

answered 2016-02-22T19:26:15.373