
I want to copy a file into an S3 bucket using Python.

For example: I have a bucket named "test". Inside the bucket I have two folders, "dump" and "input". Now I want to copy a file from a local directory to the S3 "dump" folder using Python... can anyone help me?


15 Answers


Try this...

import boto
import boto.s3
import sys
from boto.s3.key import Key

AWS_ACCESS_KEY_ID = ''
AWS_SECRET_ACCESS_KEY = ''

bucket_name = AWS_ACCESS_KEY_ID.lower() + '-dump'
conn = boto.connect_s3(AWS_ACCESS_KEY_ID,
        AWS_SECRET_ACCESS_KEY)


bucket = conn.create_bucket(bucket_name,
    location=boto.s3.connection.Location.DEFAULT)

testfile = "replace this with an actual filename"
print 'Uploading %s to Amazon S3 bucket %s' % \
   (testfile, bucket_name)

def percent_cb(complete, total):
    sys.stdout.write('.')
    sys.stdout.flush()


k = Key(bucket)
k.key = 'my test file'
k.set_contents_from_filename(testfile,
    cb=percent_cb, num_cb=10)

[Update] I am not a Python expert, so thanks for the heads-up about the import statements. Also, I would not recommend placing credentials inside your own source code. If you are running this inside AWS, use IAM credentials with instance profiles (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html), and in your dev/test environment use Hologram from AdRoll (https://github.com/AdRoll/hologram).
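
A minimal sketch of that advice with boto3: credentials are resolved from the instance profile, the environment, or ~/.aws rather than from the source code (the local file, bucket, and key names below are placeholders):

import boto3

# boto3 picks up credentials from the environment, ~/.aws/credentials,
# or an EC2 instance profile, so nothing is hard-coded here.
s3 = boto3.client('s3')

# Placeholders: local file, bucket name ('test'), and target key under 'dump/'.
s3.upload_file('some_local_file.txt', 'test', 'dump/some_local_file.txt')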

Answered 2013-02-26T11:04:56.557
import boto3

s3 = boto3.resource('s3')
BUCKET = "test"

s3.Bucket(BUCKET).upload_file("your/local/file", "dump/file")
Answered 2017-11-03T15:17:29.067

No need to make it that complicated:

import boto
import boto.s3.key

s3_connection = boto.connect_s3()
bucket = s3_connection.get_bucket('your bucket name')
key = boto.s3.key.Key(bucket, 'some_file.zip')
with open('some_file.zip', 'rb') as f:
    key.send_file(f)
Answered 2015-06-29T10:00:18.170

I used this and it is very simple to implement:

import tinys3

conn = tinys3.Connection('S3_ACCESS_KEY','S3_SECRET_KEY',tls=True)

f = open('some_file.zip','rb')
conn.upload('some_file.zip',f,'my_bucket')

https://www.smore.com/labs/tinys3/

Answered 2014-12-24T08:48:46.093

Upload a file to S3 using a session with credentials.

import boto3

session = boto3.Session(
    aws_access_key_id='AWS_ACCESS_KEY_ID',
    aws_secret_access_key='AWS_SECRET_ACCESS_KEY',
)
s3 = session.resource('s3')
# Filename - File to upload
# Bucket - Bucket to upload to (the top level directory under AWS S3)
# Key - S3 object name (can contain subdirectories). If not specified then file_name is used
s3.meta.client.upload_file(Filename='input_file_path', Bucket='bucket_name', Key='s3_output_key')
Answered 2019-02-08T15:22:27.053
from boto3.s3.transfer import S3Transfer
import boto3
#have all the variables populated which are required below
client = boto3.client('s3', aws_access_key_id=access_key,aws_secret_access_key=secret_key)
transfer = S3Transfer(client)
transfer.upload_file(filepath, bucket_name, folder_name+"/"+filename)
Answered 2017-01-31T12:34:05.487

This is a three-liner. Just follow the instructions in the boto3 documentation.

import boto3
s3 = boto3.resource(service_name = 's3')
s3.meta.client.upload_file(Filename = 'C:/foo/bar/baz.filetype', Bucket = 'yourbucketname', Key = 'baz.filetype')

Some important arguments are:

Parameters:

  • Filename (str) -- The path to the file to upload.
  • Bucket (str) -- The name of the bucket to upload to.
  • Key (str) -- The name you want to assign to the file in your S3 bucket. This can be the same as the file name or a different name of your choice, but the file type should stay the same. See the sketch after this list.

    Note: I assume you have saved your credentials in a ~/.aws folder, as suggested in the best configuration practices in the boto3 documentation.
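
For instance, to land the file in the question's "dump" folder, the folder simply becomes part of the Key (a sketch with placeholder file and bucket names):

import boto3

s3 = boto3.resource(service_name='s3')
# The 'dump/' prefix in Key plays the role of the folder inside the bucket.
s3.meta.client.upload_file(Filename='C:/foo/bar/baz.filetype',
                           Bucket='test',
                           Key='dump/baz.filetype')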

Answered 2018-11-29T00:20:09.540

This will also work:

    import os 
    import boto
    import boto.s3.connection
    from boto.s3.key import Key
    
    try:
    
        conn = boto.s3.connect_to_region('us-east-1',
        aws_access_key_id = 'AWS-Access-Key',
        aws_secret_access_key = 'AWS-Secrete-Key',
        # host = 's3-website-us-east-1.amazonaws.com',
        # is_secure=True,               # uncomment if you are not using ssl
        calling_format = boto.s3.connection.OrdinaryCallingFormat(),
        )
    
        bucket = conn.get_bucket('YourBucketName')
        key_name = 'FileToUpload'
        path = 'images/holiday' #Directory Under which file should get upload
        full_key_name = os.path.join(path, key_name)
        k = bucket.new_key(full_key_name)
        k.set_contents_from_filename(key_name)
    
    except Exception as e:
        print(str(e))
        print("error")
    
Answered 2017-02-01T08:19:25.987

Using boto3:

    import logging
    import boto3
    from botocore.exceptions import ClientError
    
    
    def upload_file(file_name, bucket, object_name=None):
        """Upload a file to an S3 bucket
    
        :param file_name: File to upload
        :param bucket: Bucket to upload to
        :param object_name: S3 object name. If not specified then file_name is used
        :return: True if file was uploaded, else False
        """
    
        # If S3 object_name was not specified, use file_name
        if object_name is None:
            object_name = file_name
    
        # Upload the file
        s3_client = boto3.client('s3')
        try:
            response = s3_client.upload_file(file_name, bucket, object_name)
        except ClientError as e:
            logging.error(e)
            return False
        return True
    

More info: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html

Answered 2019-08-22T11:10:50.707
    import boto
    from boto.s3.key import Key
    
    AWS_ACCESS_KEY_ID = ''
    AWS_SECRET_ACCESS_KEY = ''
    END_POINT = ''                          # eg. us-east-1
    S3_HOST = ''                            # eg. s3.us-east-1.amazonaws.com
    BUCKET_NAME = 'test'        
    FILENAME = 'upload.txt'                
    UPLOADED_FILENAME = 'dumps/upload.txt'
    # include folders in file path. If it doesn't exist, it will be created
    
    s3 = boto.s3.connect_to_region(END_POINT,
                               aws_access_key_id=AWS_ACCESS_KEY_ID,
                               aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                               host=S3_HOST)
    
    bucket = s3.get_bucket(BUCKET_NAME)
    k = Key(bucket)
    k.key = UPLOADED_FILENAME
    k.set_contents_from_filename(FILENAME)
    
Answered 2017-03-02T13:23:42.570

For an example of uploading a whole folder, see the code below and the S3 folder screenshot (image not reproduced here); a boto3 variant is sketched after this answer's code.

    import boto
    import boto.s3
    import boto.s3.connection
    import os.path
    import sys    
    
    # Fill in info on data to upload
    # destination bucket name
    bucket_name = 'willie20181121'
    # source directory
    sourceDir = '/home/willie/Desktop/x/'  #Linux Path
    # destination directory name (on s3)
    destDir = '/test1/'   #S3 Path
    
    #max size in bytes before uploading in parts. between 1 and 5 GB recommended
    MAX_SIZE = 20 * 1000 * 1000
    #size of parts when uploading in parts
    PART_SIZE = 6 * 1000 * 1000
    
    access_key = 'MPBVAQ*******IT****'
    secret_key = '11t63yDV***********HgUcgMOSN*****'
    
    conn = boto.connect_s3(
            aws_access_key_id = access_key,
            aws_secret_access_key = secret_key,
            host = '******.org.tw',
            is_secure=False,               # uncomment if you are not using ssl
            calling_format = boto.s3.connection.OrdinaryCallingFormat(),
            )
    bucket = conn.create_bucket(bucket_name,
            location=boto.s3.connection.Location.DEFAULT)
    
    
    uploadFileNames = []
    for (sourceDir, dirname, filename) in os.walk(sourceDir):
        uploadFileNames.extend(filename)
        break
    
    def percent_cb(complete, total):
        sys.stdout.write('.')
        sys.stdout.flush()
    
    for filename in uploadFileNames:
        sourcepath = os.path.join(sourceDir, filename)
        destpath = os.path.join(destDir, filename)
        print ('Uploading %s to Amazon S3 bucket %s' % \
               (sourcepath, bucket_name))
    
        filesize = os.path.getsize(sourcepath)
        if filesize > MAX_SIZE:
            print ("multipart upload")
            mp = bucket.initiate_multipart_upload(destpath)
            fp = open(sourcepath,'rb')
            fp_num = 0
            while (fp.tell() < filesize):
                fp_num += 1
                print ("uploading part %i" %fp_num)
                mp.upload_part_from_file(fp, fp_num, cb=percent_cb, num_cb=10, size=PART_SIZE)
    
            mp.complete_upload()
    
        else:
            print ("singlepart upload")
            k = boto.s3.key.Key(bucket)
            k.key = destpath
            k.set_contents_from_filename(sourcepath,
                    cb=percent_cb, num_cb=10)
    

PS: more reference URLs.
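
If you are on boto3 rather than the legacy boto 2 library, a roughly equivalent sketch of the same folder upload (assuming default credentials and the same placeholder bucket and paths) could look like this; upload_file handles multipart transfers for large files automatically:

    import os
    import boto3

    bucket_name = 'willie20181121'
    sourceDir = '/home/willie/Desktop/x/'   # local source directory
    destDir = 'test1/'                      # key prefix ("folder") on S3

    s3 = boto3.client('s3')
    for name in os.listdir(sourceDir):
        path = os.path.join(sourceDir, name)
        if os.path.isfile(path):
            s3.upload_file(path, bucket_name, destDir + name)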

Answered 2018-12-03T08:23:35.557

If you have the AWS Command Line Interface installed on your system, you can use Python's subprocess library. For example:

    import subprocess
    def copy_file_to_s3(source: str, target: str, bucket: str):
       subprocess.run(["aws", "s3" , "cp", source, f"s3://{bucket}/{target}"])
    

In the same way, you can use this logic for all sorts of AWS client operations, such as downloading or listing files. You can also capture the return value, and there is no need to import boto3. I suppose it is not really meant to be used this way, but in practice I find it quite convenient. This way you also get the upload status displayed in your console, for example:

    Completed 3.5 GiB/3.5 GiB (242.8 MiB/s) with 1 file(s) remaining
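
To actually use the return value mentioned above, the exit code of the CLI call can be checked (a sketch reusing the copy_file_to_s3 shape from this answer):

    import subprocess

    def copy_file_to_s3(source: str, target: str, bucket: str) -> bool:
        # Return True only if the AWS CLI reported success (exit code 0).
        result = subprocess.run(["aws", "s3", "cp", source, f"s3://{bucket}/{target}"])
        return result.returncode == 0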
    

To adapt the method to your needs, I suggest having a look at the subprocess reference as well as the AWS CLI reference.

Note: this is a copy of my answer to a similar question.

Answered 2020-11-06T11:41:40.570

I have something that, in my view, is a bit more organized:

    import boto3
    from pprint import pprint
    from botocore.exceptions import NoCredentialsError
    
    
    class S3(object):
        BUCKET = "test"
        connection = None
    
        def __init__(self):
            try:
                # get_s3_credentials() is the answerer's own helper; it is
                # assumed here to return a dict with the two keys used below.
                creds = get_s3_credentials("aws")
                self.connection = boto3.resource(
                    's3',
                    aws_access_key_id=creds['aws_access_key_id'],
                    aws_secret_access_key=creds['aws_secret_access_key'])
            except Exception as error:
                print(error)
                self.connection = None


        def upload_file(self, file_to_upload_path, file_name):
            if file_to_upload_path is None or file_name is None:
                return False
            try:
                pprint(file_to_upload_path)
                file_name = "your-folder-inside-s3/{0}".format(file_name)
                self.connection.Bucket(self.BUCKET).upload_file(file_to_upload_path,
                                                                file_name)
                print("Upload Successful")
                return True
    
            except FileNotFoundError:
                print("The file was not found")
                return False
    
            except NoCredentialsError:
                print("Credentials not available")
                return False
    
    
    

There are three important variables here: the BUCKET constant, file_to_upload_path, and file_name.

BUCKET: the name of your S3 bucket

file_to_upload_path: the path of the file you want to upload

file_name: the resulting file name and path in your bucket (this is where you add folders or whatever)

There are many ways to do it, but you can reuse this code from another script like this:

    import S3
    
    def some_function():
        S3.S3().upload_file(path_to_file, final_file_name)
    
Answered 2020-06-29T23:31:09.743

You should also specify the content type, to avoid file access issues.

    import os
    import boto3

    image = 'fly.png'
    s3_filestore_path = 'images/fly.png'
    filename, file_extension = os.path.splitext(image)
    content_type_dict = {".png": "image/png", ".html": "text/html",
                         ".css": "text/css", ".js": "application/javascript",
                         ".jpg": "image/jpeg", ".gif": "image/gif",
                         ".jpeg": "image/jpeg"}

    content_type = content_type_dict[file_extension]
    # S3_KEY, S3_SECRET and S3_BUCKET are defined elsewhere in the answerer's code.
    s3 = boto3.client('s3', config=boto3.session.Config(signature_version='s3v4'),
                      region_name='ap-south-1',
                      aws_access_key_id=S3_KEY,
                      aws_secret_access_key=S3_SECRET)
    # Pass the file's contents, not its name, as the object body.
    with open(image, 'rb') as f:
        s3.put_object(Body=f, Bucket=S3_BUCKET, Key=s3_filestore_path, ContentType=content_type)
    
Answered 2020-10-01T06:56:57.500
    # 'etree' and 'listings' come from the answerer's own XML-building code;
    # only the boto imports needed for the S3 calls are added here.
    import boto
    import boto.s3.connection

    xmlstr = etree.tostring(listings,  encoding='utf8', method='xml')
    conn = boto.connect_s3(
            aws_access_key_id = access_key,
            aws_secret_access_key = secret_key,
            # host = '<bucketName>.s3.amazonaws.com',
            host = 'bycket.s3.amazonaws.com',
            #is_secure=False,               # uncomment if you are not using ssl
            calling_format = boto.s3.connection.OrdinaryCallingFormat(),
            )
    conn.auth_region_name = 'us-west-1'
    
    bucket = conn.get_bucket('resources', validate=False)
    key= bucket.get_key('filename.txt')
    key.set_contents_from_string("SAMPLE TEXT")
    key.set_canned_acl('public-read')
    
Answered 2018-11-13T19:59:12.173