
I'm trying to switch over to Amazon S3 to host our Django project's static files. I'm using django, boto, django-storages, and django-compressor. When I run collectstatic on the development server, I get the error

socket.error: [Errno 104] Connection reset by peer 

All my static files together come to 74 MB, which doesn't seem excessively large. Has anyone seen this, or does anyone have any debugging tips?

Here is the full traceback.

Traceback (most recent call last):
  File "./manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 443, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 382, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv
    self.execute(*args, **options.__dict__)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute
    output = self.handle(*args, **options)
  File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 371, in handle
    return self.handle_noargs(**options)
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 163, in handle_noargs
    collected = self.collect()
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 113, in collect
    handler(path, prefixed_path, storage)
  File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 303, in copy_file
    self.storage.save(prefixed_path, source_file)
  File "/usr/local/lib/python2.7/dist-packages/django/core/files/storage.py", line 45, in save
    name = self._save(name, content)
  File "/usr/local/lib/python2.7/dist-packages/storages/backends/s3boto.py", line 392, in _save
    self._save_content(key, content, headers=headers)
  File "/usr/local/lib/python2.7/dist-packages/storages/backends/s3boto.py", line 403, in _save_content
    rewind=True, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1222, in set_contents_from_file
    chunked_transfer=chunked_transfer, size=size)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 714, in send_file
    chunked_transfer=chunked_transfer, size=size)
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 890, in _send_file_internal
    query_args=query_args
  File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 547, in make_request
    retry_handler=retry_handler
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 966, in make_request
    retry_handler=retry_handler)
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 927, in _mexe
    raise e
socket.error: [Errno 104] Connection reset by peer

Update: I never found an answer for how to debug this error, but it later stopped happening on its own, which makes me think it may have been related to something on the S3 side.


4 Answers


tl;dr

If your bucket is not in the default region, you need to tell boto which region to connect to, e.g. if your bucket is in us-west-2 you need to add the following line to settings.py:

 AWS_S3_HOST = 's3-us-west-2.amazonaws.com'
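For context, here is a fuller sketch of the relevant `settings.py` section. This is a minimal example, not a drop-in config: the bucket name, credentials, and `STATIC_URL` value are placeholders, and the backend path shown is the s3boto backend from the django-storages of that era.

```python
# Hedged settings.py sketch -- bucket name, keys, and region are placeholders.
AWS_ACCESS_KEY_ID = 'YOUR-ACCESS-KEY'
AWS_SECRET_ACCESS_KEY = 'YOUR-SECRET-KEY'
AWS_STORAGE_BUCKET_NAME = 'my-bucket'        # a bucket created in us-west-2

# The fix: point boto at the bucket's own regional endpoint so it never
# has to follow a 307 redirect mid-upload.
AWS_S3_HOST = 's3-us-west-2.amazonaws.com'

STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
STATIC_URL = 'https://%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
```

With this in place, `collectstatic` uploads go straight to the regional endpoint instead of the default one.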

Long explanation:

It's not a permission problem and you should not set your bucket permissions to 'Authenticated users'.

This problem happens if you create your bucket in a region which is not the default one (in my case I was using us-west-2).

If you don't use the default region and you don't tell boto in which region your bucket resides, boto will connect to the default region and S3 will reply with a 307 redirect to the region where the bucket belongs.

Unfortunately, due to this bug in boto:

https://github.com/boto/boto/issues/2207

if the 307 reply arrives before boto has finished uploading the file, boto won't see the redirect and will keep uploading to the default region. Eventually S3 closes the socket, resulting in a 'Connection reset by peer'.

It's a kind of race condition which depends on the size of the object being uploaded and the speed of your internet connection, which explains why it happens randomly.
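To make the fix explicit, here is a small hypothetical helper (the name `s3_host_for_region` is mine, not boto's) that maps a bucket's region to the hostname you would put in `AWS_S3_HOST`. The special case reflects that the classic us-east-1 ("US Standard") region uses the plain endpoint with no region suffix:

```python
def s3_host_for_region(region):
    """Return the S3 endpoint hostname for a given region.

    us-east-1 (the default, 'US Standard') uses the plain endpoint;
    every other region gets a region-suffixed hostname of the form
    used by boto at the time, s3-<region>.amazonaws.com.
    """
    if region in (None, '', 'us-east-1'):
        return 's3.amazonaws.com'
    return 's3-%s.amazonaws.com' % region
```

With boto you can discover the region via `bucket.get_location()` (which returns an empty string for US Standard) and feed the result to a helper like this.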

There are two possible reasons why the OP stopped seeing the error after some time:

- He later created a new bucket in the default region, and the problem went away by itself.
- He started uploading only small files, which are fast enough to be fully uploaded by the time S3 replies with the 307.
Answered 2016-08-18T13:46:58.623

This is a problem that occurs the first time you create a new bucket: you have to wait a few minutes or hours before you can start uploading. I don't know why S3 behaves this way. To see for yourself, create a new bucket and point your django-storages configuration at it; when you try to upload anything from your Django project you will see 'Connection reset by peer', but wait a few minutes or hours, try again, and it will work. Repeat the same steps and see.

Answered 2016-09-22T07:19:31.137

I ran into this while trying to set up a second S3 bucket for use in testing/development, and deploying an older version of the app helped.

I don't know why this helps, but for anyone reading this later (as I was, a few hours ago), it's worth trying to deploy a different version of the app.

Answered 2015-08-28T21:48:17.770

You have to set the bucket permissions to the Authenticated Users list + Upload/Delete, or you can create a specific user in Amazon's IAM section and grant access only to that user.

This helped me a while ago: Setting up S3 for Django

Answered 2015-01-21T11:31:31.163