I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my bucket. I select a bucket, hit delete, confirm the delete in the popup, and... nothing happens. Is there another tool I should use?
23 Answers
It's finally possible to delete all the files in one go using the new Lifecycle (expiration) rules feature. You can even do it from the AWS console.
Simply right-click on the bucket name in the AWS console, select "Properties", then in the row of tabs at the bottom of the page select "Lifecycle" and "Add rule". Create a lifecycle rule with the "Prefix" field set blank (blank means all files in the bucket, or you could set it to "a" to delete only files whose names begin with "a"). Set the "Days" field to "1". That's it. Done. Assuming the files are more than one day old, they should all get deleted, and then you can delete the bucket.
I only just tried this for the first time, so I'm still waiting to see how quickly the files get deleted (it wasn't instant, but presumably should happen within 24 hours) and whether I get billed for one delete command or 50 million delete commands... fingers crossed!
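For reference, the rule the console builds corresponds to an S3 lifecycle configuration document. A rough sketch (the rule ID is arbitrary, and an empty Prefix matches every key in the bucket):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>expire-everything</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```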
Keep in mind that S3 buckets need to be empty before they can be deleted. The good news is that most 3rd party tools automate this process. If you are running into problems with S3Fox, I suggest trying S3FM for a GUI or S3Sync for the command line. Amazon has a great article describing how to use S3Sync. After setting up your variables, the key command is
./s3cmd.rb deleteall <your bucket name>
Deleting buckets with lots of individual files tends to crash a lot of S3 tools, because they try to display a list of all the files in the directory. You need a way to delete in batches. The best GUI tool I've found for this purpose is Bucket Explorer. It deletes files from an S3 bucket in chunks of 1000 and does not crash when trying to open large buckets the way s3Fox and S3FM do.
I've also found a few scripts you can use for this purpose. I haven't tried them yet, but they look pretty straightforward.
Ruby
require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id => 'your access key',
  :secret_access_key => 'your secret key'
)

bucket = AWS::S3::Bucket.find('the bucket name')

while(!bucket.empty?)
  begin
    puts "Deleting objects in bucket"
    bucket.objects.each do |object|
      object.delete
      puts "There are #{bucket.objects.size} objects left in the bucket"
    end
    puts "Done deleting objects"
  rescue SocketError
    puts "Had socket error"
  end
end
PERL
#!/usr/bin/perl
use Net::Amazon::S3;

my $aws_access_key_id = 'your access key';
my $aws_secret_access_key = 'your secret access key';
my $increment = 50; # 50 at a time
my $bucket_name = 'bucket_name';

my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
                               aws_secret_access_key => $aws_secret_access_key,
                               retry => 1, });
my $bucket = $s3->bucket($bucket_name);

print "Incrementally deleting the contents of $bucket_name\n";

my $deleted = 1;
my $total_deleted = 0;
while ($deleted > 0) {
    print "Loading up to $increment keys...\n";
    my $response = $bucket->list({'max-keys' => $increment, })
        or die $s3->err . ": " . $s3->errstr . "\n";
    $deleted = scalar(@{ $response->{keys} });
    $total_deleted += $deleted;
    print "Deleting $deleted keys ($total_deleted total)...\n";
    foreach my $key ( @{ $response->{keys} } ) {
        my $key_name = $key->{key};
        $bucket->delete_key($key_name) or die $s3->err . ": " . $s3->errstr . "\n";
    }
}

print "Deleting bucket...\n";
$bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
print "Done.\n";
Source: Tarkblog
Hope this helps!
Recent versions of s3cmd have --recursive
e.g.,
~/$ s3cmd rb --recursive s3://bucketwithfiles
With s3cmd: Create a new empty directory, then:
s3cmd sync --delete-removed empty_directory s3://yourbucket
This may be a bug in S3Fox, because it is generally able to delete items recursively. However, I'm not sure if I've ever tried to delete a whole bucket and its contents at once.
The JetS3t project, as mentioned by Stu, includes a Java GUI applet you can easily run in a browser to manage your S3 buckets: Cockpit. It has both strengths and weaknesses compared to S3Fox, but there's a good chance it will help you deal with your troublesome bucket. Though it will require you to delete the objects first, then the bucket.
Disclaimer: I'm the author of JetS3t and Cockpit
SpaceBlock also makes it simple to delete s3 buckets - right-click the bucket, delete, wait for the job to complete in the transfer view, done.
This is the free and open source windows s3 front-end that I maintain, so shameless plug alert, etc.
If you have ruby (and rubygems) installed, install the aws-s3 gem with
gem install aws-s3
or
sudo gem install aws-s3
Create a file delete_bucket.rb:
require "rubygems" # optional
require "aws/s3"
AWS::S3::Base.establish_connection!(
  :access_key_id     => 'access_key_id',
  :secret_access_key => 'secret_access_key')

AWS::S3::Bucket.delete("bucket_name", :force => true)
and run it:
ruby delete_bucket.rb
Since Bucket#delete returned timeout exceptions a lot for me, I expanded the script:
require "rubygems" # optional
require "aws/s3"

AWS::S3::Base.establish_connection!(
  :access_key_id     => 'access_key_id',
  :secret_access_key => 'secret_access_key')

while AWS::S3::Bucket.find("bucket_name")
  begin
    AWS::S3::Bucket.delete("bucket_name", :force => true)
  rescue
  end
end
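The empty rescue above retries forever. A bounded variant gives up after a fixed number of attempts and backs off a little between tries; this is a plain-Ruby sketch (the helper name and its parameters are mine, not part of aws-s3):

```ruby
# Retry a flaky operation (e.g. the forced bucket delete above) a bounded
# number of times, sleeping a little longer after each failure. Returns the
# number of attempts it took to succeed; re-raises after max_attempts failures.
def with_retries(max_attempts: 5, base_sleep: 1)
  attempts = 0
  begin
    attempts += 1
    yield
    attempts
  rescue StandardError
    raise if attempts >= max_attempts
    sleep(base_sleep * attempts) if base_sleep > 0
    retry
  end
end
```

Wrapping the delete call in `with_retries { AWS::S3::Bucket.delete("bucket_name", :force => true) }` would keep the retry behaviour without risking an infinite loop.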
If you use amazon's console and need to clear out a bucket as a one-off: browse to your bucket, select the top key, scroll to the bottom, hold shift on your keyboard and click the bottom key. It will select everything in between, and you can then right-click and delete.
I've implemented bucket-destroy, a multithreaded utility that does everything required to delete a bucket. It handles non-empty buckets, as well as version-enabled bucket keys.
You can read the blog post here http://bytecoded.blogspot.com/2011/01/recursive-delete-utility-for-version.html and the instructions here http://code.google.com/p/bucket-destroy/
I've successfully deleted a bucket that contained double '//' in key names, versioned keys, and DeleteMarker keys. Currently I'm running it on a bucket containing ~40,000,000 objects; so far I've been able to delete 1,200,000 in a few hours on an m1.large. Note that the utility is multithreaded but does not (yet) implement shuffling (which would allow horizontal scaling, launching the utility on several machines).
One technique that can be used to avoid this problem is putting all objects in a "folder" in the bucket, letting you just delete the folder and then delete the bucket. Additionally, the s3cmd tool available from http://s3tools.org can be used to delete a bucket with files in it:
s3cmd rb --force s3://bucket-name
I guess the easiest way would be to use S3fm, a free online file manager for Amazon S3. No applications to install, no third-party web site registrations. It runs directly from Amazon S3, secure and convenient.
Just select your bucket and hit delete.
This is a hard problem. My solution is at http://stuff.mit.edu/~jik/software/delete-s3-bucket.pl.txt. It describes all of the things I've determined can go wrong, in a comment at the top. Here's the current version of the script (if I change it, I'll put a new version at the URL, but probably not here).
#!/usr/bin/perl
# Copyright (c) 2010 Jonathan Kamens.
# Released under the GNU General Public License, Version 3.
# See <http://www.gnu.org/licenses/>.
# $Id: delete-s3-bucket.pl,v 1.3 2010/10/17 03:21:33 jik Exp $
# Deleting an Amazon S3 bucket is hard.
#
# * You can't delete the bucket unless it is empty.
#
# * There is no API for telling Amazon to empty the bucket, so you have to
# delete all of the objects one by one yourself.
#
# * If you've recently added a lot of large objects to the bucket, then they
# may not all be visible yet on all S3 servers. This means that even after the
# server you're talking to thinks all the objects are all deleted and lets you
# delete the bucket, additional objects can continue to propagate around the S3
# server network. If you then recreate the bucket with the same name, those
# additional objects will magically appear in it!
#
# It is not clear to me whether the bucket delete will eventually propagate to
# all of the S3 servers and cause all the objects in the bucket to go away, but
# I suspect it won't. I also suspect that you may end up continuing to be
# charged for these phantom objects even though the bucket they're in is no
# longer even visible in your S3 account.
#
# * If there's a CR, LF, or CRLF in an object name, then it's sent just that
# way in the XML that gets sent from the S3 server to the client when the
# client asks for a list of objects in the bucket. Unfortunately, the XML
# parser on the client will probably convert it to the local line ending
# character, and if it's different from the character that's actually in the
# object name, you then won't be able to delete it. Ugh! This is a bug in the
# S3 protocol; it should be enclosing the object names in CDATA tags or
# something to protect them from being munged by the XML parser.
#
# Note that this bug even affects the AWS Web Console provided by Amazon!
#
# * If you've got a whole lot of objects and you serialize the delete process,
# it'll take a long, long time to delete them all.
use threads;
use strict;
use warnings;
# Keys can have newlines in them, which screws up the communication
# between the parent and child processes, so use URL encoding to deal
# with that.
use CGI qw(escape unescape); # Easiest place to get this functionality.
use File::Basename;
use Getopt::Long;
use Net::Amazon::S3;
my $whoami = basename $0;
my $usage = "Usage: $whoami [--help] --access-key-id=id --secret-access-key=key
--bucket=name [--processes=#] [--wait=#] [--nodelete]
Specify --processes to indicate how many deletes to perform in
parallel. You're limited by RAM (to hold the parallel threads) and
bandwidth for the S3 delete requests.
Specify --wait to indicate seconds to require the bucket to be verified
empty. This is necessary if you create a huge number of objects and then
try to delete the bucket before they've all propagated to all the S3
servers (I've seen a huge backlog of newly created objects take *hours* to
propagate everywhere). See the comment at the top of the script for more
information about this issue.
Specify --nodelete to empty the bucket without actually deleting it.\n";
my($aws_access_key_id, $aws_secret_access_key, $bucket_name, $wait);
my $procs = 1;
my $delete = 1;
die if (! GetOptions(
"help" => sub { print $usage; exit; },
"access-key-id=s" => \$aws_access_key_id,
"secret-access-key=s" => \$aws_secret_access_key,
"bucket=s" => \$bucket_name,
"processes=i" => \$procs,
"wait=i" => \$wait,
"delete!" => \$delete,
));
die if (! ($aws_access_key_id && $aws_secret_access_key && $bucket_name));
my $increment = 0;
print "Incrementally deleting the contents of $bucket_name\n";
$| = 1;
my(@procs, $current);
for (1..$procs) {
my($read_from_parent, $write_to_child);
my($read_from_child, $write_to_parent);
pipe($read_from_parent, $write_to_child) or die;
pipe($read_from_child, $write_to_parent) or die;
threads->create(sub {
close($read_from_child);
close($write_to_child);
my $old_select = select $write_to_parent;
$| = 1;
select $old_select;
&child($read_from_parent, $write_to_parent);
}) or die;
close($read_from_parent);
close($write_to_parent);
my $old_select = select $write_to_child;
$| = 1;
select $old_select;
push(@procs, [$read_from_child, $write_to_child]);
}
my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
aws_secret_access_key => $aws_secret_access_key,
retry => 1,
});
my $bucket = $s3->bucket($bucket_name);
my $deleted = 1;
my $total_deleted = 0;
my $last_start = time;
my($start, $waited);
while ($deleted > 0) {
$start = time;
print "\nLoading ", ($increment ? "up to $increment" :
"as many as possible")," keys...\n";
my $response = $bucket->list({$increment ? ('max-keys' => $increment) : ()})
or die $s3->err . ": " . $s3->errstr . "\n";
$deleted = scalar(@{ $response->{keys} }) ;
if (! $deleted) {
if ($wait and ! $waited) {
my $delta = $wait - ($start - $last_start);
if ($delta > 0) {
print "Waiting $delta second(s) to confirm bucket is empty\n";
sleep($delta);
$waited = 1;
$deleted = 1;
next;
}
else {
last;
}
}
else {
last;
}
}
else {
$waited = undef;
}
$total_deleted += $deleted;
print "\nDeleting $deleted keys($total_deleted total)...\n";
$current = 0;
foreach my $key ( @{ $response->{keys} } ) {
my $key_name = $key->{key};
while (! &send(escape($key_name) . "\n")) {
print "Thread $current died\n";
die "No threads left\n" if (@procs == 1);
if ($current == @procs-1) {
pop @procs;
$current = 0;
}
else {
$procs[$current] = pop @procs;
}
}
$current = ($current + 1) % @procs;
threads->yield();
}
print "Sending sync message\n";
for ($current = 0; $current < @procs; $current++) {
if (! &send("\n")) {
print "Thread $current died sending sync\n";
if ($current == @procs-1) {
pop @procs;
last;
}
$procs[$current] = pop @procs;
$current--;
}
threads->yield();
}
print "Reading sync response\n";
for ($current = 0; $current < @procs; $current++) {
if (! &receive()) {
print "Thread $current died reading sync\n";
if ($current == @procs-1) {
pop @procs;
last;
}
$procs[$current] = pop @procs;
$current--;
}
threads->yield();
}
}
continue {
$last_start = $start;
}
if ($delete) {
print "Deleting bucket...\n";
$bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
print "Done.\n";
}
sub send {
my($str) = @_;
my $fh = $procs[$current]->[1];
print($fh $str);
}
sub receive {
my $fh = $procs[$current]->[0];
scalar <$fh>;
}
sub child {
my($read, $write) = @_;
threads->detach();
my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
aws_secret_access_key => $aws_secret_access_key,
retry => 1,
});
my $bucket = $s3->bucket($bucket_name);
while (my $key = <$read>) {
if ($key eq "\n") {
print($write "\n") or die;
next;
}
chomp $key;
$key = unescape($key);
if ($key =~ /[\r\n]/) {
my(@parts) = split(/\r\n|\r|\n/, $key, -1);
my(@guesses) = shift @parts;
foreach my $part (@parts) {
@guesses = (map(($_ . "\r\n" . $part,
$_ . "\r" . $part,
$_ . "\n" . $part), @guesses));
}
foreach my $guess (@guesses) {
if ($bucket->get_key($guess)) {
$key = $guess;
last;
}
}
}
$bucket->delete_key($key) or
die $s3->err . ": " . $s3->errstr . "\n";
print ".";
threads->yield();
}
return;
}
I'm one of the developers on the Bucket Explorer team. We provide different options for deleting a bucket, depending on the user's choice: 1) Quick Delete - this option deletes your data from the bucket in chunks of 1000. 2) Permanent Delete - this option deletes objects via a queue.
One more shameless plug: I got tired of waiting for individual HTTP delete requests when I had to delete 250,000 items, so I wrote a Ruby script that does it multithreaded and completes in a fraction of the time:
http://github.com/sfeley/s3nuke/
This runs much faster in Ruby 1.9 because of the way threads are handled.
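The idea behind s3nuke can be sketched in a few lines of plain Ruby: a shared Queue of keys and a pool of worker threads pulling from it, with the block standing in for the per-key HTTP DELETE (the names here are mine, not s3nuke's):

```ruby
# Minimal worker-pool sketch: each worker pops keys off a shared, thread-safe
# queue and "deletes" them via the supplied block until it sees a :stop sentinel.
def parallel_delete(keys, workers: 4)
  queue = Queue.new
  keys.each { |k| queue << k }
  workers.times { queue << :stop }          # one sentinel per worker
  threads = Array.new(workers) do
    Thread.new do
      while (key = queue.pop) != :stop
        yield key                           # stand-in for the S3 DELETE request
      end
    end
  end
  threads.each(&:join)
end
```

Because each delete is an independent HTTP round trip, the wall-clock win comes almost entirely from overlapping network latency, which is why more threads help even under Ruby's GIL.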
I wrote a script in Python that successfully removed my 9000 objects. See this page:
Amazon recently added a new feature, "Multi-Object Delete", which allows up to 1,000 objects to be deleted with a single API request. This should simplify the process of deleting huge numbers of files from a bucket.
The documentation for the new feature is available here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/DeletingMultipleObjects.html
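Client code still has to chunk its key list to stay under the 1,000-key limit. Here's a minimal sketch of that batching logic in plain Ruby, with the block standing in for the actual Multi-Object Delete API call (no AWS library involved):

```ruby
# Split a key list into batches that respect the Multi-Object Delete limit of
# 1,000 keys per request. The block is a stand-in for the real DeleteObjects
# call; returns the number of requests that would be issued.
MAX_KEYS_PER_REQUEST = 1000

def delete_in_batches(keys)
  requests = 0
  keys.each_slice(MAX_KEYS_PER_REQUEST) do |batch|
    yield batch            # one Multi-Object Delete request per batch
    requests += 1
  end
  requests
end
```

For example, 2,500 keys would go out as three requests (1000 + 1000 + 500), instead of 2,500 individual DELETEs.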
I always ended up using their C# API and little scripts to do this. I'm not sure why S3Fox can't do it, but that functionality appears to be broken within it at the moment. I'm sure many of the other S3 tools can do it as well, though.
Delete all of the objects in the bucket first. Then you can delete the bucket itself.
Apparently it's not possible to delete a bucket with objects in it, and S3Fox does not do this for you.
I've had other little issues with S3Fox myself, like this one, and now use a Java-based tool, jets3t, which is more forthcoming about error conditions. There must be others, too.
This is what I use. Just simple ruby code.
case bucket.size
when 0
  puts "Nothing left to delete"
when 1..1000
  bucket.objects.each do |item|
    item.delete
    puts "Deleting - #{bucket.size} left"
  end
end
I'll have to have a look at some of these alternative file managers. I've used (and liked) BucketExplorer, which you can get from - surprisingly - http://www.bucketexplorer.com/.
It's a 30-day free trial, then (currently) costing US$49.99 per licence (US$49.95 on the purchase cover page).
Try https://s3explorer.appspot.com/ to manage your S3 account.
Use the amazon web management console, with Google Chrome for speed. It deleted the objects a lot faster than firefox (about 10 times faster). I had 60 000 objects to delete.