I've pasted below the module I cobbled together to get this done. It works, but there are still a few things I'd improve if it were important enough. I've just left my thoughts in the code.
require "fileutils"
# IDEA: I think it would make more sense to create another module
# which I could mix into Job for copying attachments. Really, the
# logic for iterating over attachments should be in Job. That way,
# this class could become a more generalized class for copying
# files whether we are on local or remote storage.
#
# The only problem with that is that I would like to not create
# a new connection to AWS every time I copy a file. If I do then
# I could be opening loads of connections if I iterate over an
# array and copy each item. Once I get that part fixed, this
# refactoring should definitely happen.
module UploadCopier
# Take a job which is a reprint (ie. it's original_id
# is set to the id of another job) and copy all of
# the original jobs remote files over for the reprint
# to use.
#
# Otherwise, if a user edits the reprints attachment
# files, the files of the original job would also be
# changed in the process.
def self.copy_attachments_for(reprint)
case storage
when :file
UploadCopier::LocalUploadCopier.copy_attachments_for(reprint)
when :fog
UploadCopier::S3UploadCopier.copy_attachments_for(reprint)
end
end
# IDEA: Create another method which takes a block. This method
# can check which storage system we're using and then call
# the block and pass in the reprint. Would DRY this up a bit more.
def self.copy(old_path, new_path)
case storage
when :file
UploadCopier::LocalUploadCopier.copy(old_path, new_path)
when :fog
UploadCopier::S3UploadCopier.copy(old_path, new_path)
end
end
def self.storage
# HACK: I should ask CarrierWave what method to use
# rather than relying on the config variable.
APP_CONFIG[:carrierwave][:storage].to_sym
end
class S3UploadCopier
# Copy the originals of a certain job's attachments over
# to a location associated with the reprint.
def self.copy_attachments_for(reprint)
reprint.attachments.each do |attachment|
orig_path = attachment.original_full_storage_path
# We can pass :fog in here without checking because
# we know it's :fog since we're in the S3UploadCopier.
new_path = attachment.full_storage_path
copy(orig_path, new_path)
end
end
# Copy a file from one place to another within a bucket.
def self.copy(old_path, new_path)
# INFO: http://goo.gl/lmgya
object_at(old_path).copy_to(new_path)
end
private
def self.object_at(path)
bucket.objects[path]
end
# IDEA: THis will be more flexible if I go through
# Fog when I open the connection to the remote storage.
# My credentials are already configured there anyway.
# Get the current s3 bucket currently in use.
def self.bucket
s3 = AWS::S3.new(access_key_id: APP_CONFIG[:aws][:access_key_id],
secret_access_key: APP_CONFIG[:aws][:secret_access_key])
s3.buckets[APP_CONFIG[:fog_directory]]
end
end
# This will only be used in development when uploads are
# stored on the local file system.
class LocalUploadCopier
# Copy the originals of a certain job's attachments over
# to a location associated with the reprint.
def self.copy_attachments_for(reprint)
reprint.attachments.each do |attachment|
# We have to pass :file in here since the default is :fog.
orig_path = attachment.original_full_storage_path
new_path = attachment.full_storage_path(:file)
copy(orig_path, new_path)
end
end
# Copy a file from one place to another within the
# local filesystem.
def self.copy(old_path, new_path)
FileUtils.mkdir_p(File.dirname(new_path))
FileUtils.cp(old_path, new_path)
end
end
end
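For the second IDEA comment, the block-taking method could look something like the sketch below. This is only a rough cut under my own interpretation: I have the method yield the storage-specific copier class rather than the reprint, and the with_copier name and COPIERS hash are mine, nothing that exists in the app.

module UploadCopier
  # Map the configured storage to the class that knows how to copy
  # files on that storage. fetch raises on an unknown setting,
  # instead of silently doing nothing like the case statements do.
  COPIERS = { file: LocalUploadCopier, fog: S3UploadCopier }

  # Check which storage system we're using and yield the matching
  # copier class to the block.
  def self.with_copier
    yield COPIERS.fetch(storage)
  end

  def self.copy_attachments_for(reprint)
    with_copier { |copier| copier.copy_attachments_for(reprint) }
  end

  def self.copy(old_path, new_path)
    with_copier { |copier| copier.copy(old_path, new_path) }
  end
end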
I use the module like this:
# Have to save the record first because it needs to have a DB ID.
if @cloned_job.save
  UploadCopier.copy_attachments_for(@cloned_job)
end
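The first IDEA comment, moving the attachment iteration into Job and not opening a fresh AWS connection per file, might shake out roughly like the sketch below. Everything here is an assumption on my part: CopiesAttachments is a hypothetical module name, Job may or may not be an ActiveRecord model, I'm assuming full_storage_path accepts either :file or :fog (the comments in the listing suggest it does), and memoizing the bucket is just one way to reuse a single connection.

# Hypothetical mixin so the iteration over attachments lives with
# the model and UploadCopier only copies individual files.
module CopiesAttachments
  def copy_attachments_from_original
    attachments.each do |attachment|
      UploadCopier.copy(attachment.original_full_storage_path,
                        attachment.full_storage_path(UploadCopier.storage))
    end
  end
end

class Job # reopening the existing model
  include CopiesAttachments
end

class UploadCopier::S3UploadCopier
  # Memoize the bucket on the class so copying many files reuses
  # one AWS connection instead of opening a new one per file.
  def self.bucket
    @bucket ||= AWS::S3.new(
      access_key_id: APP_CONFIG[:aws][:access_key_id],
      secret_access_key: APP_CONFIG[:aws][:secret_access_key]
    ).buckets[APP_CONFIG[:fog_directory]]
  end
end

With something like that in place, the caller above could become @cloned_job.copy_attachments_from_original instead of going through UploadCopier directly. One caveat: @bucket is a class-level instance variable, so it lives for the rest of the process; clearing or rebuilding it would be needed if the credentials ever change at runtime.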