
The background to this question is a virtual file system I am developing. It is built around the concept of virtual path providers for different storage types, i.e. the local file system, Dropbox, and Amazon S3. My virtual file base class looks like this:

public abstract class CommonVirtualFile : VirtualFile {
    public virtual string Url {
        get { throw new NotImplementedException(); }
    }
    public virtual string LocalPath {
        get { throw new NotImplementedException(); }
    }
    public override Stream Open() {
        throw new NotImplementedException();
    }
    public virtual Stream Open(FileMode fileMode) {
        throw new NotImplementedException();
    }
    protected CommonVirtualFile(string virtualPath) : base(virtualPath) { }
}

The implementation of the second Open method is where my problem lies. My local file system implementation, which saves a file to disk, looks like this:

public override Stream Open(FileMode fileMode) {
    return new FileStream("The_Path_To_The_File_On_Disk", fileMode);
}

Saving a file to the local file system then looks like this:

    const string virtualPath = "/assets/newFile.txt";
    var file = HostingEnvironment.VirtualPathProvider.GetFile(virtualPath) as CommonVirtualFile;
    if (file == null) {
        var virtualDir = VirtualPathUtility.GetDirectory(virtualPath);
        var directory = HostingEnvironment.VirtualPathProvider.GetDirectory(virtualDir) as CommonVirtualDirectory;
        file = directory.CreateFile(VirtualPathUtility.GetFileName(virtualPath));
    }
    byte[] fileContent;
    using (var fileStream = new FileStream(@"c:\temp\fileToCopy.txt", FileMode.Open, FileAccess.Read)) {
        fileContent = new byte[fileStream.Length];
        fileStream.Read(fileContent, 0, fileContent.Length);
    }
    // write the content to the local file system
    using (Stream stream = file.Open(FileMode.Create)) {
        stream.Write(fileContent, 0, fileContent.Length);
    }

What I want is that if I switch to my Amazon S3 virtual path provider, this code should work unchanged. So, to sum up: how can I solve this using the Amazon S3 SDK, and how should I implement the Open(FileMode fileMode) method in my Amazon S3 virtual path provider?
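One possible shape for such an Open implementation, sketched here purely as an illustration: return a stream that buffers writes in memory and uploads the content to S3 when the caller disposes it, so the saving code above works unchanged. The class name `S3WriteStream`, the field names, and the use of the synchronous `TransferUtility.Upload(Stream, bucketName, key)` overload from the AWS SDK for .NET are all assumptions, not part of the question.

```csharp
using System.IO;
using Amazon.S3.Transfer; // AWS SDK for .NET (AWSSDK package)

// Hypothetical sketch: buffers writes in memory and pushes the content
// to S3 when the caller's using-block disposes the stream.
public class S3WriteStream : MemoryStream
{
    private readonly TransferUtility _transferUtility;
    private readonly string _bucketName;
    private readonly string _key;
    private bool _uploaded;

    public S3WriteStream(TransferUtility transferUtility,
                         string bucketName, string key)
    {
        _transferUtility = transferUtility;
        _bucketName = bucketName;
        _key = key;
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing && !_uploaded)
        {
            _uploaded = true;
            Position = 0;
            // The MemoryStream is still readable here because
            // base.Dispose has not run yet.
            _transferUtility.Upload(this, _bucketName, _key);
        }
        base.Dispose(disposing);
    }
}
```

An S3-backed CommonVirtualFile could then return such a stream from Open(FileMode.Create), and a downloaded object stream for FileMode.Open. Whether buffering whole files in memory is acceptable depends on their size; the answer below avoids that by uploading in parts.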


1 Answer


Hey, I also struggled with this problem, and I solved it by implementing my own stream.

Here is how I did it; maybe it helps:

public static Stream OpenStream(S3TransferUtility transferUtility, string key)
    {                     
        byte[] buffer  = new byte[Buffersize + Buffersize/2];

        S3CopyMemoryStream s3CopyStream =
            new S3CopyMemoryStream(key, buffer, transferUtility)
            .WithS3CopyFileStreamEvent(CreateMultiPartS3Blob);

        return s3CopyStream;
    }

My stream, with its constructor, overrides the Close and Write(array, offset, count) methods and uploads the stream to Amazon S3 in parts.

public class S3CopyMemoryStream : MemoryStream
    {

        public S3CopyMemoryStream WithS3CopyFileStreamEvent(StartUploadS3CopyFileStreamEvent doing)
        {
            S3CopyMemoryStream s3CopyStream = new S3CopyMemoryStream(this._key, this._buffer, this._transferUtility);

            // Wire the supplied delegate to the new stream instead of discarding it.
            s3CopyStream.StartUploadS3FileStreamEvent = doing;

            return s3CopyStream;
        }

        public S3CopyMemoryStream(string key, byte[] buffer, S3TransferUtility transferUtility)
            : base(buffer)
        {
            if (buffer.LongLength > int.MaxValue)
                throw new ArgumentException("The length of the buffer may not be longer than int.MaxValue", "buffer");

            InitiatingPart = true;
            EndOfPart = false;
            WriteCount = 1;
            PartETagCollection = new List<PartETag>();

            _buffer = buffer;
            _key = key;
            _transferUtility = transferUtility;
        }

The StartUploadS3FileStreamEvent event triggers the calls that initiate the multipart upload, upload the parts, and complete the upload.

Alternatively, you can implement a simpler FileStream, because then you can use

TransferUtilityUploadRequest request =
            new TransferUtilityUploadRequest()
            .WithAutoCloseStream(false).WithBucketName(
                transferUtility.BucketName)
                .WithKey(key)
                .WithPartSize(stream.PartSize)
                .WithInputStream(stream) as TransferUtilityUploadRequest;

        transferUtility.Upload(request);

in the Close method of the overridden FileStream. The drawback is that you have to write the entire data to disk before you can upload it.
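That temp-file variant could look roughly like this. This is a hedged sketch of the approach described above, not the answerer's actual code: the class name, the explicit bucketName parameter, and the synchronous `TransferUtility.Upload(filePath, bucketName, key)` overload of the AWS SDK for .NET are assumptions.

```csharp
using System.IO;
using Amazon.S3.Transfer; // AWS SDK for .NET (AWSSDK package)

// Hypothetical sketch: write everything to a temp file first, then
// upload that file to S3 when the stream is closed.
public class S3TempFileStream : FileStream
{
    private readonly TransferUtility _transferUtility;
    private readonly string _bucketName;
    private readonly string _key;
    private bool _uploaded;

    public S3TempFileStream(TransferUtility transferUtility,
                            string bucketName, string key)
        : base(Path.GetTempFileName(), FileMode.Create, FileAccess.ReadWrite)
    {
        _transferUtility = transferUtility;
        _bucketName = bucketName;
        _key = key;
    }

    public override void Close()
    {
        base.Close(); // flush all pending data to the temp file first
        if (!_uploaded)
        {
            _uploaded = true;
            // Name still holds the temp file's path after closing.
            _transferUtility.Upload(Name, _bucketName, _key);
            File.Delete(Name);
        }
    }
}
```

The guard against double upload matters because Close may run more than once (Dispose also calls it).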

Answered 2012-11-30T09:53:05.347