
I'm trying to create a page for uploading images to the Google App Engine blobstore. I'm using angularjs and ng-flow to do this.

The upload part seems to work fine, except that all blobs are stored as "application/octet-stream" and named "blob". How do I get the blobstore to recognize the filename and content type?

This is the code I use to upload the files.

Inside FlowEventsCtrl:

$scope.$on('flow::filesSubmitted', function (event, $flow, files) {
    // fetch a fresh upload URL from the server, point Flow at it, then upload
    $http.get('/files/upload/create').then(function (resp) {
        $flow.opts.target = resp.data.url;
        $flow.upload();
    });
});

Inside view.html:

<div flow-init="{testChunks:false, singleFile:true}" 
     ng-controller="FlowEventsCtrl">
    <div class="panel">
        <span flow-btn>Upload File</span>
    </div>
    <div class="show-files">...</div>
</div>

The server side is as specified in the blobstore documentation.
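Concretely, that boils down to something like the sketch below (handler names and the '/files/upload' success path are my assumptions, not the exact code):

import json

import webapp2
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers


class CreateUploadUrlHandler(webapp2.RequestHandler):
    def get(self):
        # hand the client a one-off Blobstore upload URL
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(json.dumps({
            'url': blobstore.create_upload_url('/files/upload')
        }))


class FileUploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # Blobstore has already stored the blob; get_uploads returns its BlobInfo
        upload = self.get_uploads('file')[0]
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(json.dumps({'key': str(upload.key())}))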

Thanks


1 Answer


I have solved my problem, and in hindsight the answer seems obvious. Flow.js and a Blobstore upload URL do different things. I'll leave my explanation below for anyone who makes the same naive mistake I did.

Blobstore expects a single field that contains the file. That field carries the filename and content type of the uploaded data, and its contents are stored in the blobstore as the file. By default this field is named "file".
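For contrast, this is roughly what a Blobstore upload URL expects to receive (an illustration of mine, not part of the app):

import requests  # illustration only; any multipart-capable HTTP client works


def upload_whole_file(upload_url, path):
    # One multipart field named 'file' carrying the whole file; Blobstore
    # takes the stored filename and content type from this field.
    with open(path, 'rb') as fh:
        return requests.post(upload_url, files={
            'file': (path, fh, 'image/jpeg'),
        })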

Flow uploads the data in chunks and sends a number of extra fields for the filename and other metadata. The actual chunk data is uploaded in a field whose filename is given as "blob" and whose content type is "application/octet-stream". The server is expected to store the chunks and reassemble them into the file. Because a chunk is only part of the file and not the whole file, it is neither named after the file nor given the file's content type. By default this field is also named "file".
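To make that concrete, a throwaway debug handler (hypothetical, not part of the solution) can dump what a single Flow chunk request carries:

import logging

import webapp2


class DumpFlowChunkHandler(webapp2.RequestHandler):
    def post(self):
        # Typical keys: flowChunkNumber, flowChunkSize, flowCurrentChunkSize,
        # flowTotalSize, flowIdentifier, flowFilename, flowTotalChunks, plus
        # the chunk payload itself under 'file' (filename 'blob',
        # content type 'application/octet-stream').
        for name in self.request.params:
            logging.info('%s = %r', name, self.request.params.get(name))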

So the answer to the question is: the files were stored as "application/octet-stream" and named "blob" because I was storing the chunks rather than the actual files. The reason I was able to store anything at all appears to be that both use the same default field name.

The solution, therefore, was to write my own handler for the Flow requests:

import json

import webapp2


class ImageUploadHandler(webapp2.RequestHandler):
    def post(self):
        chunk_number = int(self.request.params.get('flowChunkNumber'))
        chunk_size = int(self.request.params.get('flowChunkSize'))
        current_chunk_size = int(self.request.params.get('flowCurrentChunkSize'))
        total_size = int(self.request.params.get('flowTotalSize'))
        total_chunks = int(self.request.params.get('flowTotalChunks'))
        identifier = str(self.request.params.get('flowIdentifier'))
        filename = str(self.request.params.get('flowFilename'))
        data = self.request.params.get('file')  # the chunk payload field (named 'blob')

        f = ImageFile(filename, identifier, total_chunks, chunk_size, total_size)
        f.write_chunk(chunk_number, current_chunk_size, data)

        if f.ready_to_build():
            info = f.build()
            if info:
                self.response.headers['Content-Type'] = 'application/json'
                self.response.out.write(json.dumps(info.as_dict()))
            else:
                self.error(500)
        else:
            self.response.headers['Content-Type'] = 'application/json'
            self.response.out.write(json.dumps({
                'chunkNumber': chunk_number,
                'chunkSize': chunk_size,
                'message': 'Chunk ' + str(chunk_number) + ' written'
            }))

where ImageFile is a class that writes to Google Cloud Storage.
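Wiring the handler into the app then looks roughly like this (the route path is an assumption; the Angular controller only needs /files/upload/create to return this handler's URL as resp.data.url):

app = webapp2.WSGIApplication([
    ('/files/upload/flow', ImageUploadHandler),
])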

Edit:

Below is the ImageFile class. The only thing missing is the ImageInfo class, a simple model that stores the generated serving URL together with the filename; a rough sketch of it follows first.
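Something like this would do for that model, assuming ndb (the as_dict() helper mirrors how ImageUploadHandler uses it; the exact properties are my guess):

from google.appengine.ext import ndb


class ImageInfo(ndb.Model):
    name = ndb.StringProperty()   # original filename
    url = ndb.StringProperty()    # serving URL from images.get_serving_url()

    def as_dict(self):
        return {'name': self.name, 'url': self.url}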

import logging
import os

import cloudstorage as gcs  # GoogleAppEngineCloudStorageClient
from google.appengine.api import app_identity, images
from google.appengine.ext import blobstore

log = logging.getLogger(__name__)  # module-level logger used below


class ImageFile:
    def __init__(self, filename, identifier, total_chunks, chunk_size, total_size):
        self.bucket_name = os.environ.get('BUCKET_NAME', app_identity.get_default_gcs_bucket_name())
        self.original_filename = filename
        self.filename = '/' + self.bucket_name + '/' + self.original_filename
        self.identifier = identifier
        self.total_chunks = total_chunks
        self.chunk_size = chunk_size
        self.total_size = total_size
        self.stat = None
        self.chunks = []
        self.load_stat()
        self.load_chunks(identifier, total_chunks)

    def load_stat(self):
        try:
            self.stat = gcs.stat(self.filename)
        except gcs.NotFoundError:
            self.stat = None

    def load_chunks(self, identifier, number_of_chunks):
        for n in range(1, number_of_chunks + 1):
            self.chunks.append(Chunk(self.bucket_name, identifier, n))

    def exists(self):
        return not not self.stat

    def content_type(self):
        if self.filename.lower().endswith('.jpg'):
            return 'image/jpeg'
        elif self.filename.lower().endswith('.jpeg'):
            return 'image/jpeg'
        elif self.filename.lower().endswith('.png'):
            return 'image/png'
        elif self.filename.lower().endswith('.gif'):
            return 'image/gif'
        else:
            return 'binary/octet-stream'

    def ready(self):
        return self.exists() and self.stat.st_size == self.total_size

    def ready_chunks(self):
        for c in self.chunks:
            if not c.exists():
                return False
        return True

    def delete_chunks(self):
        for c in self.chunks:
            c.delete()

    def ready_to_build(self):
        return not self.ready() and self.ready_chunks()

    def write_chunk(self, chunk_number, current_chunk_size, data):
        chunk = self.chunks[int(chunk_number) - 1]
        chunk.write(current_chunk_size, data)

    def build(self):
        try:
            log.info('File \'' + self.filename + '\': assembling chunks.')
            write_retry_params = gcs.RetryParams(backoff_factor=1.1)
            gcs_file = gcs.open(self.filename,
                                'w',
                                content_type=self.content_type(),
                                options={'x-goog-meta-identifier': self.identifier},
                                retry_params=write_retry_params)
            for c in self.chunks:
                log.info('Writing chunk ' + str(c.chunk_number) + ' of ' + str(self.total_chunks))
                c.write_on(gcs_file)
            gcs_file.close()
        except Exception as e:
            log.error('File \'' + self.filename + '\': Error during assembly - ' + str(e))
        else:
            self.delete_chunks()
            key = blobstore.create_gs_key('/gs' + self.filename)
            url = images.get_serving_url(key)
            info = ImageInfo(name=self.original_filename, url=url)
            info.put()
            return info

The Chunk class:

class Chunk:
    def __init__(self, bucket_name, identifier, chunk_number):
        self.chunk_number = chunk_number
        self.filename = '/' + bucket_name + '/' + identifier + '-chunk-' + str(chunk_number)
        self.stat = None
        self.load_stat()

    def load_stat(self):
        try:
            self.stat = gcs.stat(self.filename)
        except gcs.NotFoundError:
            self.stat = None

    def exists(self):
        return not not self.stat

    def write(self, size, data):
        write_retry_params = gcs.RetryParams(backoff_factor=1.1)
        gcs_file = gcs.open(self.filename, 'w', retry_params=write_retry_params)
        for c in data.file:
            gcs_file.write(c)
        gcs_file.close()
        self.load_stat()

    def write_on(self, stream):
        gcs_file = gcs.open(self.filename)

        try:
            data = gcs_file.read()
            while data:
                stream.write(data)
                data = gcs_file.read()
        except gcs.Error as e:
            log.error('Error writing data to chunk: ' + str(e))
        finally:
            gcs_file.close()

    def delete(self):
        try:
            gcs.delete(self.filename)
            self.stat = None
        except gcs.NotFoundError:
            pass