
To be specific, this question is about how to raise or remove the stated quota, not about how to work more efficiently within the existing quota limit.

When running a MapReduce job on GAE, I hit the quota limit listed below. The limit is 100 GB per day of "File Bytes Received", which as far as I can tell is the number of file bytes received from the Blobstore. Increasing my budget had no effect on the 100 GB/day limit. I would like the limit removed entirely, with the ability to pay for what I use.

Output from the logs:

The API call file.Open() required more quota than is available.
Traceback (most recent call last):
  File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1511, in __call__
    rv = self.handle_exception(request, response, e)
  File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
    rv = self.router.dispatch(request, response)
  File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
    return route.handler_adapter(request, response)
  File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
    return handler.dispatch()
  File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 547, in dispatch
    return self.handle_exception(e, self.app.debug)
  File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
    return method(*args, **kwargs)
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/base_handler.py", line 68, in post
    self.handle()
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/handlers.py", line 168, in handle
    for entity in input_reader:
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/mapreduce_pipeline.py", line 109, in __iter__
    for binary_record in super(_ReducerReader, self).__iter__():
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/input_readers.py", line 1615, in __iter__
    record = self._reader.read()
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/records.py", line 335, in read
    (chunk, record_type) = self.__try_read_record()
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/records.py", line 292, in __try_read_record
    header = self.__reader.read(HEADER_LENGTH)
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 569, in read
    with open(self._filename, 'r') as f:
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 436, in open
    exclusive_lock=exclusive_lock)
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 269, in __init__
    self._open()
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 393, in _open
    self._make_rpc_call_with_retry('Open', request, response)
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 397, in _make_rpc_call_with_retry
    _make_call(method, request, response)
  File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 243, in _make_call
    rpc.check_success()
  File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 558, in check_success
    self.__rpc.CheckSuccess()
  File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 133, in CheckSuccess
    raise self.exception
OverQuotaError: The API call file.Open() required more quota than is available.

2 Answers


It looks like you need to talk to Google directly: the quota page links to a form for requesting a quota increase: http://support.google.com/code/bin/request.py?&contact_type=AppEngineCPURequest

Answered 2012-04-26T20:26:48.853

I ran into this error as well. We are using App Engine's experimental backup feature, which in turn runs a MapReduce to back up all our App Engine data to Google Cloud Storage. The backup currently fails with the following error:

OverQuotaError: The API call file.Open() required more quota than is available.

In the quota dashboard we see:

Other Quotas With Warnings
These quotas are only shown when they have warnings
File Bytes Sent 100%    107,374,182,400 of 107,374,182,400  Limited

So apparently we hit a hidden quota, "File Bytes Sent". It isn't documented anywhere, and there was no way for us to know we would hit it... For now we are stuck.

Answered 2014-09-02T12:26:37.543