To be specific, this question is about how to raise or remove a particular quota, not about how to work more efficiently within the existing quota limits.
When running a MapReduce job on GAE, I hit the quota limit listed below: 100GB per day of "File Bytes Received", which as far as I can tell is the number of file bytes received from the Blobstore. Increasing my budget has no effect on the 100GB/day limit. I would like the limit removed entirely and to be able to pay for what I actually use.
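For reference, the job is a standard MapreducePipeline along roughly these lines (a minimal sketch only; the handler names, blob key, and shard count are placeholders, not the actual job):

    # Sketch of the pipeline; main.my_map / main.my_reduce and blob_key are
    # placeholders. The mapper reads its input from the Blobstore, and the
    # shuffle/reduce stages re-read intermediate files through the Files API,
    # which is where the file.Open() calls in the traceback below come from.
    from mapreduce import base_handler
    from mapreduce import mapreduce_pipeline

    class MyJobPipeline(base_handler.PipelineBase):
        def run(self, blob_key):
            yield mapreduce_pipeline.MapreducePipeline(
                "my_job",
                "main.my_map",      # mapper function, emits (key, value) pairs
                "main.my_reduce",   # reducer function, yields output records
                "mapreduce.input_readers.BlobstoreLineInputReader",
                "mapreduce.output_writers.BlobstoreOutputWriter",
                mapper_params={"blob_keys": blob_key},
                reducer_params={"mime_type": "text/plain"},
                shards=16)

As the traceback shows, it is the reducer's read of the intermediate files (via _ReducerReader and file.Open()) that runs out of quota, which appears to count against "File Bytes Received" on top of the original Blobstore input.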
Output from the logs:
The API call file.Open() required more quota than is available.
Traceback (most recent call last):
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1511, in __call__
rv = self.handle_exception(request, response, e)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 547, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/base_handler.py", line 68, in post
self.handle()
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/handlers.py", line 168, in handle
for entity in input_reader:
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/mapreduce_pipeline.py", line 109, in __iter__
for binary_record in super(_ReducerReader, self).__iter__():
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/input_readers.py", line 1615, in __iter__
record = self._reader.read()
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/records.py", line 335, in read
(chunk, record_type) = self.__try_read_record()
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/records.py", line 292, in __try_read_record
header = self.__reader.read(HEADER_LENGTH)
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 569, in read
with open(self._filename, 'r') as f:
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 436, in open
exclusive_lock=exclusive_lock)
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 269, in __init__
self._open()
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 393, in _open
self._make_rpc_call_with_retry('Open', request, response)
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 397, in _make_rpc_call_with_retry
_make_call(method, request, response)
File "/base/data/home/apps/s~utest-appgraph/69.358421800203055451/mapreduce/lib/files/file.py", line 243, in _make_call
rpc.check_success()
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 558, in check_success
self.__rpc.CheckSuccess()
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 133, in CheckSuccess
raise self.exception
OverQuotaError: The API call file.Open() required more quota than is available.