It seems GAE assigns very high IDs to my models. When I download my entities, some of the entries come back with very large IDs; these were auto-generated in the first place. Downloading them in CSV format is no problem, but deleting the existing data and re-uploading the same data raises an exception:

Exceeded maximum allocated IDs

Traceback:

Traceback (most recent call last):
  File "/opt/eclipse/plugins/org.python.pydev_2.7.5.2013052819/pysrc/pydevd.py", line 1397, in <module>
    debugger.run(setup['file'], None, None)
  File "/opt/eclipse/plugins/org.python.pydev_2.7.5.2013052819/pysrc/pydevd.py", line 1090, in run
    pydev_imports.execfile(file, globals, locals) #execute the script
  File "/home/kave/workspace/google_appengine/appcfg.py", line 171, in <module>
    run_file(__file__, globals())
  File "/home/kave/workspace/google_appengine/appcfg.py", line 167, in run_file
    execfile(script_path, globals_)
  File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 4247, in <module>
    main(sys.argv)
  File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 4238, in main
    result = AppCfgApp(argv).Run()
  File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 2396, in Run
    self.action(self)
  File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 3973, in __call__
    return method()
  File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 3785, in PerformUpload
    run_fn(args)
  File "/home/kave/workspace/google_appengine/google/appengine/tools/appcfg.py", line 3676, in RunBulkloader
    sys.exit(bulkloader.Run(arg_dict))
  File "/home/kave/workspace/google_appengine/google/appengine/tools/bulkloader.py", line 4379, in Run
    return _PerformBulkload(arg_dict)
  File "/home/kave/workspace/google_appengine/google/appengine/tools/bulkloader.py", line 4244, in _PerformBulkload
    loader.finalize()
  File "/home/kave/workspace/google_appengine/google/appengine/ext/bulkload/bulkloader_config.py", line 384, in finalize
    self.increment_id(high_id_key)
  File "/home/kave/workspace/google_appengine/google/appengine/tools/bulkloader.py", line 1206, in IncrementId
    unused_start, end = datastore.AllocateIds(high_id_key, max=high_id_key.id())
  File "/home/kave/workspace/google_appengine/google/appengine/api/datastore.py", line 1965, in AllocateIds
    return AllocateIdsAsync(model_key, size, **kwargs).get_result()
  File "/home/kave/workspace/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 612, in get_result
    return self.__get_result_hook(self)
  File "/home/kave/workspace/google_appengine/google/appengine/datastore/datastore_rpc.py", line 1863, in __allocate_ids_hook
    self.check_rpc_success(rpc)
  File "/home/kave/workspace/google_appengine/google/appengine/datastore/datastore_rpc.py", line 1236, in check_rpc_success
    raise _ToDatastoreError(err)
google.appengine.api.datastore_errors.BadRequestError: Exceeded maximum allocated IDs
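
The last application-level frame above (IncrementId) is the call that actually fails: when the upload finishes, the bulkloader asks the datastore to reserve every ID up to the highest one it saw in the uploaded data. A rough standalone repro of that step, assuming a hypothetical kind name Album and one of the large IDs mentioned below (it has to run inside the App Engine environment, e.g. a remote_api shell):

from google.appengine.api import datastore
from google.appengine.api import datastore_errors

# 'Album' is a stand-in kind name; the ID is one of the large auto-generated ones.
high_id_key = datastore.Key.from_path('Album', 4948283361329150)

try:
    # Reserve the range up to high_id so later auto-allocated IDs cannot collide.
    # A scattered ID like this lies far above the ceiling the sequential
    # allocator accepts, so the RPC is rejected.
    datastore.AllocateIds(high_id_key, max=high_id_key.id())
except datastore_errors.BadRequestError as e:
    print e  # Exceeded maximum allocated IDs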

Usually my IDs are around 26002, but the new ones from the last few days are like 4948283361329150, and those are what cause the problem. (If I change them to lower values everything works fine, but I didn't generate these IDs in the first place.) Why does GAE have trouble with IDs it generated itself?
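
For context, none of these IDs are assigned by application code; the keys come from the datastore's automatic ID allocation, which in newer SDK releases apparently hands out large "scattered" values instead of small sequential ones. A minimal sketch of how such an entity gets its key (the kind name Album and its property are made up):

from google.appengine.ext import ndb

class Album(ndb.Model):          # hypothetical kind, standing in for the real model
    title = ndb.StringProperty()

key = Album(title='x').put()     # no ID supplied, so the datastore picks one
print key.id()                   # used to be small (e.g. 26002); with scattered
                                 # auto-IDs it can be around 4948283361329150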

Many thanks

1 Answer

This is a known issue; it is fixed in SDK 1.8.2 and later.

Note that if you use the bulkloader against the development app server, those SDKs (1.8.2, 1.8.3) unfortunately have a separate bulkloader issue in that use case (see appcfg-py-upload-data-fails-in-google-app-engine-sdk-1-8-2), but not against production.
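
If you are not sure which release you have installed, the SDK ships a VERSION file at its root (at least the Linux download does); a quick check, reusing the install path from the traceback above:

# Path taken from the traceback; adjust it for your own machine.
with open('/home/kave/workspace/google_appengine/VERSION') as f:
    print f.read()  # the "release:" line should show 1.8.2 or later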

Answered 2013-08-09T01:50:58.247