
When querying BigQuery through the Python API, we use:

service.jobs().getQueryResults

We found that the first attempt works well - all the expected results are included in the response. However, if the query is run a second time shortly after the first (roughly within 5 minutes), a small subset of the results (a power of 2) is returned almost immediately, with no errors.

See our complete code: https://github.com/sean-schaefer/pandas/blob/master/pandas/io/gbq.py

Any ideas what could cause this?


1 Answer


It looks like the problem is that we return different default numbers of rows for query() and getQueryResults(). So depending on whether your query finished quickly (and you therefore didn't have to use getQueryResults()), you would get more or fewer rows.

I've filed a bug; we should have a fix soon.

The workaround (and a good idea overall) is to set maxResults on both the query and the getQueryResults calls. If you want a lot of rows, you'll probably want to page through the results using the page token that is returned.
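The paging loop described above can be sketched generically. This is a minimal illustration, not BigQuery's API: `fetch_page` is a hypothetical callable that, for BigQuery, would wrap `jobs().getQueryResults()` with `maxResults` set explicitly and return the rows plus the next `pageToken`.

```python
def read_all_rows(fetch_page, max_rows=None):
    """Collect rows by following page tokens until the server is exhausted.

    fetch_page(page_token) must return (rows, next_page_token); a
    next_page_token of None means there are no further pages.
    """
    rows = []
    page_token = None
    while True:
        page_rows, page_token = fetch_page(page_token)
        rows.extend(page_rows)
        # Stop early once we have as many rows as the caller asked for.
        if max_rows is not None and len(rows) >= max_rows:
            return rows[:max_rows]
        # No token means the server has sent everything.
        if not page_token:
            return rows
```

The same loop works for any token-paginated API; only the `fetch_page` adapter changes.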

Here is an example that reads one page of data from a completed query job. It will be included in the next release of bq.py:

class _JobTableReader(_TableReader):
  """A TableReader that reads from a completed job."""

  def __init__(self, local_apiclient, project_id, job_id):
    self.job_id = job_id
    self.project_id = project_id
    self._apiclient = local_apiclient

  def ReadSchemaAndRows(self, max_rows=None):
    """Read at most max_rows rows from a table and the schema.

    Args:
      max_rows: maximum number of rows to return.

    Raises:
      BigqueryInterfaceError: when bigquery returns something unexpected.

    Returns:
      A tuple where the first item is the list of fields and the
      second item a list of rows.
    """
    page_token = None
    rows = []
    schema = {}
    max_rows = max_rows if max_rows is not None else sys.maxint
    while len(rows) < max_rows:
      (more_rows, page_token, total_rows, current_schema) = self._ReadOnePage(
          max_rows=max_rows - len(rows),
          page_token=page_token)
      if not schema and current_schema:
        schema = current_schema.get('fields', {})

      max_rows = min(max_rows, total_rows)
      for row in more_rows:
        rows.append([entry.get('v', '') for entry in row.get('f', [])])
      if not page_token and len(rows) != max_rows:
        raise BigqueryInterfaceError(
            'PageToken missing for %r' % (self,))
      if not more_rows and len(rows) != max_rows:
        raise BigqueryInterfaceError(
            'Not enough rows returned by server for %r' % (self,))
    return (schema, rows)

  def _ReadOnePage(self, max_rows, page_token=None):
    data = self._apiclient.jobs().getQueryResults(
        maxResults=max_rows,
        pageToken=page_token,
        # Sets the timeout to 0 because we assume the table is already ready.
        timeoutMs=0,
        projectId=self.project_id,
        jobId=self.job_id).execute()
    if not data['jobComplete']:
      raise BigqueryError('Job %s is not done' % (self,))
    page_token = data.get('pageToken', None)
    total_rows = int(data['totalRows'])
    schema = data.get('schema', None)
    rows = data.get('rows', [])
    return (rows, page_token, total_rows, schema)
Answered 2013-07-19T21:02:06.967