I'm having some trouble getting a row count from a temporary Hive table. I'm not sure exactly what is causing the error, since the same set of queries returns the expected results when run against a smaller test cluster. I only see this when running against the large Hive cluster.
The code looks something like this:
with hive.connect() as conn:
    conn.execute("CREATE TEMPORARY TABLE new_users (uuid String)")
    conn.execute("""INSERT INTO new_users (uuid)
                    SELECT uuid FROM big_user_table WHERE <some conditions>""")
    resp = conn.execute("""SELECT COUNT(*) FROM
        (SELECT DISTINCT uuid FROM new_users) new_usrs""").fetchone()
I've tried a few variations to get the count, but it is actually the .fetchone()
call that raises the error.
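For reference, here is the shape of the count query factored into a hypothetical helper (not part of my actual code; the table and column names are just placeholders), which is the form I've been varying:

```python
# Hypothetical helper (not in my real code): builds the same count
# statement so the query shape is visible in one place.
def build_count_query(table: str, column: str = "uuid") -> str:
    # Same shape as the failing query: DISTINCT values in a subquery,
    # then COUNT(*) over the result.
    return (
        f"SELECT COUNT(*) FROM "
        f"(SELECT DISTINCT {column} FROM {table}) t"
    )

print(build_count_query("new_users"))
```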
If anyone wants the full Hive stack trace I can add it, but for now here is just the Python side:
File "/home/ec2-user/myproject/report.py", line 88, in run_metrics
(SELECT DISTINCT uuid FROM new_users) new_usrs""").fetchone()
File "/home/ec2-user/.local/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 1276, in fetchone
e, None, None, self.cursor, self.context
File "/home/ec2-user/.local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1466, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/home/ec2-user/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 383, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/ec2-user/.local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 128, in reraise
raise value.with_traceback(tb)
File "/home/ec2-user/.local/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 1268, in fetchone
row = self._fetchone_impl()
File "/home/ec2-user/.local/lib/python3.7/site-packages/sqlalchemy/engine/result.py", line 1148, in _fetchone_impl
return self.cursor.fetchone()
File "/home/ec2-user/.local/lib/python3.7/site-packages/pyhive/common.py", line 105, in fetchone
self._fetch_while(lambda: not self._data and self._state != self._STATE_FINISHED)
File "/home/ec2-user/.local/lib/python3.7/site-packages/pyhive/common.py", line 45, in _fetch_while
self._fetch_more()
File "/home/ec2-user/.local/lib/python3.7/site-packages/pyhive/hive.py", line 387, in _fetch_more
_check_status(response)
File "/home/ec2-user/.local/lib/python3.7/site-packages/pyhive/hive.py", line 495, in _check_status
raise OperationalError(response)
The final Hive error mentions a Premature EOF:
'org.apache.hadoop.hive.ql.exec.FetchOperator:getNextRow:FetchOperator.java:459'], sqlState=None, errorCode=0, errorMessage='java.io.IOException: java.io.EOFException: Premature EOF from inputStream'), hasMoreRows=None, results=None)
Given the number of large SELECT/INSERT queries that run successfully before this COUNT, I find it hard to believe this is a memory issue, but I have no other ideas at the moment.
Thanks.