I had the Spark-to-HAWQ JDBC connection working, but now, two days later, I'm having trouble pulling data out of a table. Nothing has changed in the Spark configuration...
Simple step #1 - print the schema of a simple table in HAWQ. I can create a SQLContext DataFrame and connect to the HAWQ db:
df = sqlContext.read.format('jdbc').options(url=db_url, dbtable=db_table).load()
df.printSchema()
which prints:
root
|-- product_no: integer (nullable = true)
|-- name: string (nullable = true)
|-- price: decimal (nullable = true)
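For completeness, here is roughly how the connection is set up. This is a sketch only: the host, port, database, credentials, and table name are placeholders (not my real values), and passing the explicit driver option is my own assumption based on the postgresql classes in the trace further down.

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="hawq-jdbc-check")
sqlContext = SQLContext(sc)

# Placeholder connection string and table name - substitute real values.
db_url = "jdbc:postgresql://<hawq-master>:5432/<database>?user=<user>&password=<password>"
db_table = "products"  # toy 3-column table: product_no, name, price

df = (sqlContext.read.format('jdbc')
      .options(url=db_url, dbtable=db_table, driver='org.postgresql.Driver')
      .load())
df.printSchema()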
But when I actually try to pull the data:
df.select("product_no").show()
I get these errors...
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost):
org.postgresql.util.PSQLException: ERROR: could not write 3124 bytes to temporary file: No space left on device (buffile.c:408) (seg33 adnpivhdwapda04.gphd.local:40003 pid=544124) (cdbdisp.c:1571)
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2182)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1911)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:173)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:615)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:465)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:350)
at org.apache.spark.sql.jdbc.JDBCRDD$$anon$1.<init>(JDBCRDD.scala:372)
at org.apache.spark.sql.jdbc.JDBCRDD.compute(JDBCRDD.scala:350)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:248)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:208)
Things I've tried (but am willing to try again if there are more precise steps):
- Ran "df -i" on the HAWQ master node; inode utilization is only 1%
- Ran a dbvacuum on the HAWQ database (VACUUM ALL is not recommended on HAWQ)
- Tried creating a tiny new database (with a single 3-column table); no luck
This can't possibly be an actual lack of space, so where is it happening and what is causing it?
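One narrowing step I'm considering next (a sketch only - the subquery-as-dbtable form is just the standard Spark JDBC way to push a query down, and the table/column names mirror the toy table above): if even a 10-row LIMIT fails with the same temporary-file error, that would point at the segment host (seg33 on adnpivhdwapda04) running out of space for its temp/spill files rather than at anything on the Spark side.

# Push a small query down to HAWQ instead of scanning the whole table.
# 'products' / 'product_no' mirror the toy table above; adjust as needed.
tiny_query = "(SELECT product_no FROM products LIMIT 10) AS t"
df_small = (sqlContext.read.format('jdbc')
            .options(url=db_url, dbtable=tiny_query, driver='org.postgresql.Driver')
            .load())
df_small.show()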