
Coming from an MS-SQL environment into a HIVE environment that also has Spark access. I'm trying to use RStudio and R (and occasionally python via rPython) to replace some of the things I used to do with T-SQL, as well as a whole bunch of things I've never done before.

For this to work, I need to be able to read from and write to the HIVE DB.

I've connected with spark using the R package sparklyr, and I can connect to our HIVE cluster using the R package DBI over that spark connection and pull data into R data frames just fine:

library(sparklyr)
library(DBI)
sc <- spark_connect(master = "yarn-client", spark_home = "/usr/hdp/current/spark-client", config = config)
result3 <- dbGetQuery(sc, "select * from sampledb.sampletable limit 100")

The code above works every time. I can also use dbGetQuery to create tables in the database via a quoted SQL statement without any problem, so it isn't a write-permissions issue.

However, when I try to write data from an R data frame back to the HIVE cluster like this:

dbWriteTable(conn = sc, name = "sampledb.rsparktest3", value = result3)

it runs without errors, but the table never shows up and I can't query it.

If I then try to write the table again, I get this error:

> dbWriteTable(conn = sc, name = "sampledb.rsparktest3", value = result3)
Error in .local(conn, name, value, ...) : 
Table sampledb.rsparktest3 already exists

Any ideas what might be going on? Is there a better way to do this than DBI?

Thanks in advance for your help!

Below is the entire RStudio console log from running these statements:

> result3 <- dbGetQuery(sc, "select * from sampledb.sampletable limit 100")
> dbWriteTable(conn = sc, name = "sampledb.rsparktest3", value = result3)
> result3y <- dbGetQuery(sc, "select * from sampledb.rsparktest3 limit 2")
Error: org.apache.spark.sql.AnalysisException: Table not found: sampledb.rsparktest3; line 1 pos 35
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:54)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:50)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:121)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:120)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:120)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:120)
at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:50)
at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:44)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sparklyr.Invoke$.invoke(invoke.scala:102)
at sparklyr.StreamHandler$.handleMethodCall(stream.scala:97)
at sparklyr.StreamHandler$.read(stream.scala:62)
at sparklyr.BackendHandler.channelRead0(handler.scala:52)
at sparklyr.BackendHandler.channelRead0(handler.scala:14)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:745)
> dbWriteTable(conn = sc, name = "sampledb.rsparktest3", value = result3)
Error in .local(conn, name, value, ...) : 
Table sampledb.rsparktest3 already exists

2 Answers


With a sparklyr connection, use spark_write_table instead of dbWriteTable to write back to Hive. dbWriteTable registers the data only as a temporary table within the Spark session, which would explain why the second write reports that the table already exists even though a SQL query against the metastore can't find it.
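
A minimal sketch of that approach, reusing the connection sc and the data frame result3 from the question; the intermediate name rsparktest3_tmp is made up for illustration, and a database-qualified table name is assumed to be accepted:

library(sparklyr)

# spark_write_table operates on Spark DataFrames, so first copy the
# local R data frame into the Spark session.
result3_tbl <- copy_to(sc, result3, "rsparktest3_tmp", overwrite = TRUE)

# Persist the Spark DataFrame as a table in the Hive metastore;
# mode = "overwrite" replaces the table if it already exists.
spark_write_table(result3_tbl, "sampledb.rsparktest3", mode = "overwrite")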

Answered 2017-09-02T19:06:34.133

Writing a Spark table to Hive using Sparklyr:

Load a local data frame into Spark:

# copy_to and sdf_copy_to are equivalent ways to load a local data
# frame into the Spark session:
iris_spark_table <- copy_to(sc, iris, overwrite = TRUE)
# iris_spark_table <- sdf_copy_to(sc, iris, overwrite = TRUE)

Create the table in Hive (prefix it with the database name if necessary):

DBI::dbGetQuery(sc, "create table iris_hive as SELECT * FROM iris_spark_table")
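
As a quick check, the new table can be read back over the same connection (a usage sketch based on the names above):

# Read a few rows back from the new Hive table to verify it persisted.
iris_check <- DBI::dbGetQuery(sc, "SELECT * FROM iris_hive LIMIT 5")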
Answered 2018-02-01T16:19:20.160