
I am trying to connect to an HBase database from Windows.

The cluster is secured using Kerberos authentication, and my application fails to connect to the HBase database.

Currently we build a jar file each time and run it on the cluster, but that is not practical for development and debugging.

How do I put hbase-site.xml on the classpath?

I downloaded the *-site.xml files and tried adding hbase-site.xml, core-site.xml and hdfs-site.xml to a source folder, and I also tried adding the folder containing these files as an external class folder in the project build path, but neither worked. How do I make this work?
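As a minimal sketch of what I would expect to work, I also tried loading the downloaded files into the HBase configuration explicitly (the C:/hadoop-conf path below is just a placeholder for wherever the files are saved):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.Path
    import org.apache.hadoop.hbase.HBaseConfiguration

    // Start from the HBase defaults, then layer the cluster's site files on top.
    // The local path is a placeholder for wherever the *-site.xml files live.
    val hbaseConf: Configuration = HBaseConfiguration.create()
    hbaseConf.addResource(new Path("C:/hadoop-conf/core-site.xml"))
    hbaseConf.addResource(new Path("C:/hadoop-conf/hdfs-site.xml"))
    hbaseConf.addResource(new Path("C:/hadoop-conf/hbase-site.xml"))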

Alternatively, is there any way to set the hbase-site.xml properties on the sqlContext? I use the sqlContext to read HBase tables through the Hortonworks connector.
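For reference, the read goes through the connector roughly like this (the table name and catalog below are placeholders, abbreviated from my actual code):

    import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

    // Placeholder catalog mapping an HBase table to a DataFrame schema.
    val catalog =
      """{
        |"table":{"namespace":"default", "name":"mytable"},
        |"rowkey":"key",
        |"columns":{
        |"col0":{"cf":"rowkey", "col":"key", "type":"string"},
        |"col1":{"cf":"cf1", "col":"col1", "type":"string"}
        |}
        |}""".stripMargin

    val df = sqlContext.read
      .options(Map(HBaseTableCatalog.tableCatalog -> catalog))
      .format("org.apache.spark.sql.execution.datasources.hbase")
      .load()
    df.show()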

The error log is:

Exception in thread "main" java.io.IOException: java.lang.reflect.InvocationTargetException
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.init(HBaseResources.scala:93)
       at org.apache.spark.sql.execution.datasources.hbase.ReferencedResource$class.liftedTree1$1(HBaseResources.scala:57)
       at org.apache.spark.sql.execution.datasources.hbase.ReferencedResource$class.acquire(HBaseResources.scala:54)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.acquire(HBaseResources.scala:88)
       at org.apache.spark.sql.execution.datasources.hbase.ReferencedResource$class.releaseOnException(HBaseResources.scala:74)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.releaseOnException(HBaseResources.scala:88)
       at org.apache.spark.sql.execution.datasources.hbase.RegionResource.<init>(HBaseResources.scala:108)
       at org.apache.spark.sql.execution.datasources.hbase.HBaseTableScanRDD.getPartitions(HBaseTableScan.scala:60)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
       at scala.Option.getOrElse(Option.scala:120)
       at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
       at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
       at scala.Option.getOrElse(Option.scala:120)
       at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
       at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
       at scala.Option.getOrElse(Option.scala:120)
       at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
       at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:190)
       at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
       at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
       at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
       at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
       at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
       at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
       at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
       at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
       at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
       at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
       at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
       at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
       at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
       at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
       at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350)
       at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311)
       at org.apache.spark.sql.DataFrame.show(DataFrame.scala:319)
       at scb.HBaseBroadcast$.main(HBaseBroadcast.scala:106)
       at scb.HBaseBroadcast.main(HBaseBroadcast.scala)
Caused by: java.lang.reflect.InvocationTargetException
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
       at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
       ... 44 more
Caused by: java.lang.AbstractMethodError: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
       at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
       at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
       at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
       at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:147)
       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
       at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
       at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
       at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
       at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:241)
       at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
       at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
       at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
       at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:879)
       at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
       ... 49 more

1 Answer


You have a hadoop-hdfs version conflict. Check the version running on the server against the version on your development classpath.
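One generic JVM trick (not specific to HBase) to see which jar a conflicting class is actually loaded from at runtime:

    // Print the jar the hadoop-hdfs client classes were loaded from, then compare
    // its version against the hadoop-hdfs version installed on the cluster.
    println(classOf[org.apache.hadoop.hdfs.DistributedFileSystem]
      .getProtectionDomain.getCodeSource.getLocation)

    // Same check for the class named in the AbstractMethodError.
    println(Class.forName(
      "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")
      .getProtectionDomain.getCodeSource.getLocation)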

Answered 2016-11-18T14:08:44.673