
I am trying to read data from HBase using Apache Spark. I only want to scan one specific column. I am creating an RDD over my HBase data as follows:

SparkConf sparkConf = new SparkConf().setAppName("HBaseRead").setMaster("local[2]");
JavaSparkContext sc = new JavaSparkContext(sparkConf);
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "localhost:2181");

String tableName = "myTable";

conf.set(TableInputFormat.INPUT_TABLE, tableName);
conf.set(TableInputFormat.SCAN_COLUMN_FAMILY, "myCol");

JavaPairRDD<ImmutableBytesWritable, Result> hBaseRDD = sc.newAPIHadoopRDD(conf, TableInputFormat.class,
        ImmutableBytesWritable.class, Result.class);

This is where I want to convert the JavaPairRDD into a JavaRDD of strings:

JavaRDD<String> rdd = ...

How can I do this?


1 Answer


You can get a JavaRDD<String> from your pair RDD by using the map function, as shown below (replace the family and qualifier names with your own):

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import scala.Tuple2;

JavaRDD<String> javaRDD = hBaseRDD.map(new Function<Tuple2<ImmutableBytesWritable, Result>, String>() {
    @Override
    public String call(Tuple2<ImmutableBytesWritable, Result> tuple) throws Exception {
        Result result = tuple._2;
        // Decode the row key (not used below, but often handy)
        String rowKey = Bytes.toString(result.getRow());
        // Read the firstName column from the myColumnFamily family
        String fName = Bytes.toString(result.getValue(Bytes.toBytes("myColumnFamily"), Bytes.toBytes("firstName")));
        return fName;
    }
});
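
As a side note, if you are on Java 8 or later, the same transformation can be written more compactly with a lambda. This is a minimal sketch assuming the same hypothetical family/qualifier names (myColumnFamily, firstName) as above:

JavaRDD<String> javaRDD = hBaseRDD.map(tuple -> {
    Result result = tuple._2;
    // Extract the firstName column value from each Result
    return Bytes.toString(result.getValue(Bytes.toBytes("myColumnFamily"), Bytes.toBytes("firstName")));
});

Also, since you mentioned you only want to scan one specific column (rather than a whole family), you can restrict the scan itself with TableInputFormat.SCAN_COLUMNS, which takes space-delimited family:qualifier pairs, for example:

conf.set(TableInputFormat.SCAN_COLUMNS, "myColumnFamily:firstName");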
answered 2017-12-21T12:09:47.437