I compressed a file with python-snappy and put it in my HDFS store. I am now trying to read it as shown below, but I get the traceback that follows. I can't find an example of how to read the file in so that I can process it. I can read the text (uncompressed) version fine. Should I be using sc.sequenceFile? Thanks!
I first compressed the file and pushed it to HDFS:
python -m snappy -c gene_regions.vcf gene_regions.vcf.snappy
hdfs dfs -put gene_regions.vcf.snappy /
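For reference, I believe the same compression can be done from Python with python-snappy's stream API (a minimal sketch; as far as I understand, python -m snappy -c writes the same framed stream format):

import snappy

# Compress the VCF into python-snappy's framed stream format,
# which should match what `python -m snappy -c` produces
with open("gene_regions.vcf", "rb") as src, open("gene_regions.vcf.snappy", "wb") as dst:
    snappy.stream_compress(src, dst)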
I then added the following to spark-env.sh:
export SPARK_EXECUTOR_MEMORY=16G
export HADOOP_HOME=/usr/local/hadoop
export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:$HADOOP_HOME/lib/native
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native
export SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:$HADOOP_HOME/lib/native
export SPARK_CLASSPATH=$SPARK_CLASSPATH:$HADOOP_HOME/lib/lib/snappy-java-1.1.1.8-SNAPSHOT.jar
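As a sanity check that the Snappy codec class is at least visible to the JVM, I can run something like this from the PySpark shell (just a quick probe through py4j; it does not exercise the native library itself):

# Ask the JVM to load Hadoop's Snappy codec class; this raises a py4j error
# if snappy-java / the Hadoop codec is not on the driver's classpath
sc._jvm.java.lang.Class.forName("org.apache.hadoop.io.compress.SnappyCodec")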
I then launch my Spark master and slave, and finally my IPython notebook, where I execute the code below.
a_file = sc.textFile("hdfs://master:54310/gene_regions.vcf.snappy")
a_file.first()
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 a_file.first()

/home/user/Software/spark-1.3.0-bin-hadoop2.4/python/pyspark/rdd.pyc in first(self)
   1244         if rs:
   1245             return rs[0]
-> 1246         raise ValueError("RDD is empty")
   1247
   1248     def isEmpty(self):

ValueError: RDD is empty
Working code for the (uncompressed) text file:
a_file = sc.textFile("hdfs://master:54310/gene_regions.vcf")
a_file.first()
Output: u'##fileformat=VCFv4.1'
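In case it is relevant, the workaround I am considering (an untested sketch; decode here is just a helper I wrote, and I am not sure sc.binaryFiles is the right tool for a single large .snappy file) is to read the raw bytes and decompress them on the Python side with python-snappy:

import io
import snappy

def decode(data):
    # Decompress one whole python-snappy framed stream held in memory
    out = io.BytesIO()
    snappy.stream_decompress(io.BytesIO(data), out)
    return out.getvalue()

# binaryFiles yields (path, file contents) pairs, so each file is decompressed as a unit
raw = sc.binaryFiles("hdfs://master:54310/gene_regions.vcf.snappy")
lines = raw.flatMap(lambda kv: decode(kv[1]).decode("utf-8").splitlines())
lines.first()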