I wrote a Python program that loads files from Amazon Web Services (AWS) S3 through Apache Spark. Specifically, the code creates an RDD from all the files under the directory data in my bucket ruofan-bucket, using SparkContext().wholeTextFiles("s3n://ruofan-bucket/data"). The code is shown below:
import os, sys, inspect
### Current directory path.
curr_dir = os.path.split(inspect.getfile(inspect.currentframe()))[0]
### Set up the environment variables
spark_home_dir = os.path.realpath(os.path.abspath(os.path.join(curr_dir, "../spark-1.4.0")))
python_dir = os.path.realpath(os.path.abspath(os.path.join(spark_home_dir, "./python")))
os.environ["SPARK_HOME"] = spark_home_dir
os.environ["PYTHONPATH"] = python_dir
### Set up the pyspark directory path
pyspark_dir = os.path.realpath(os.path.abspath(os.path.join(spark_home_dir, "./python")))
sys.path.append(pyspark_dir)
### Import pyspark
from pyspark import SparkConf, SparkContext
def main():
    ### Initialize the SparkConf and SparkContext
    conf = SparkConf().setAppName("ruofan").setMaster("local")
    sc = SparkContext(conf=conf)
    ### Create an RDD of (filename, content) pairs for the files in directory "data"
    datafile = sc.wholeTextFiles("s3n://ruofan-bucket/data")  ### Read data directory from S3 storage.
    ### Collect files from the RDD
    datafile.collect()

if __name__ == "__main__":
    main()
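As a side note, since the read goes through the s3n filesystem, I believe the credentials can also be attached directly to the Hadoop configuration of the SparkContext instead of being read from the environment. A minimal sketch of that alternative (the property names are the standard Hadoop keys for s3n; the key values here are placeholders, and I have not verified this variant on my setup):

### Alternative (unverified): set the s3n credentials on the Hadoop
### configuration of the SparkContext; the values below are placeholders.
from pyspark import SparkConf, SparkContext

sc = SparkContext(conf=SparkConf().setAppName("ruofan").setMaster("local"))
### "_jsc" is the underlying JavaSparkContext exposed by PySpark.
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "<ACCESS_KEY_ID>")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "<SECRET_ACCESS_KEY>")
datafile = sc.wholeTextFiles("s3n://ruofan-bucket/data")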
Before running the code, I exported the environment variables AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID. But when I run it, the following error is shown:
IOError: [Errno 2] No such file or directory: 's3n://ruofan-bucket/data/test1.csv'
I am sure the directory and the files exist on AWS S3 (the boto sketch below is roughly how I would double-check that), but I have no idea what this error means. I would really appreciate it if anyone could help me solve the problem.
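For reference, a minimal sketch of listing the bucket contents from Python to confirm the files are there, assuming the boto library (2.x) is installed and the same AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables are exported:

### Sketch: list the objects under "data/" in ruofan-bucket with boto 2.x.
import boto

conn = boto.connect_s3()                   ### Picks up the AWS_* environment variables.
bucket = conn.get_bucket("ruofan-bucket")
for key in bucket.list(prefix="data/"):
    print(key.name)                        ### Expected to include data/test1.csv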