I am using the following code to load data from Amazon S3:
from ingest import Connectors
from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)  # sc is the SparkContext already provided by the notebook

S3loadoptions = {
    Connectors.AmazonS3.ACCESS_KEY: 'AKIAJYCJAFZYENNPACNA',
    Connectors.AmazonS3.SECRET_KEY: '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',
    Connectors.AmazonS3.SOURCE_BUCKET: 'ngpconnector',
    Connectors.AmazonS3.SOURCE_FILE_NAME: 'addresses3.csv',
    Connectors.AmazonS3.SOURCE_INFER_SCHEMA: '1',
    Connectors.AmazonS3.SOURCE_FILE_FORMAT: 'csv',
}

S3DF = sqlContext.read.format('com.ibm.spark.discover').options(**S3loadoptions).load()
S3DF.printSchema()
S3DF.show(5)
But when I run this snippet, I get the error below. I see a similar error message when loading from another data source, such as dashDB.
AttributeError                            Traceback (most recent call last)
<ipython-input-1-9da344857d7e> in <module>()
4
5 S3loadoptions = {
----> 6 Connectors.AmazonS3.ACCESS_KEY : 'AKIAJYCJAFZYENNPACNA',
7 Connectors.AmazonS3.SECRET_KEY : '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',
8 Connectors.AmazonS3.SOURCE_BUCKET : 'ngpconnector',
AttributeError: 'NoneType' object has no attribute 'AmazonS3'
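
The traceback shows that Connectors is None at the point where the options dict is built, meaning "from ingest import Connectors" bound None rather than raising an ImportError. A quick sanity check (a minimal sketch, assuming only the ingest package already used above) would be:

from ingest import Connectors

# If the import silently binds None, every attribute lookup such as
# Connectors.AmazonS3 raises the AttributeError shown above.
print(Connectors)        # prints None in the failing case
print(type(Connectors))  # <class 'NoneType'> in the failing case

if Connectors is None:
    raise RuntimeError('ingest.Connectors is None; the connector library '
                       'did not initialize in this kernel')

If this prints None, the problem would lie in the ingest package or the notebook environment itself (for example, a kernel that does not have the connector libraries on its path) rather than in the S3 options, which would also be consistent with dashDB and other data sources failing the same way.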