I am trying to connect to Redshift from a Spark 2.1.0 standalone cluster on AWS, using Hadoop 2.7.2 and Alluxio, and it gives me this error: Exception in thread "main" java.lang.NoSuchMethodError: com.amazonaws.services.s3.transfer.TransferManager.<init>(Lcom/amazonaws/services/s3/AmazonS3;Ljava/util/concurrent/ThreadPoolExecutor;)V at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:287)
As far as I understand, the problem comes from this note:
A note about Amazon SDK dependencies: this library declares a provided dependency on components of the AWS Java SDK. In most cases these libraries will be provided by your deployment environment. However, if you get ClassNotFoundExceptions for Amazon SDK classes, you will need to add explicit dependencies on com.amazonaws.aws-java-sdk-core and com.amazonaws.aws-java-sdk-s3 as part of your build/runtime configuration. See the comments in project/SparkRedshiftBuild.scala for more details.
As described in spark-redshift-databricks, I have tried every classpath/jar combination I could think of, but I keep getting the same error. The spark-submit command where I pass all the jars is:
/usr/local/spark/bin/spark-submit \
  --class com.XX.XX.app.Test \
  --driver-memory 2G \
  --total-executor-cores 40 \
  --verbose \
  --jars /home/ubuntu/aws-java-sdk-s3-1.11.79.jar,/home/ubuntu/aws-java-sdk-core-1.11.79.jar,/home/ubuntu/postgresql-9.4.1207.jar,/home/ubuntu/alluxio-1.3.0-spark-client-jar-with-dependencies.jar,/usr/local/alluxio/core/client/target/alluxio-core-client-1.3.0-jar-with-dependencies.jar \
  --master spark://XXX.eu-west-1.compute.internal:7077 \
  --executor-memory 4G \
  /home/ubuntu/QAe.jar qa XXX.eu-west-1.compute.amazonaws.com 100 \
  --num-executors 10 \
  --conf spark.executor.extraClassPath=/home/ubuntu/aws-java-sdk-s3-1.11.79.jar:/home/ubuntu/aws-java-sdk-core-1.11.79.jar \
  --driver-class-path /home/ubuntu/aws-java-sdk-s3-1.11.79.jar:/home/ubuntu/aws-java-sdk-core-1.11.79.jar:/home/ubuntu/postgresql-9.4.1207.jar \
  --driver-library-path /home/ubuntu/aws-java-sdk-s3-1.11.79.jar:/home/ubuntu/aws-java-sdk-core-1.11.79.jar \
  --driver-library-path com.amazonaws.aws-java-sdk-s3:com.amazonaws.aws-java-sdk-core.jar \
  --packages databricks:spark-redshift_2.11:3.0.0-preview1,com.amazonaws:aws-java-sdk-s3:1.11.79,com.amazonaws:aws-java-sdk-core:1.11.79
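For what it's worth, one way to check which jar each class from the stack trace is actually loaded from at runtime is something along these lines (a minimal diagnostic sketch, not part of my application; the class names are the ones from the stack trace above):

// Diagnostic sketch: print the jar each class from the stack trace is loaded from.
// Run inside the Spark application or a spark-shell started with the same classpath.
Seq(
  "org.apache.hadoop.fs.s3a.S3AFileSystem",
  "com.amazonaws.services.s3.transfer.TransferManager"
).foreach { name =>
  val location = Option(Class.forName(name).getProtectionDomain.getCodeSource)
    .map(_.getLocation.toString)
    .getOrElse("(no code source)")
  println(s"$name -> $location")
}

If the two classes resolve to jars whose AWS SDK version does not match what hadoop-aws 2.7.x was compiled against, that alone would explain the NoSuchMethodError on the TransferManager constructor.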
My build.sbt:
libraryDependencies += "com.fasterxml.jackson.module" % "jackson-module-scala_2.11" % "2.8.4"
libraryDependencies += "com.amazonaws" % "aws-java-sdk-core" % "1.11.79"
libraryDependencies += "com.amazonaws" % "aws-java-sdk-s3" % "1.11.79"
libraryDependencies += "org.apache.avro" % "avro-mapred" % "1.8.1"
libraryDependencies += "com.amazonaws" % "aws-java-sdk-redshift" % "1.11.78"
libraryDependencies += "com.databricks" % "spark-redshift_2.11" % "3.0.0-preview1"
libraryDependencies += "org.alluxio" % "alluxio-core-client" % "1.3.0"
libraryDependencies += "com.taxis99" %% "awsscala" % "0.7.3"
libraryDependencies += "org.apache.hadoop" % "hadoop-aws" % "2.7.3"
libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-sql" % sparkVersion
libraryDependencies += "org.apache.spark" %% "spark-mllib" % sparkVersion
The code simply reads from PostgreSQL and writes to Redshift:
import org.apache.spark.sql.SaveMode
import spark.implicits._ // encoder for .as[Schema.Message.Raw]

// url_read, prop and Schema.Message.Raw are defined elsewhere in the application
val df = spark.read.jdbc(url_read,"public.test", prop).as[Schema.Message.Raw]
.filter("message != ''")
.filter("from_id >= 0")
.limit(100)
df.write
.format("com.databricks.spark.redshift")
.option("url", "jdbc:redshift://test.XXX.redshift.amazonaws.com:5439/test?user=test&password=testXXXXX")
.option("dbtable", "table_test")
.option("tempdir", "s3a://redshift_logs/")
.option("forward_spark_s3_credentials", "true")
.option("tempformat", "CSV")
.option("jdbcdriver", "com.amazon.redshift.jdbc42.Driver")
.mode(SaveMode.Overwrite)
.save()
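Since the stack trace points at S3AFileSystem.initialize, the spark-redshift part can probably be taken out of the picture; the sketch below (bucket and key are placeholders, not from my setup) just forces the s3a filesystem to initialize and should hit the same error if the SDK jars clash:

// Sketch: trigger S3AFileSystem.initialize directly, without spark-redshift.
// "some-bucket" / "some-prefix" are placeholders.
import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new URI("s3a://some-bucket/"), spark.sparkContext.hadoopConfiguration)
fs.exists(new Path("s3a://some-bucket/some-prefix"))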
All of the jars listed are also present under /home/ubuntu/ on every cluster node.
Does anyone know how to add explicit dependencies on com.amazonaws.aws-java-sdk-core and com.amazonaws.aws-java-sdk-s3 as part of the build/runtime configuration in Spark? Or is the problem with the jars themselves: am I using the wrong version, 1.11.80 or .79, etc.? Do I need to exclude these libraries from build.sbt? Would changing Hadoop to 2.8 solve the problem?
Here are the useful links I based my tests on: Dependency Management with Spark, Add jars to a Spark job - spark-submit.