Actually, the Scala process runs outside the Spark context, so to run the s3-dist-cp command successfully all I had to do was stop the Spark context before launching the Scala process that contains the s3-dist-cp command. The complete working code is as follows:
logger.info("Moving ORC files from HDFS to S3 !!")
import scala.sys.process._
logger.info("stopping spark context..##")
val spark = IngestionContext.sparkSession
spark.stop()
logger.info("spark context stopped..##")
logger.info("sleeping for 10 secs")
Thread.sleep(10000) // this sleep is not required, it was just for debugging purposes; you can remove it in your final code
logger.info("woke up after sleeping for 10 secs")
try {
/**
 * the following is the Java version; of course, you need to take care of a few imports
 */
//val pb = new java.lang.ProcessBuilder("s3-dist-cp", "--src", INGESTED_ORC_DIR, "--dest", "s3:/" + paramMap(Storage_Output_Path).substring(4) + "_temp", "--srcPattern", ".*\\.orc")
//val pb = new java.lang.ProcessBuilder("hadoop", "jar", "/usr/share/aws/emr/s3-dist-cp/lib/s3-dist-cp.jar", "--src", INGESTED_ORC_DIR, "--dest", "s3:/" + paramMap(Storage_Output_Path).substring(4) + "_temp", "--srcPattern", ".*\\.orc")
//pb.directory(new File("/tmp"))
//pb.inheritIO()
//pb.redirectErrorStream(true)
//val process = pb.start()
//val is = process.getInputStream()
//val isr = new InputStreamReader(is)
//val br = new BufferedReader(isr)
//var line = br.readLine()
//logger.info("printing lines:")
//while (line != null) {
//  logger.info("line=[{}]", line)
//  line = br.readLine()
//}
//logger.info("process goes into waiting state")
//logger.info("Waited for: " + process.waitFor())
//logger.info("Program terminated!")
/**
 * the following is the Scala version
 */
val S3_DIST_CP = "s3-dist-cp"
val INGESTED_ORC_DIR = S3Util.getSaveOrcPath()
// listing out all the files
//val s3DistCpCmd = S3_DIST_CP + " --src " + INGESTED_ORC_DIR + " --dest " + paramMap(Storage_Output_Path).substring(4) + "_temp --srcPattern .*\\.orc"
//-Dmapred.child.java.opts=-Xmx1024m -Dmapreduce.job.reduces=2
val cmd = S3_DIST_CP + " --src " + INGESTED_ORC_DIR + " --dest " + "s3:/" + paramMap(Storage_Output_Path).substring(4) + "_temp --srcPattern .*\\.orc"
//val cmd = "hdfs dfs -cp -f " + INGESTED_ORC_DIR + "/* " + "s3:/" + paramMap(Storage_Output_Path).substring(4) + "_temp/"
//val cmd = "hadoop distcp " + INGESTED_ORC_DIR + "/ s3:/" + paramMap(Storage_Output_Path).substring(4) + "_temp_2/"
logger.info("full hdfs to s3 command : [{}]", cmd)
// command execution
val exitCode = (stringToProcess(cmd)).!
logger.info("s3_dist_cp command exit code: {} and s3 copy got " + (if (exitCode == 0) "SUCCEEDED" else "FAILED"), exitCode)
} catch {
case ex: Exception =>
logger.error(
  "there was an exception while copying ORC files to the S3 bucket: {}",
  ex.getMessage, ex)
throw new IngestionException("s3 dist cp command failure", null, Some(StatusEnum.S3_DIST_CP_CMD_FAILED))
}
Although the above code works exactly as expected, there are a couple of other observations, as follows:
Instead of using this:
val exitCode = (stringToProcess(cmd)).!
you can use this:
val exitCode = (stringToProcess(cmd)).!!
Note the difference between the single ! and the double !!: the single ! returns only the exit code, whereas the double !! returns the output of the process execution.
So with the single !, the above code works just fine; with the double !! it also works, but it generated far too many files and copies in the S3 bucket rather than the original number of files.
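A minimal sketch of the difference between the two operators (using a trivial ls command here purely for illustration, not the actual s3-dist-cp invocation):
import scala.sys.process._

// single ! runs the command and returns its exit code as an Int
val exitCode: Int = "ls /tmp".! // e.g. 0 on success

// double !! runs the command and returns its standard output as a String;
// it throws a RuntimeException if the command exits with a non-zero code
val output: String = "ls /tmp".!!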
As far as the spark-submit command is concerned, there is no need to worry about the --driver-class-path or even the --jars option, since I am not passing any dependencies.