I need to trigger a Spark job that aggregates data from a JSON file via an API call. I am using Spring Boot to create the resource. The steps of the solution are:
- The user issues a POST request with a JSON file as input.
- The JSON file is stored in the Google Cloud Storage bucket associated with the Dataproc cluster.
- The aggregation Spark job is triggered from inside the REST method with the specified jars and class, and the argument is the link to the JSON file.
I want to trigger the job using Dataproc's Java client rather than the console or the command line. How do you do that?
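For context, a minimal sketch of the Spring Boot side of this flow could look like the code below. The bucket name, controller class, endpoint path, and the submitAggregationJob helper are placeholders made up for illustration, and the upload uses the google-cloud-storage client, which is not part of the original question; the actual Dataproc submission is covered in the answer that follows.

import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class AggregationController {

  // Hypothetical bucket associated with the Dataproc cluster.
  private static final String BUCKET = "my-dataproc-staging-bucket";

  private final Storage storage = StorageOptions.getDefaultInstance().getService();

  @PostMapping("/aggregations")
  public String aggregate(@RequestParam("file") MultipartFile jsonFile) throws Exception {
    // 1. Store the uploaded JSON file in the GCS bucket.
    String objectName = "input/" + jsonFile.getOriginalFilename();
    storage.create(
        BlobInfo.newBuilder(BUCKET, objectName).setContentType("application/json").build(),
        jsonFile.getBytes());

    // 2. Trigger the Spark aggregation job, passing the gs:// link as an argument.
    //    submitAggregationJob is a placeholder that would wrap the Dataproc client code
    //    shown in the answer below.
    String gcsUri = "gs://" + BUCKET + "/" + objectName;
    return submitAggregationJob(gcsUri);
  }

  private String submitAggregationJob(String jsonFileUri) {
    throw new UnsupportedOperationException("wire in the Dataproc job submission here");
  }
}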
We're hoping to have a more thorough guide shortly in the official documentation, but to get started, visit the following API overview: https://developers.google.com/api-client-library/java/apis/dataproc/v1
It includes links to the Dataproc javadocs; if your server is making calls on behalf of your own project and not on behalf of your end-users' Google projects, then you probably want the keyfile-based service-account auth flow to create the Credential object you use to initialize the Dataproc client stub.
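For example, a minimal sketch of that keyfile-based flow could look like the following; the keyfile path is a placeholder and the cloud-platform scope is an assumption, but the resulting credential is what gets passed to the Dataproc.Builder shown further down.

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import java.io.FileInputStream;
import java.util.Collections;

// Load the service-account keyfile downloaded from the Cloud Console (placeholder path)
// and scope it for the Cloud Platform APIs, which cover Dataproc.
GoogleCredential credential = GoogleCredential
    .fromStream(new FileInputStream("/path/to/service-account-key.json"),
        new NetHttpTransport(), new JacksonFactory())
    .createScoped(Collections.singleton("https://www.googleapis.com/auth/cloud-platform"));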
As for the Dataproc-specific parts, this just means adding the following dependency to your Maven pom.xml if you're using Maven:
<project>
  <dependencies>
    <dependency>
      <groupId>com.google.apis</groupId>
      <artifactId>google-api-services-dataproc</artifactId>
      <version>v1-rev4-1.21.0</version>
    </dependency>
  </dependencies>
</project>
And then you'll have code like:
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.dataproc.Dataproc;
import com.google.api.services.dataproc.model.Job;
import com.google.api.services.dataproc.model.JobPlacement;
import com.google.api.services.dataproc.model.SparkJob;
import com.google.api.services.dataproc.model.SubmitJobRequest;
import com.google.common.collect.ImmutableList;

// Build the Dataproc client stub with the Credential created above.
Dataproc dataproc = new Dataproc.Builder(new NetHttpTransport(), new JacksonFactory(), credential)
    .setApplicationName("my-webapp/1.0")
    .build();

// Submit the Spark job to an existing cluster in the "global" region.
dataproc.projects().regions().jobs().submit(
    projectId, "global", new SubmitJobRequest()
        .setJob(new Job()
            .setPlacement(new JobPlacement()
                .setClusterName("my-spark-cluster"))
            .setSparkJob(new SparkJob()
                .setMainClass("FooSparkJobMain")
                .setJarFileUris(ImmutableList.of("gs://bucket/path/to/your/spark-job.jar"))
                .setArgs(ImmutableList.of(
                    "arg1", "arg2", "arg3")))))
    .execute();
Since different intermediary servers may do low-level retries, or your request may throw an IOException where you don't know whether the job submission succeeded, an additional step you may want to take is to generate your own jobId; then you know which jobId to poll on to figure out whether it got submitted, even if your request times out or throws some unknown exception:
import java.util.UUID;
...
Dataproc dataproc = new Dataproc.Builder(new NetHttpTransport(), new JacksonFactory(), credential)
    .setApplicationName("my-webapp/1.0")
    .build();

// Generate a client-side job ID so the job can be looked up even if the submit call fails ambiguously.
String curJobId = "json-agg-job-" + UUID.randomUUID().toString();
Job jobSnapshot = null;
try {
  jobSnapshot = dataproc.projects().regions().jobs().submit(
      projectId, "global", new SubmitJobRequest()
          .setJob(new Job()
              .setReference(new JobReference()
                  .setJobId(curJobId))
              .setPlacement(new JobPlacement()
                  .setClusterName("my-spark-cluster"))
              .setSparkJob(new SparkJob()
                  .setMainClass("FooSparkJobMain")
                  .setJarFileUris(ImmutableList.of("gs://bucket/path/to/your/spark-job.jar"))
                  .setArgs(ImmutableList.of(
                      "arg1", "arg2", "arg3")))))
      .execute();
} catch (IOException ioe) {
  try {
    jobSnapshot = dataproc.projects().regions().jobs().get(
        projectId, "global", curJobId).execute();
    logger.info("Despite exception, job was verified submitted", ioe);
  } catch (IOException ioe2) {
    // Handle differently; if it's a GoogleJsonResponseException you can inspect the error
    // code, and if it's a 404, then it means the job didn't get submitted; you can add retry
    // logic in that case.
  }
}
// We can poll on dataproc.projects().regions().jobs().get(...) until the job reports being
// completed or failed now.
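As a rough sketch of such a polling loop (the terminal-state strings, the fixed sleep interval, and the omitted checked-exception handling are my own assumptions, not something the API mandates):

// Poll the job until it reaches a terminal state; states and interval are illustrative.
Job job;
do {
  Thread.sleep(5000);  // simple fixed delay; consider backoff in real code
  job = dataproc.projects().regions().jobs().get(projectId, "global", curJobId).execute();
} while (job.getStatus() == null
    || !ImmutableList.of("DONE", "ERROR", "CANCELLED").contains(job.getStatus().getState()));

if ("DONE".equals(job.getStatus().getState())) {
  // Aggregation finished successfully.
} else {
  // Inspect job.getStatus().getDetails() for failure information.
}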