TL;DR
Following this sample, I built a simple application that uses Spring Batch (remote partitioning) and Spring Cloud Data Flow to deploy worker pods on Kubernetes.
Looking at the logs of the "partitionedJob" pod created on Kubernetes, I can see that the worker steps (pods) are being launched sequentially. Launching a single worker pod takes roughly 10-15 seconds (sometimes as long as 2 minutes, as shown below), so the worker pods start one after another at 10-15 second intervals.
Logs:
[info 2021/06/26 14:30:29.089 UTC <main> tid=0x1] Job: [SimpleJob: [name=job]] launched with the following parameters: [{maxWorkers=40, chunkSize=5000, run.id=13, batch.worker-app=docker://docker-myhost.artifactrepository.net/my-project/myjob:0.1, grideSize=40}]
[info 2021/06/26 14:30:29.155 UTC <main> tid=0x1] The job execution id 26 was run within the task execution 235
[info 2021/06/26 14:30:29.184 UTC <main> tid=0x1] Executing step: [masterStep]
2021-06-26 14:30:29 INFO AuditRecordPartitioner:51 - Creating partitions. [gridSize=40]
[info 2021/06/26 14:32:41.128 UTC <main> tid=0x1] Using Docker entry point style: exec
[info 2021/06/26 14:34:51.560 UTC <main> tid=0x1] Using Docker image: docker-myhost.artifactrepository.net/myproject/myjob:0.1
[info 2021/06/26 14:34:51.560 UTC <main> tid=0x1] Using Docker entry point style: exec
[info 2021/06/26 14:36:39.464 UTC <main> tid=0x1] Using Docker image: docker-myhost.artifactrepository.net/myproject/myjob:0.1
[info 2021/06/26 14:36:39.464 UTC <main> tid=0x1] Using Docker entry point style: exec
[info 2021/06/26 14:38:34.203 UTC <main> tid=0x1] Using Docker image: docker-myhost.artifactrepository.net/myproject/myjob:0.1
[info 2021/06/26 14:38:34.203 UTC <main> tid=0x1] Using Docker entry point style: exec
[info 2021/06/26 14:40:44.544 UTC <main> tid=0x1] Using Docker image: docker-myhost.artifactrepository.net/myproject/myjob:0.1
[info 2021/06/26 14:40:44.544 UTC <main> tid=0x1] Using Docker entry point style: exec
Creating 40 pods on Kubernetes takes about 7-8 minutes (sometimes as long as 20 minutes). Ideally, all partitioned steps (worker pods) would be launched asynchronously at once.
Question: How can we configure Spring Cloud Data Flow / Spring Batch to launch the worker pods (partitioned steps) asynchronously/in parallel instead of sequentially? And if SCDF really does create all 40 partitions in one go, why is the master job actually creating them one by one at such a slow pace (as seen in the logs)? I don't believe this is an infrastructure issue, because I am able to launch tasks quickly using the Task DSL.
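The one-at-a-time timestamps in the logs look consistent with each partition being handed to the task launcher from a single calling thread, so the total launch time grows linearly with gridSize. As a minimal illustration of the difference (this is not SCDF code; `launchWorker` is a hypothetical stand-in for a slow pod-creation call), overlapping the launches with a thread pool collapses the total time to roughly that of a single launch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelLaunchSketch {

    // Hypothetical stand-in for one deployer call that creates a worker pod.
    static String launchWorker(int partition) {
        try {
            Thread.sleep(100); // simulate slow pod creation
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "workerStep:partition" + partition;
    }

    // Submit every partition to a thread pool so the launches overlap
    // instead of running back to back on a single thread.
    static List<String> launchAll(int gridSize, int maxWorkers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(maxWorkers);
        try {
            List<Callable<String>> tasks = IntStream.range(0, gridSize)
                    .mapToObj(i -> (Callable<String>) () -> launchWorker(i))
                    .collect(Collectors.toList());
            List<String> results = new ArrayList<>();
            for (Future<String> f : pool.invokeAll(tasks)) {
                results.add(f.get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        List<String> launched = launchAll(40, 40);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Sequentially, 40 x 100 ms would take ~4 s; overlapped, this
        // finishes in roughly the time of a single launch.
        System.out.println("launched " + launched.size() + " workers in ~" + elapsedMs + " ms");
    }
}
```

Whether the real fix is a configuration knob or a custom PartitionHandler is exactly what I'm asking, but this is the launch pattern I would expect to see.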
Relevant code:
@EnableTask
@EnableBatchProcessing
@SpringBootApplication
public class BatchApplication {

    public static void main(String[] args) {
        SpringApplication.run(BatchApplication.class, args);
    }
}
/**
 * Main job controller.
 */
@Profile("master")
@Configuration
public class MasterConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(MasterConfiguration.class);

    @Autowired
    private ApplicationArguments applicationArguments;

    @Bean
    public Job job(JobBuilderFactory jobBuilderFactory) {
        LOGGER.info("Creating job...");
        SimpleJobBuilder jobBuilder = jobBuilderFactory.get("job").start(masterStep(null, null, null));
        jobBuilder.incrementer(new RunIdIncrementer());
        return jobBuilder.build();
    }

    @Bean
    public Step masterStep(StepBuilderFactory stepBuilderFactory, Partitioner partitioner,
            PartitionHandler partitionHandler) {
        LOGGER.info("Creating masterStep");
        return stepBuilderFactory.get("masterStep").partitioner("workerStep", partitioner)
                .partitionHandler(partitionHandler).build();
    }

    @Bean
    public DeployerPartitionHandler partitionHandler(@Value("${spring.profiles.active}") String activeProfile,
            @Value("${batch.worker-app}") String resourceLocation,
            @Value("${spring.application.name}") String applicationName, ApplicationContext context,
            TaskLauncher taskLauncher, JobExplorer jobExplorer, ResourceLoaderResolver resolver) {
        ResourceLoader resourceLoader = resolver.get(resourceLocation);
        Resource resource = resourceLoader.getResource(resourceLocation);
        DeployerPartitionHandler partitionHandler = new DeployerPartitionHandler(taskLauncher, jobExplorer, resource,
                "workerStep");

        List<String> commandLineArgs = new ArrayList<>();
        commandLineArgs.add("--spring.profiles.active=" + activeProfile.replace("master", "worker"));
        commandLineArgs.add("--spring.cloud.task.initialize.enable=false");
        commandLineArgs.add("--spring.batch.initializer.enabled=false");
        commandLineArgs.addAll(Arrays.stream(applicationArguments.getSourceArgs()).filter(
                x -> !x.startsWith("--spring.profiles.active=") && !x.startsWith("--spring.cloud.task.executionid="))
                .collect(Collectors.toList()));
        commandLineArgs.addAll(applicationArguments.getNonOptionArgs());
        partitionHandler.setCommandLineArgsProvider(new PassThroughCommandLineArgsProvider(commandLineArgs));
        partitionHandler.setEnvironmentVariablesProvider(new NoOpEnvironmentVariablesProvider());

        List<String> nonOptionArgs = applicationArguments.getNonOptionArgs();
        partitionHandler.setMaxWorkers(Integer.valueOf(getNonOptionArgValue(nonOptionArgs, 3)));
        partitionHandler.setGridSize(Integer.valueOf(getNonOptionArgValue(nonOptionArgs, 4)));
        partitionHandler.setApplicationName(applicationName);
        return partitionHandler;
    }

    @Bean("auditRecordPartitioner")
    public Partitioner auditRecordPartitioner() {
        return new AuditRecordPartitioner<>();
    }

    private String getNonOptionArgValue(List<String> nonOptionArgs, int index) {
        return nonOptionArgs.get(index).split("=")[1];
    }
}
@Profile("worker")
@Configuration
public class WorkerConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(WorkerConfiguration.class);

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Autowired
    private ApplicationArguments applicationArguments;

    @Bean
    public DeployerStepExecutionHandler stepExecutionHandler(ApplicationContext context, JobExplorer jobExplorer,
            JobRepository jobRepository) {
        LOGGER.info("stepExecutionHandler...");
        return new DeployerStepExecutionHandler(context, jobExplorer, jobRepository);
    }

    @Bean
    public Step workerStep(StepBuilderFactory stepBuilderFactory) {
        return stepBuilderFactory.get("workerStep").tasklet(workerTasklet(null)).build();
    }

    @Bean
    @StepScope
    public WorkerTasklet workerTasklet(@Value("#{stepExecutionContext['key']}") String key) {
        return new WorkerTasklet(key);
    }
}
Note that I pass gridSize and maxWorkers as input arguments to the master step (from the SCDF UI when launching the task).
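For reference, the equivalent launch from the SCDF shell rather than the UI might look like the following; the task name and the ordering of the non-option arguments are assumptions on my part (the order has to line up with the `getNonOptionArgValue` indices used in MasterConfiguration):

```shell
# Hypothetical task name; the non-option args carry the same values
# the UI form would pass (maxWorkers/gridSize are read positionally).
dataflow:> task launch myjob --arguments "batch.worker-app=docker://docker-myhost.artifactrepository.net/my-project/myjob:0.1 chunkSize=5000 maxWorkers=40 grideSize=40"
```

Launching this way reproduces the same sequential worker startup, which is why I suspect the behavior lies in the partition handling rather than in how the task is triggered.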