
I am trying to run multiple ParallelRunSteps in sequence in an AzureML pipeline. For this, I create each step with the following helper:

# pip_core / cont_steps are aliases for the AzureML SDK modules
# (e.g. azureml.pipeline.core and azureml.pipeline.steps)
def create_step(name, script, inp, inp_ds):
    out = pip_core.PipelineData(name=f"{name}_out", datastore=dstore, is_directory=True)
    out_ds = out.as_dataset()
    out_ds_named = out_ds.as_named_input(f"{name}_out")

    config = cont_steps.ParallelRunConfig(
        source_directory="src",
        entry_script=script,
        mini_batch_size="1",
        error_threshold=0,
        output_action="summary_only",
        compute_target=compute_target,
        environment=component_env,
        node_count=2,
        logging_level="DEBUG"
    )

    step = cont_steps.ParallelRunStep(
        name=name,
        parallel_run_config=config,
        inputs=[inp_ds],
        output=out,
        arguments=[],
        allow_reuse=False,
    )

    return step, out, out_ds_named

As an example, I create two such steps:

step1, out1, out1_ds_named = create_step("step1", "demo_s1.py", input_ds, named_input_ds)
step2, out2, out2_ds_named = create_step("step2", "demo_s2.py", out1, out1_ds_named)

Creating the experiment and submitting it to the existing workspace and Azure ML compute cluster works fine. The first step, step1, also runs its script demo_s1.py with input_ds, produces its output file, and completes successfully.

However, the second step, step2, never starts.


There is also a final exception:

The experiment failed. Finalizing run...
Cleaning up all outstanding Run operations, waiting 300.0 seconds
2 items cleaning up...
Cleanup took 0.16968441009521484 seconds
Starting the daemon thread to refresh tokens in background for process with pid = 394
Traceback (most recent call last):
  File "driver/amlbi_main.py", line 52, in <module>
    main()
  File "driver/amlbi_main.py", line 44, in main
    JobStarter().start_job()
  File "/mnt/batch/tasks/shared/LS_root/jobs/pipeline/azureml/08a1e1e1-7c3f-4c5a-84ad-ca99b8a6cb31/mounts/workspaceblobstore/azureml/08a1e1e1-7c3f-4c5a-84ad-ca99b8a6cb31/driver/job_starter.py", line 48, in start_job
    job.start()
  File "/mnt/batch/tasks/shared/LS_root/jobs/pipeline/azureml/08a1e1e1-7c3f-4c5a-84ad-ca99b8a6cb31/mounts/workspaceblobstore/azureml/08a1e1e1-7c3f-4c5a-84ad-ca99b8a6cb31/driver/job.py", line 70, in start
    master.start()
  File "/mnt/batch/tasks/shared/LS_root/jobs/pipeline/azureml/08a1e1e1-7c3f-4c5a-84ad-ca99b8a6cb31/mounts/workspaceblobstore/azureml/08a1e1e1-7c3f-4c5a-84ad-ca99b8a6cb31/driver/master.py", line 174, in start
    self._start()
  File "/mnt/batch/tasks/shared/LS_root/jobs/pipeline/azureml/08a1e1e1-7c3f-4c5a-84ad-ca99b8a6cb31/mounts/workspaceblobstore/azureml/08a1e1e1-7c3f-4c5a-84ad-ca99b8a6cb31/driver/master.py", line 149, in _start
    self.wait_for_input_init()
  File "/mnt/batch/tasks/shared/LS_root/jobs/pipeline/azureml/08a1e1e1-7c3f-4c5a-84ad-ca99b8a6cb31/mounts/workspaceblobstore/azureml/08a1e1e1-7c3f-4c5a-84ad-ca99b8a6cb31/driver/master.py", line 124, in wait_for_input_init
    raise exc
exception.FirstTaskCreationTimeout: Unable to create any task within 600 seconds.
Load the datasource and read the first row locally to see how long it will take.
Set the advanced argument '--first_task_creation_timeout' to a larger value in arguments in ParallelRunStep.
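The error text itself suggests raising the advanced `--first_task_creation_timeout` argument. A hedged sketch of how that could be passed through the helper above (the value 1200 is an arbitrary example; `cont_steps` and the other names are the ones from the question):

```python
# Sketch only: the same ParallelRunStep call as in create_step, but with the
# advanced timeout flag named in the error message added to `arguments`.
# 1200 seconds is an arbitrary example; the message implies a default of 600.
step = cont_steps.ParallelRunStep(
    name=name,
    parallel_run_config=config,
    inputs=[inp_ds],
    output=out,
    arguments=["--first_task_creation_timeout", "1200"],
    allow_reuse=False,
)
```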

My impression is that the second step is waiting for some data. However, the first step does create the provided output directory and a file in it:

import argparse
import os

def init():
    pass

def run(parallel_input):
    print(f"*** Running {os.path.basename(__file__)} with input {parallel_input}")

    parser = argparse.ArgumentParser(description="Data Preparation")
    parser.add_argument('--output', type=str, required=True)
    args, unknown_args = parser.parse_known_args()

    out_path = os.path.join(args.output, "1.data")
    os.makedirs(args.output, exist_ok=True)
    open(out_path, "a").close()

    return [out_path]
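As an aside, `parse_known_args` is what lets an entry script like the one above tolerate extra command-line flags injected at runtime, rather than erroring out on them. A minimal standalone demonstration (the flag name `--some_driver_flag` and the path are made up for illustration):

```python
import argparse

# Same parser shape as in the entry script above.
parser = argparse.ArgumentParser(description="Data Preparation")
parser.add_argument('--output', type=str, required=True)

# parse_known_args() returns the recognized arguments plus a list of
# leftover tokens instead of failing on unknown flags.
args, unknown = parser.parse_known_args(
    ['--output', '/tmp/out', '--some_driver_flag', '2']
)
print(args.output)   # /tmp/out
print(unknown)       # ['--some_driver_flag', '2']
```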

I don't know how to debug this any further. Does anyone have an idea?


1 Answer


You can check this notebook for parallel runs and make sure that you are using the same packages: https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/parallel-run/tabular-dataset-inference-iris.ipynb

answered 2021-08-16T14:57:41.857