
I'm trying to deploy a model generated by Azure ML AutoML from an ML Notebook (the script is shortened for brevity):

import logging
import os

import pandas as pd
from azureml.core import Model, Run
from azureml.train.automl import AutoMLConfig

automl_settings = {
    "experiment_timeout_minutes": 20,
    "primary_metric": 'AUC_weighted',
    "max_concurrent_iterations": 8,
    "max_cores_per_iteration": -1,
    "enable_dnn": False,
    "enable_early_stopping": True,
    "validation_size": 0.3,
    "verbosity": logging.INFO,
}

automl_config = AutoMLConfig(task = 'classification',
                             debug_log = 'automl_errors.log',
                             compute_target=compute_target,
                             blacklist_models=['LogisticRegression','MultinomialNaiveBayes','BernoulliNaiveBayes','LinearSVM','DecisionTree','RandomForest','ExtremeRandomTrees','LightGBM','KNN','SVM','StackEnsemble','VotingEnsemble'],
                             training_data=train_dataset,
                             label_column_name=target_column_name,
                             **automl_settings
                            )

automl_run = experiment.submit(automl_config, show_output=True)

best_run, fitted_model = automl_run.get_output()
best_run_metrics = best_run.get_metrics()

children = list(automl_run.get_children(recursive=True))
summary_df = pd.DataFrame(index=['run_id', 'run_algorithm',
                                    'primary_metric', 'Score'])
goal_minimize = False
for run in children:
    if('run_algorithm' in run.properties and 'score' in run.properties):
        summary_df[run.id] = [run.id, run.properties['run_algorithm'],
                                run.properties['primary_metric'],
                                float(run.properties['score'])]
        if('goal' in run.properties):
            goal_minimize = run.properties['goal'].split('_')[-1] == 'min'

# Summarize the child runs (best score per algorithm) once the loop has finished
summary_df = summary_df.T.sort_values(
    'Score',
    ascending=goal_minimize).drop_duplicates(['run_algorithm'])
summary_df = summary_df.set_index('run_algorithm')

best_dnn_run_id = summary_df['run_id'].iloc[0]
best_dnn_run = Run(experiment, best_dnn_run_id)

model_dir = 'Model' # Local folder where the model will be stored temporarily
if not os.path.isdir(model_dir):
    os.mkdir(model_dir)
    
best_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')

# Register the model
model_name = best_run.properties['model_name']

model_path=os.path.join("./outputs",'model.pkl')

description = 'My Model'
model = best_run.register_model(model_name=model_name,
                                model_path=model_path,
                                model_framework='AutoML',
                                description=description,
                                tags={'env': 'sandbox'})

# Deploy the Model

service_name = 'my-ml-service'
service = Model.deploy(ws, service_name, [model], overwrite=True)

service.wait_for_deployment(show_output=True)

Everything seems to run fine until I try to deploy the model:

---------------------------------------------------------------------------
UserErrorException                        Traceback (most recent call last)
<ipython-input-48-5c72d1613c28> in <module>
      3 service_name = 'my-service'
      4 
----> 5 service = Model.deploy(ws, service_name, [model], overwrite=True)
      6 
      7 

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/core/model.py in deploy(workspace, name, models, inference_config, deployment_config, deployment_target, overwrite)
   1577               logger=module_logger)
   1578 
-> 1579             return Model._deploy_no_code(workspace, name, models, deployment_config, deployment_target, overwrite)
   1580 
   1581         # Environment-based webservice.

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/core/model.py in _deploy_no_code(workspace, name, models, deployment_config, deployment_target, overwrite)
   1795         :rtype: azureml.core.Webservice
   1796         """
-> 1797         environment_image_request = build_and_validate_no_code_environment_image_request(models)
   1798 
   1799         return Model._deploy_with_environment_image_request(workspace, name, environment_image_request,

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/_model_management/_util.py in build_and_validate_no_code_environment_image_request(models)
   1180         raise UserErrorException('You must provide an InferenceConfig when deploying a model with model_framework '
   1181                                  'set to {}. Default environments are only provided for these frameworks: {}.'
-> 1182                                  .format(model.model_framework, Model._SUPPORTED_FRAMEWORKS_FOR_NO_CODE_DEPLOY))
   1183 
   1184         # Only specify the model IDs; MMS will provide the environment, driver program, etc.

UserErrorException: UserErrorException:
	Message: You must provide an InferenceConfig when deploying a model with model_framework set to AutoML. Default environments are only provided for these frameworks: ['Onnx', 'ScikitLearn', 'TensorFlow'].
	InnerException None
	ErrorResponse 
{
    "error": {
        "code": "UserError",
        "message": "You must provide an InferenceConfig when deploying a model with model_framework set to AutoML. Default environments are only provided for these frameworks: ['Onnx', 'ScikitLearn', 'TensorFlow']."
    }
}
When I deploy an AutoML-generated model from Azure Machine Learning studio, I'm not prompted for an entry script or a dependencies file (or an InferenceConfig). Is there a way to configure this with the Python SDK so that I can do a no-code deployment of the AutoML-generated model? Is there something wrong with my code? I hope you can help.


1 Answer


I don't think you can rely on no-code deployment in your scenario, because AutoML may find that the best solution comes from a framework that no-code deployment does not yet support.
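If you specifically want to stay on the no-code path, one workaround is to ask AutoML for an ONNX-compatible model, since 'Onnx' is in the list of supported frameworks from the error message. This is only a rough sketch, assuming your chosen algorithms support ONNX export and the onnx package is installed; the model name and framework version below are illustrative:

import onnx
from azureml.core.model import Model
from azureml.core.resource_configuration import ResourceConfiguration

# 1) Request ONNX-compatible models when configuring the run:
#    AutoMLConfig(..., enable_onnx_compatible_models=True)

# 2) Retrieve the best model as ONNX and save it locally
best_run, onnx_model = automl_run.get_output(return_onnx_model=True)
onnx.save(onnx_model, 'model.onnx')

# 3) Register it under a framework that no-code deployment supports
onnx_registered = Model.register(workspace=ws,
                                 model_name='my-automl-onnx-model',   # illustrative name
                                 model_path='model.onnx',
                                 model_framework=Model.Framework.ONNX,
                                 model_framework_version='1.4.1',     # match your onnx version
                                 resource_configuration=ResourceConfiguration(cpu=1, memory_in_gb=0.5))

# 4) No InferenceConfig is needed for the supported frameworks
service = Model.deploy(ws, 'my-onnx-service', [onnx_registered], overwrite=True)
service.wait_for_deployment(show_output=True)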

If it helps, you can build an InferenceConfig from the environment of your Run:

from azureml.core.model import InferenceConfig

environment = best_run.get_environment()
inference_config = InferenceConfig(entry_script='score.py', environment=environment)
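
From there, a minimal end-to-end deployment could look like the sketch below. The autogenerated scoring file name ('outputs/scoring_file_v_1_0_0.py') and the ACI sizing are assumptions, so check the outputs folder of your best run and adjust:

from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

# AutoML usually writes an autogenerated scoring script to the run's outputs;
# download it instead of writing score.py by hand (the path is an assumption)
best_run.download_file('outputs/scoring_file_v_1_0_0.py', 'score.py')

environment = best_run.get_environment()   # the environment the model was trained with
inference_config = InferenceConfig(entry_script='score.py', environment=environment)

deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(workspace=ws,
                       name='my-ml-service',
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=deployment_config,
                       overwrite=True)
service.wait_for_deployment(show_output=True)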
answered 2020-07-16T23:30:19.870