diff --git a/code/scoring/conda_dependencies.yml b/code/scoring/conda_dependencies.yml
index 41a05694..9c5505e2 100644
--- a/code/scoring/conda_dependencies.yml
+++ b/code/scoring/conda_dependencies.yml
@@ -28,7 +28,7 @@ dependencies:
   - azureml-model-management-sdk==1.0.1b6.post1
   - azureml-sdk==1.0.74
   - scipy==1.3.1
-  - scikit-learn==0.21.3
+  - scikit-learn==0.22
   - pandas==0.25.3
   - numpy==1.17.3
   - joblib==0.14.0
diff --git a/docs/getting_started.md b/docs/getting_started.md
index b24545f4..e6f544a1 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -182,6 +182,8 @@ specified).
 
 * The second stage of the pipeline, **Train model**, triggers the run of the ML Training Pipeline. The training pipeline will train, evaluate, and register a new model. The actual computation is performed in an [Azure Machine Learning Compute cluster](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute). In Azure DevOps, this stage runs an agentless job that waits for the completion of the Azure ML job, so it can wait for training completion for hours or even days without using agent resources.
 
+**Note:** If the model evaluation determines that the new model does not perform better than the previous one, then the new model will not be registered and the pipeline will be cancelled.
+
 * The third stage of the pipeline, **Deploy to ACI**, deploys the model to the QA environment in [Azure Container Instances](https://azure.microsoft.com/en-us/services/container-instances/). It then runs a *smoke test* to validate the deployment, i.e. sends a sample query to the scoring web service and verifies that it returns a response in the expected format.
 
 Wait until the pipeline finished and make sure there is a new model in the **ML Workspace**:
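
To illustrate the *smoke test* described for the **Deploy to ACI** stage, here is a minimal sketch of such a check, assuming a JSON scoring endpoint; the `SCORING_URI`, `SAMPLE_INPUT`, and expected response shape are placeholders and not the repository's actual test code.

```python
import json

import requests

# Hypothetical values: the real scoring URI comes from the deployed ACI service,
# and the input schema depends on the model's scoring script.
SCORING_URI = "http://<your-aci-service>.azurecontainer.io/score"
SAMPLE_INPUT = {"data": [[1.0, 2.0, 3.0, 4.0]]}

response = requests.post(
    SCORING_URI,
    data=json.dumps(SAMPLE_INPUT),
    headers={"Content-Type": "application/json"},
)

# A smoke test only verifies that the service responds and that the response
# parses as JSON; it does not judge prediction quality.
assert response.status_code == 200, f"Unexpected status: {response.status_code}"
result = response.json()
print("Smoke test passed, service returned:", result)
```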