Add Bootstrap code (#186)
- bootstrap script
- directory structure
sudivate authored Feb 11, 2020
1 parent e187158 commit 977593a
Showing 5 changed files with 209 additions and 34 deletions.
17 changes: 7 additions & 10 deletions README.md
@@ -11,36 +11,33 @@ description: "Code which demonstrates how to set up and operationalize an MLOps

# MLOps with Azure ML


[![Build Status](https://aidemos.visualstudio.com/MLOps/_apis/build/status/microsoft.MLOpsPython?branchName=master)](https://aidemos.visualstudio.com/MLOps/_build/latest?definitionId=151&branchName=master)


MLOps will help you understand how to build a Continuous Integration and Continuous Delivery (CI/CD) pipeline for an ML/AI project. We will use an Azure DevOps project for the build and release/deployment pipelines, along with Azure ML services for the model retraining pipeline, model management, and operationalization.

![ML lifecycle](/docs/images/ml-lifecycle.png)

This template contains code and pipeline definitions for a machine learning project demonstrating how to automate an end-to-end ML/AI workflow. The build pipelines include DevOps tasks for data sanity tests, unit tests, model training on different compute targets, model version management, model evaluation/model selection, model deployment as a real-time web service, staged deployment to QA/Prod, and integration testing.


## Prerequisites

- An active Azure subscription
- At least Contributor access to the Azure subscription

## Getting Started

To deploy this solution in your subscription, follow the manual instructions in the [getting started](docs/getting_started.md) doc.


## Architecture Diagram

This reference architecture shows how to implement continuous integration (CI), continuous delivery (CD), and a retraining pipeline for an AI application using Azure DevOps and Azure Machine Learning. The solution is built on the scikit-learn diabetes dataset but can easily be adapted for any AI scenario and for other popular build systems such as Jenkins and Travis.

![Architecture](/docs/images/main-flow.png)


## Architecture Flow

### Train Model

1. A data scientist writes or updates the code and pushes it to the git repo. This triggers the Azure DevOps build pipeline (continuous integration).
2. Once the Azure DevOps build pipeline is triggered, it performs code quality checks, data sanity tests and unit tests, builds an [Azure ML Pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines) and publishes it to an [Azure ML Service Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace).
3. The [Azure ML Pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-ml-pipelines) is triggered once the Azure DevOps build pipeline completes. All the tasks in this pipeline run on Azure ML Compute. The tasks in this pipeline are:
@@ -56,13 +53,13 @@ This reference architecture shows how to implement continuous integration (CI),
Once you have registered your ML model, you can use Azure ML + Azure DevOps to deploy it.

The [Azure DevOps multi-stage pipeline](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml) packages the new model along with the scoring file and its python dependencies into a [docker image](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture#image) and pushes it to [Azure Container Registry](https://docs.microsoft.com/en-us/azure/container-registry/container-registry-intro). This image is used to deploy the model as [web service](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-azure-machine-learning-architecture#web-service) across QA and Prod environments. The QA environment is running on top of [Azure Container Instances (ACI)](https://azure.microsoft.com/en-us/services/container-instances/) and the Prod environment is built with [Azure Kubernetes Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes).
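
In this repository that packaging and deployment is defined in the multi-stage Azure DevOps pipeline, but the underlying operation reduces to a handful of Azure ML SDK calls. The snippet below is a minimal sketch, assuming the v1 `azureml-core` SDK; the workspace, model, and service names are placeholders rather than values from this project:

```python
# Minimal illustrative sketch of deploying a registered model to ACI with the
# Azure ML SDK (v1). Workspace, model and service names are placeholders,
# not values taken from this repository.
from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.get(name="<aml-workspace>",
                   subscription_id="<subscription-id>",
                   resource_group="<resource-group>")

# Latest registered version of the model produced by the training pipeline.
model = Model(ws, name="<model-name>")

# Scoring environment and entry script (see diabetes_regression/scoring/score.py).
env = Environment.from_conda_specification(
    name="scoring-env", file_path="diabetes_regression/conda_dependencies.yml")
inference_config = InferenceConfig(entry_script="score.py",
                                   source_directory="diabetes_regression/scoring",
                                   environment=env)

# QA target: a small Azure Container Instances (ACI) deployment.
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "<qa-service-name>", [model],
                       inference_config, aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```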


### Repo Details

You can find the details of the code and scripts in the repository [here](/docs/code_description.md).

### References

- [Azure Machine Learning(Azure ML) Service Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/overview-what-is-azure-ml)
- [Azure ML CLI](https://docs.microsoft.com/en-us/azure/machine-learning/service/reference-azure-machine-learning-cli)
- [Azure ML Samples](https://docs.microsoft.com/en-us/azure/machine-learning/service/samples-notebooks)
@@ -73,7 +70,7 @@ You can find the details of the code and scripts in the repository [here](/docs/

This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit <https://cla.microsoft.com>.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
9 changes: 9 additions & 0 deletions bootstrap/README.md
@@ -0,0 +1,9 @@
# Bootstrap from MLOpsPython repository

To use this project structure and its scripts for your new ML project, you can quickly get started from the existing repository: bootstrap it to create a template that works for your ML project. Bootstrapping prepares a similar directory structure for your project, which includes renaming files and folders, deleting and cleaning up some directories, and fixing imports and absolute paths based on your project name. This enables reusing resources such as the pre-built pipelines and scripts for your new project.

To bootstrap from the existing MLOpsPython repository, clone this repository and run the bootstrap.py script as shown below:

>python bootstrap.py --d [dirpath] --n [projectname]
where `[dirpath]` is the absolute path to the root of the directory into which the MLOps repo is cloned and `[projectname]` is the name of your ML project.
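
For example, if the repository were cloned to `C:\code\MLOpsPython` and the new project were to be called `driver_safety` (both values here are purely illustrative), the call would be:

>python bootstrap.py --d C:\code\MLOpsPython --n driver_safety
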
135 changes: 135 additions & 0 deletions bootstrap/bootstrap.py
@@ -0,0 +1,135 @@
```python
import os
import sys
import argparse
# from git import Repo


class Helper:

    def __init__(self, project_directory, project_name):
        self._project_directory = project_directory
        self._project_name = project_name
        self._git_repo = "https://github.com/microsoft/MLOpsPython.git"

    @property
    def project_directory(self):
        return self._project_directory

    @property
    def project_name(self):
        return self._project_name

    @property
    def git_repo(self):
        return self._git_repo

    # def clonerepo(self):
    #     # Download MLOpsPython repo from git
    #     Repo.clone_from(
    #         self._git_repo, self._project_directory, branch="master", depth=1)  # NOQA: E501
    #     print(self._project_directory)

    def renamefiles(self):
        # Rename all files starting with diabetes_regression with project name
        strtoreplace = "diabetes_regression"
        dirs = [".pipelines", r"ml_service\pipelines"]
        for dir in dirs:
            dirpath = os.path.join(self._project_directory, dir)
            for filename in os.listdir(dirpath):
                if(filename.find(strtoreplace) != -1):
                    src = os.path.join(self._project_directory, dir, filename)
                    dst = os.path.join(self._project_directory,
                                       dir, filename.replace(strtoreplace, self._project_name, 1))  # NOQA: E501
                    os.rename(src, dst)

    def renamedir(self):
        # Rename any directory with diabetes_regression with project name
        dirs = ["diabetes_regression"]
        for dir in dirs:
            src = os.path.join(self._project_directory, dir)
            dst = os.path.join(self._project_directory, self._project_name)
            os.rename(src, dst)

    def deletedir(self):
        # Delete unwanted directories
        dirs = ["docs", r"diabetes_regression\training\R"]
        for dir in dirs:
            os.system(
                'rmdir /S /Q "{}"'.format(os.path.join(self._project_directory, dir)))  # NOQA: E501

    def replaceprojectname(self):
        # Replace instances of diabetes_regression within files
        dirs = [r".env.example",
                r".pipelines\azdo-base-pipeline.yml",
                r".pipelines\azdo-pr-build-train.yml",
                r".pipelines\diabetes_regression-ci-build-train.yml",
                r".pipelines\diabetes_regression-ci-image.yml",
                r".pipelines\diabetes_regression-template-get-model-version.yml",  # NOQA: E501
                r".pipelines\diabetes_regression-variables.yml",
                r"environment_setup\Dockerfile",
                r"environment_setup\install_requirements.sh",
                r"ml_service\pipelines\diabetes_regression_build_train_pipeline_with_r_on_dbricks.py",  # NOQA: E501
                r"ml_service\pipelines\diabetes_regression_build_train_pipeline_with_r.py",  # NOQA: E501
                r"ml_service\pipelines\diabetes_regression_build_train_pipeline.py",  # NOQA: E501
                r"ml_service\pipelines\diabetes_regression_verify_train_pipeline.py",  # NOQA: E501
                r"ml_service\util\create_scoring_image.py",
                r"diabetes_regression\azureml_environment.json",
                r"diabetes_regression\conda_dependencies.yml",
                r"diabetes_regression\evaluate\evaluate_model.py",
                r"diabetes_regression\training\test_train.py"]  # NOQA: E501

        for file in dirs:
            fin = open(os.path.join(self._project_directory, file),
                       "rt", encoding="utf8")
            data = fin.read()
            data = data.replace("diabetes_regression", self.project_name)
            fin.close()
            fin = open(os.path.join(self._project_directory, file),
                       "wt", encoding="utf8")
            fin.write(data)
            fin.close()

    def cleandir(self):
        # Clean up directories
        dirs = ["data", "experimentation"]
        for dir in dirs:
            for root, dirs, files in os.walk(os.path.join(self._project_directory, dir)):  # NOQA: E501
                for file in files:
                    os.remove(os.path.join(root, file))

    def validateargs(self):
        # Validate arguments
        if (os.path.isdir(self._project_directory) is False):
            raise Exception(
                "Not a valid directory. Please provide absolute directory path")  # NOQA: E501
        # if (len(os.listdir(self._project_directory)) > 0):
        #     raise Exception("Directory not empty. Please empty directory")
        if(len(self._project_name) < 3 or len(self._project_name) > 15):
            raise Exception("Project name should be 3 to 15 chars long")


def main(args):
    parser = argparse.ArgumentParser(description='New Template')
    parser.add_argument("--d", type=str,
                        help="Absolute path to new project directory")
    parser.add_argument(
        "--n", type=str, help="Name of the project [3-15 chars]")
    try:
        args = parser.parse_args()
        project_directory = args.d
        project_name = args.n
        helper = Helper(project_directory, project_name)
        helper.validateargs()
        # helper.clonerepo()
        helper.cleandir()
        helper.replaceprojectname()
        helper.deletedir()
        helper.renamefiles()
        helper.renamedir()
    except Exception as e:
        print(e)
    return 0


if '__main__' == __name__:
    sys.exit(main(sys.argv))
```
34 changes: 32 additions & 2 deletions docs/code_description.md
@@ -1,5 +1,34 @@
## Repo Details

### Directory Structure

High level directory structure for this repository:

```bash
├── .pipelines              <- Azure DevOps YAML pipelines for CI, PR, and model training and deployment.
├── bootstrap               <- Python script to initialize this repository with a custom project name.
├── charts                  <- Helm charts to deploy resources on Azure Kubernetes Service (AKS).
├── data                    <- Initial set of data to train and evaluate the model.
├── diabetes_regression     <- The top-level folder for the ML project.
│   ├── evaluate            <- Python script to evaluate the trained ML model.
│   ├── register            <- Python script to register the trained ML model with the Azure Machine Learning Service.
│   ├── scoring             <- Python score.py to deploy the trained ML model.
│   ├── training            <- Python script to train the ML model.
│   ├── R                   <- R script to train an R-based ML model.
│   ├── util                <- Python scripts for various utility operations specific to this ML project.
├── docs                    <- Extensive markdown documentation for the entire project.
├── environment_setup       <- The top-level folder for everything related to infrastructure.
│   ├── arm-templates       <- Azure Resource Manager (ARM) templates to build the infrastructure needed for this project.
├── experimentation         <- Jupyter notebooks with ML experimentation code.
├── ml_service              <- The top-level folder for all Azure Machine Learning resources.
│   ├── pipelines           <- Python scripts that build the Azure Machine Learning pipelines.
│   ├── util                <- Python scripts for various utility operations specific to Azure Machine Learning.
├── .env.example            <- Example .env file with the environment variables for a local development experience.
├── .gitignore              <- A .gitignore file specifying intentionally untracked files that Git should ignore.
├── LICENSE                 <- License document for this project.
├── README.md               <- The top-level README for developers using this project.
```

### Environment Setup

- `environment_setup/install_requirements.sh` : This script prepares a local conda environment, i.e. it installs the Azure ML SDK and the packages specified in the environment definitions.
@@ -8,7 +37,7 @@

- `environment_setup/Dockerfile` : Dockerfile of a build agent containing Python 3.6 and all required packages.

- `environment_setup/docker-image-pipeline.yml` : An AzDo pipeline for building and pushing the [microsoft/mlopspython](https://hub.docker.com/_/microsoft-mlops-python) image.

### Pipelines

@@ -37,10 +66,11 @@
- `diabetes_regression/evaluate/evaluate_model.py` : the evaluation step of the ML training pipeline, which registers a new trained model if evaluation shows the new model is more performant than the previous one.
- `diabetes_regression/evaluate/register_model.py` : (LEGACY) registers a new trained model if evaluation shows the new model is more performant than the previous one.
- `diabetes_regression/training/R/r_train.r` : trains a model with R based on a sample dataset (weight_data.csv).
- `diabetes_regression/training/R/train_with_r.py` : a Python wrapper (ML Pipeline Step) invoking the R training script on ML Compute.
- `diabetes_regression/training/R/train_with_r_on_databricks.py` : a Python wrapper (ML Pipeline Step) invoking the R training script on Databricks Compute.
- `diabetes_regression/training/R/weight_data.csv` : a sample dataset used by the R script (r_train.r) to train a model.

### Scoring

- `diabetes_regression/scoring/score.py` : a scoring script which is packaged into a Docker image along with the model when deploying to the QA/Prod environments.
- `diabetes_regression/scoring/inference_config.yml`, `deployment_config_aci.yml`, `deployment_config_aks.yml` : configuration files for the [AML Model Deploy](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.private-vss-services-azureml&ssr=false#overview) pipeline task for ACI and AKS deployment targets.
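
For orientation, Azure ML expects a scoring script to expose an `init()` function (run once when the service container starts) and a `run()` function (run per request). The sketch below shows that contract in its simplest form; it is not the repository's actual `score.py`, and the model name and input schema are placeholders:

```python
# Minimal sketch of the init()/run() contract Azure ML expects from a scoring
# script. The model name and input format are placeholders, not the ones used
# by this repository's score.py.
import json

import joblib
import numpy as np
from azureml.core.model import Model

model = None


def init():
    """Called once when the web service container starts; loads the model."""
    global model
    model_path = Model.get_model_path("<model-name>")  # resolved inside the container
    model = joblib.load(model_path)


def run(raw_data):
    """Called per request; scores a JSON payload of the form {"data": [[...]]}."""
    try:
        data = np.array(json.loads(raw_data)["data"])
        predictions = model.predict(data)
        return predictions.tolist()
    except Exception as e:
        return {"error": str(e)}
```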