
Update get_started.md (master) #1950

Merged: 14 commits, Apr 4, 2023
83 changes: 63 additions & 20 deletions .github/workflows/prebuild.yml
@@ -29,6 +29,7 @@ jobs:
export MMDEPLOY_VERSION=$(python3 -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$MMDEPLOY_VERSION" >> $GITHUB_ENV
echo "OUTPUT_DIR=$MMDEPLOY_VERSION-$GITHUB_RUN_ID" >> $GITHUB_ENV
- name: Build MMDeploy
run: |
source activate mmdeploy-3.6
@@ -51,17 +52,55 @@ jobs:
cd pack
python ../tools/package_tools/generate_build_config.py --backend 'ort;trt' \
--system linux --output config.yml --device cuda --build-sdk --build-sdk-monolithic \
--build-sdk-python --sdk-dynamic-net
--build-sdk-python --sdk-dynamic-net --onnxruntime-dir=$ONNXRUNTIME_GPU_DIR
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Move artifact
run: |
mkdir -p /__w/mmdeploy/prebuild/$OUTPUT_DIR
cp -r pack/* /__w/mmdeploy/prebuild/$OUTPUT_DIR
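The per-run artifact naming introduced in this hunk can be sketched outside of Actions as follows. Both values below are hypothetical stand-ins: in the workflow, `MMDEPLOY_VERSION` is parsed from `mmdeploy/version.py` and `GITHUB_RUN_ID` is injected by GitHub Actions.

```shell
# Hypothetical stand-ins for values the workflow derives at run time.
MMDEPLOY_VERSION="0.13.0"   # real value: read from mmdeploy/version.py
GITHUB_RUN_ID="4567"        # real value: set by GitHub Actions per run
# Combine them into a run-unique artifact directory name.
OUTPUT_DIR="$MMDEPLOY_VERSION-$GITHUB_RUN_ID"
echo "$OUTPUT_DIR"
```

Because the run id is unique per workflow run, artifacts from concurrent or re-run builds land in distinct directories instead of overwriting a shared `$MMDEPLOY_VERSION` directory.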

linux_build_cxx11abi:
runs-on: [self-hosted, linux-3090]
container:
image: openmmlab/mmdeploy:build-ubuntu18.04-cuda11.3
options: "--gpus=all --ipc=host"
volumes:
- /data2/actions-runner/prebuild:/__w/mmdeploy/prebuild
steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
submodules: recursive
- name: Get mmdeploy version
run: |
export MMDEPLOY_VERSION=$(python3 -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$MMDEPLOY_VERSION" >> $GITHUB_ENV
echo "OUTPUT_DIR=$MMDEPLOY_VERSION-$GITHUB_RUN_ID" >> $GITHUB_ENV
- name: Build sdk cpu backend
run: |
mkdir pack; cd pack
python ../tools/package_tools/generate_build_config.py --backend 'ort' \
--system linux --output config.yml --device cpu --build-sdk --build-sdk-monolithic \
--sdk-dynamic-net --cxx11abi
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Build sdk cuda backend
run: |
cd pack
python ../tools/package_tools/generate_build_config.py --backend 'ort;trt' \
--system linux --output config.yml --device cuda --build-sdk --build-sdk-monolithic \
--sdk-dynamic-net --cxx11abi --onnxruntime-dir=$ONNXRUNTIME_GPU_DIR --cudnn-dir /usr
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Move artifact
run: |
mkdir -p /__w/mmdeploy/prebuild/$MMDEPLOY_VERSION
rm -rf /__w/mmdeploy/prebuild/$MMDEPLOY_VERSION/*
mv pack/* /__w/mmdeploy/prebuild/$MMDEPLOY_VERSION
mkdir -p /__w/mmdeploy/prebuild/$OUTPUT_DIR
cp -r pack/* /__w/mmdeploy/prebuild/$OUTPUT_DIR
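The copy-into-a-fresh-run-directory pattern above (replacing the old move-and-clobber into a shared version directory) can be exercised in isolation. The paths below are scratch stand-ins for the runner's real `/__w/mmdeploy/prebuild` volume and `pack/` output.

```shell
# Scratch stand-ins for the runner's mounted prebuild volume and pack/.
PREBUILD_DEMO=$(mktemp -d)
OUTPUT_DIR="0.13.0-4567"                 # hypothetical version-runid name
mkdir -p pack-demo && echo sdk > pack-demo/artifact.txt

# Same shape as the step above: fresh per-run dir, copy (not move) pack
# in, so pack/ stays available for later steps in the same job.
mkdir -p "$PREBUILD_DEMO/$OUTPUT_DIR"
cp -r pack-demo/. "$PREBUILD_DEMO/$OUTPUT_DIR"
ls "$PREBUILD_DEMO/$OUTPUT_DIR"
```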

linux_test:
runs-on: [self-hosted, linux-3090]
needs: linux_build
needs:
- linux_build
- linux_build_cxx11abi
container:
image: openmmlab/mmdeploy:ubuntu20.04-cuda11.3
options: "--gpus=all --ipc=host"
@@ -76,13 +115,14 @@ jobs:
export MMDEPLOY_VERSION=$(python3 -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$MMDEPLOY_VERSION" >> $GITHUB_ENV
echo "OUTPUT_DIR=$MMDEPLOY_VERSION-$GITHUB_RUN_ID" >> $GITHUB_ENV
- name: Test python
run: |
cd /__w/mmdeploy/prebuild/$MMDEPLOY_VERSION
cd /__w/mmdeploy/prebuild/$OUTPUT_DIR
bash $GITHUB_WORKSPACE/tools/package_tools/test/test_sdk_python.sh
- name: Test c/cpp
run: |
cd /__w/mmdeploy/prebuild/$MMDEPLOY_VERSION
cd /__w/mmdeploy/prebuild/$OUTPUT_DIR
bash $GITHUB_WORKSPACE/tools/package_tools/test/test_sdk.sh

linux_upload:
@@ -100,20 +140,21 @@ jobs:
export MMDEPLOY_VERSION=$(python3 -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$MMDEPLOY_VERSION" >> $GITHUB_ENV
echo "OUTPUT_DIR=$MMDEPLOY_VERSION-$GITHUB_RUN_ID" >> $GITHUB_ENV
- name: Upload mmdeploy
run: |
cd $PREBUILD_DIR/$MMDEPLOY_VERSION/mmdeploy
cd $PREBUILD_DIR/$OUTPUT_DIR/mmdeploy
pip install twine
# twine upload * --repository testpypi -u __token__ -p ${{ secrets.test_pypi_password }}
twine upload * -u __token__ -p ${{ secrets.pypi_password }}
- name: Upload mmdeploy_runtime
run: |
cd $PREBUILD_DIR/$MMDEPLOY_VERSION/mmdeploy_runtime
cd $PREBUILD_DIR/$OUTPUT_DIR/mmdeploy_runtime
# twine upload * --repository testpypi -u __token__ -p ${{ secrets.test_pypi_password }}
twine upload * -u __token__ -p ${{ secrets.pypi_password }}
- name: Zip mmdeploy sdk
run: |
cd $PREBUILD_DIR/$MMDEPLOY_VERSION/sdk
cd $PREBUILD_DIR/$OUTPUT_DIR/sdk
for folder in *
do
tar czf $folder.tar.gz $folder
@@ -122,7 +163,7 @@ jobs:
uses: softprops/action-gh-release@v1
with:
files: |
$PREBUILD_DIR/$MMDEPLOY_VERSION/sdk/*.tar.gz
$PREBUILD_DIR/$OUTPUT_DIR/sdk/*.tar.gz
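The per-folder tarball loop in the `Zip mmdeploy sdk` step above can be tried locally; the folder names here are illustrative stand-ins for the real platform-specific sdk folders.

```shell
# Recreate the shape of the sdk/ directory with illustrative folders.
SDK_DEMO=$(mktemp -d)
cd "$SDK_DEMO"
mkdir -p linux-x64 linux-x64-cxx11abi
echo lib > linux-x64/lib.txt

# Same loop as the workflow step: one tarball per top-level folder. The
# glob is expanded once, before the loop body runs, so the freshly
# created .tar.gz files are not themselves re-archived.
for folder in *; do
  tar czf "$folder.tar.gz" "$folder"
done
ls *.tar.gz
```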


windows_build:
@@ -138,6 +179,7 @@ jobs:
$env:MMDEPLOY_VERSION=(python -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $env:MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$env:MMDEPLOY_VERSION" >> $env:GITHUB_ENV
echo "OUTPUT_DIR=$env:MMDEPLOY_VERSION-$env:GITHUB_RUN_ID" >> $env:GITHUB_ENV
- name: Build MMDeploy
run: |
. D:\DEPS\cienv\prebuild_gpu_env.ps1
@@ -166,9 +208,8 @@ jobs:
python ../tools/package_tools/mmdeploy_builder.py --config config.yml
- name: Move artifact
run: |
New-Item "D:/DEPS/ciartifact/$env:MMDEPLOY_VERSION" -ItemType Directory -Force
Remove-Item "D:/DEPS/ciartifact/$env:MMDEPLOY_VERSION/*" -Force -Recurse
Move-Item pack/* "D:/DEPS/ciartifact/$env:MMDEPLOY_VERSION"
New-Item "D:/DEPS/ciartifact/$env:OUTPUT_DIR" -ItemType Directory -Force
Move-Item pack/* "D:/DEPS/ciartifact/$env:OUTPUT_DIR"

windows_test:
runs-on: [self-hosted, win10-3080]
@@ -182,15 +223,16 @@ jobs:
$env:MMDEPLOY_VERSION=(python -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $env:MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$env:MMDEPLOY_VERSION" >> $env:GITHUB_ENV
echo "OUTPUT_DIR=$env:MMDEPLOY_VERSION-$env:GITHUB_RUN_ID" >> $env:GITHUB_ENV
- name: Test python
run: |
cd "D:/DEPS/ciartifact/$env:MMDEPLOY_VERSION"
cd "D:/DEPS/ciartifact/$env:OUTPUT_DIR"
. D:\DEPS\cienv\prebuild_cpu_env.ps1
conda activate ci-test
& "$env:GITHUB_WORKSPACE/tools/package_tools/test/test_sdk_python.ps1"
- name: Test c/cpp
run: |
cd "D:/DEPS/ciartifact/$env:MMDEPLOY_VERSION"
cd "D:/DEPS/ciartifact/$env:OUTPUT_DIR"
. D:\DEPS\cienv\prebuild_cpu_env.ps1
& "$env:GITHUB_WORKSPACE/tools/package_tools/test/test_sdk.ps1"

@@ -208,21 +250,22 @@ jobs:
$env:MMDEPLOY_VERSION=(python -c "import sys; sys.path.append('mmdeploy');from version import __version__;print(__version__)")
echo $env:MMDEPLOY_VERSION
echo "MMDEPLOY_VERSION=$env:MMDEPLOY_VERSION" >> $env:GITHUB_ENV
echo "OUTPUT_DIR=$env:MMDEPLOY_VERSION-$env:GITHUB_RUN_ID" >> $env:GITHUB_ENV
- name: Upload mmdeploy
run: |
cd "D:/DEPS/ciartifact/$env:MMDEPLOY_VERSION/mmdeploy"
cd "D:/DEPS/ciartifact/$env:OUTPUT_DIR/mmdeploy"
conda activate mmdeploy-3.8
# twine upload * --repository testpypi -u __token__ -p ${{ secrets.test_pypi_password }}
twine upload * -u __token__ -p ${{ secrets.pypi_password }}
- name: Upload mmdeploy_runtime
run: |
cd "D:/DEPS/ciartifact/$env:MMDEPLOY_VERSION/mmdeploy_runtime"
cd "D:/DEPS/ciartifact/$env:OUTPUT_DIR/mmdeploy_runtime"
conda activate mmdeploy-3.8
# twine upload * --repository testpypi -u __token__ -p ${{ secrets.test_pypi_password }}
twine upload * -u __token__ -p ${{ secrets.pypi_password }}
- name: Zip mmdeploy sdk
run: |
cd "D:/DEPS/ciartifact/$env:MMDEPLOY_VERSION/sdk"
cd "D:/DEPS/ciartifact/$env:OUTPUT_DIR/sdk"
$folders = $(ls).Name
foreach ($folder in $folders) {
Compress-Archive -Path $folder -DestinationPath "$folder.zip"
@@ -231,4 +274,4 @@ jobs:
uses: softprops/action-gh-release@v1
with:
files: |
D:/DEPS/ciartifact/$env:MMDEPLOY_VERSION/sdk/*.zip
D:/DEPS/ciartifact/$env:OUTPUT_DIR/sdk/*.zip
91 changes: 31 additions & 60 deletions docs/en/02-how-to-run/prebuilt_package_windows.md
@@ -21,26 +21,27 @@

______________________________________________________________________

This tutorial takes `mmdeploy-0.13.0-windows-amd64-onnxruntime1.8.1.zip` and `mmdeploy-0.13.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip` as examples to show how to use the prebuilt packages.
This tutorial takes `mmdeploy-0.13.0-windows-amd64.zip` and `mmdeploy-0.13.0-windows-amd64-cuda11.3.zip` as examples to show how to use the prebuilt packages. The former supports ONNX Runtime CPU inference, while the latter supports onnxruntime-gpu and TensorRT inference.

The directory structure of the prebuilt package is as follows, where the `dist` folder is about model converter, and the `sdk` folder is related to model inference.

```
.
|-- dist
`-- sdk
|-- bin
|-- example
|-- include
|-- lib
`-- python
├── build_sdk.ps1
├── example
├── include
├── install_opencv.ps1
├── lib
├── README.md
├── set_env.ps1
└── thirdparty
```

## Prerequisite

In order to use the prebuilt package, you need to install some third-party dependent libraries.

1. Follow the [get_started](../get_started.md) documentation to create a virtual python environment and install pytorch, torchvision and mmcv-full. To use the C interface of the SDK, you need to install [vs2019+](https://visualstudio.microsoft.com/), [OpenCV](https://github.com/opencv/opencv/releases).
1. Follow the [get_started](../get_started.md) documentation to create a virtual python environment and install pytorch, torchvision and mmcv. To use the C interface of the SDK, you need to install [vs2019+](https://visualstudio.microsoft.com/), [OpenCV](https://github.com/opencv/opencv/releases).

:point_right: It is recommended to use `pip` instead of `conda` to install pytorch and torchvision

@@ -80,9 +81,8 @@ In order to use `ONNX Runtime` backend, you should also do the following steps.
5. Install `mmdeploy` (Model Converter) and `mmdeploy_runtime` (SDK Python API).

```bash
# download mmdeploy-0.13.0-windows-amd64-onnxruntime1.8.1.zip
pip install .\mmdeploy-0.13.0-windows-amd64-onnxruntime1.8.1\dist\mmdeploy-0.13.0-py38-none-win_amd64.whl
pip install .\mmdeploy-0.13.0-windows-amd64-onnxruntime1.8.1\sdk\python\mmdeploy_runtime-0.13.0-cp38-none-win_amd64.whl
pip install mmdeploy==0.13.0
pip install mmdeploy-runtime==0.13.0
```

:point_right: If you have installed it before, please uninstall it first.
@@ -100,16 +100,17 @@ In order to use `ONNX Runtime` backend, you should also do the following steps.
![sys-path](https://user-images.githubusercontent.com/16019484/181463801-1d7814a8-b256-46e9-86f2-c08de0bc150b.png)
:exclamation: Restart powershell to make the environment variables setting take effect. You can check whether the settings are in effect by `echo $env:PATH`.

8. Download SDK C/cpp Library mmdeploy-0.13.0-windows-amd64.zip
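If you prefer to script step 7's check rather than eyeball `echo $env:PATH`, the idea generalizes to any shell. Below is a POSIX-shell sketch with a hypothetical SDK location; on Windows the same membership test can be done against `$env:PATH` with `;` separators.

```shell
# Hypothetical runtime-library location; substitute your real sdk path.
SDK_BIN="/opt/mmdeploy-sdk/bin"
PATH="$SDK_BIN:$PATH"                  # what "add to PATH" amounts to

# Membership test: wrapping both sides in ':' prevents partial matches
# such as /opt/mmdeploy-sdk/bin2 passing the check.
case ":$PATH:" in
  *":$SDK_BIN:"*) echo "on PATH" ;;
  *)              echo "not on PATH" ;;
esac
```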

### TensorRT

In order to use `TensorRT` backend, you should also do the following steps.

5. Install `mmdeploy` (Model Converter) and `mmdeploy_runtime` (SDK Python API).

```bash
# download mmdeploy-0.13.0-windows-amd64-cuda11.1-tensorrt8.2.3.0.zip
pip install .\mmdeploy-0.13.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\dist\mmdeploy-0.13.0-py38-none-win_amd64.whl
pip install .\mmdeploy-0.13.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\python\mmdeploy_runtime-0.13.0-cp38-none-win_amd64.whl
pip install mmdeploy==0.13.0
pip install mmdeploy-runtime-gpu==0.13.0
```

:point_right: If you have installed it before, please uninstall it first.
@@ -128,6 +129,8 @@ In order to use `TensorRT` backend, you should also do the following steps.

7. Install pycuda by `pip install pycuda`

8. Download SDK C/cpp Library mmdeploy-0.13.0-windows-amd64-cuda11.3.zip

## Model Convert

### ONNX Runtime Example
@@ -138,7 +141,7 @@ After preparation work, the structure of the current working directory should be

```
..
|-- mmdeploy-0.13.0-windows-amd64-onnxruntime1.8.1
|-- mmdeploy-0.13.0-windows-amd64
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -186,7 +189,7 @@ After installation of mmdeploy-tensorrt prebuilt package, the structure of the c

```
..
|-- mmdeploy-0.13.0-windows-amd64-cuda11.1-tensorrt8.2.3.0
|-- mmdeploy-0.13.0-windows-amd64-cuda11.3
|-- mmclassification
|-- mmdeploy
`-- resnet18_8xb32_in1k_20210831-fbbb1da6.pth
@@ -299,7 +302,7 @@ python .\mmdeploy\demo\python\image_classification.py cpu .\work_dir\onnx\resnet

#### TensorRT

```
```bash
python .\mmdeploy\demo\python\image_classification.py cuda .\work_dir\trt\resnet\ .\mmclassification\demo\demo.JPEG
```

@@ -309,71 +312,39 @@ The following describes how to use the SDK's C API for inference

#### ONNXRuntime

1. Build examples

Under `mmdeploy-0.13.0-windows-amd64-onnxruntime1.8.1\sdk\example` directory

```
// Path should be modified according to the actual location
mkdir build
cd build
cmake ..\cpp -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
-DMMDeploy_DIR=C:\workspace\mmdeploy-0.13.0-windows-amd64-onnxruntime1.8.1\sdk\lib\cmake\MMDeploy `
-DONNXRUNTIME_DIR=C:\Deps\onnxruntime\onnxruntime-win-gpu-x64-1.8.1

cmake --build . --config Release
```
1. Add environment variables

2. Add environment variables or copy the runtime libraries to the same level directory of exe
Refer to the README.md in the sdk folder

:point_right: The purpose is to make the exe find the relevant dll
2. Build examples

If choose to add environment variables, add the runtime libraries path of `mmdeploy` (`mmdeploy-0.13.0-windows-amd64-onnxruntime1.8.1\sdk\bin`) to the `PATH`.

If choose to copy the dynamic libraries, copy the dll in the bin directory to the same level directory of the just compiled exe (build/Release).
Refer to the README.md in the sdk folder

3. Inference:

It is recommended to use `CMD` here.

Under `mmdeploy-0.13.0-windows-amd64-onnxruntime1.8.1\\sdk\\example\\build\\Release` directory:
Under `mmdeploy-0.13.0-windows-amd64\\example\\cpp\\build\\Release` directory:

```
.\image_classification.exe cpu C:\workspace\work_dir\onnx\resnet\ C:\workspace\mmclassification\demo\demo.JPEG
```

#### TensorRT

1. Build examples

Under `mmdeploy-0.13.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example` directory

```
// Path should be modified according to the actual location
mkdir build
cd build
cmake ..\cpp -A x64 -T v142 `
-DOpenCV_DIR=C:\Deps\opencv\build\x64\vc15\lib `
-DMMDeploy_DIR=C:\workspace\mmdeploy-0.13.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\lib\cmake\MMDeploy `
-DTENSORRT_DIR=C:\Deps\tensorrt\TensorRT-8.2.3.0 `
-DCUDNN_DIR=C:\Deps\cudnn\8.2.1
cmake --build . --config Release
```

2. Add environment variables or copy the runtime libraries to the same level directory of exe
1. Add environment variables

:point_right: The purpose is to make the exe find the relevant dll
Refer to the README.md in the sdk folder

If choose to add environment variables, add the runtime libraries path of `mmdeploy` (`mmdeploy-0.13.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\sdk\bin`) to the `PATH`.
2. Build examples

If choose to copy the dynamic libraries, copy the dll in the bin directory to the same level directory of the just compiled exe (build/Release).
Refer to the README.md in the sdk folder

3. Inference

It is recommended to use `CMD` here.

Under `mmdeploy-0.13.0-windows-amd64-cuda11.1-tensorrt8.2.3.0\\sdk\\example\\build\\Release` directory
Under `mmdeploy-0.13.0-windows-amd64-cuda11.3\\example\\cpp\\build\\Release` directory

```
.\image_classification.exe cuda C:\workspace\work_dir\trt\resnet C:\workspace\mmclassification\demo\demo.JPEG
```