diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
index 0b05c42a64..8dd84baa1a 100644
--- a/.github/workflows/lint.yml
+++ b/.github/workflows/lint.yml
@@ -16,7 +16,9 @@ jobs:
         python -m pip install pre-commit
         pre-commit install
     - name: Linting
-      run: pre-commit run --all-files
+      run: |
+        pre-commit run --all-files
+        git diff
     - name: Format c/cuda codes with clang-format
       uses: DoozyX/clang-format-lint-action@v0.11
       with:
diff --git a/docs/en/01-how-to-build/linux-x86_64.md b/docs/en/01-how-to-build/linux-x86_64.md
index 5aeb0ad385..028fd9fce0 100644
--- a/docs/en/01-how-to-build/linux-x86_64.md
+++ b/docs/en/01-how-to-build/linux-x86_64.md
@@ -395,3 +395,27 @@ You can also activate other engines after the model.
 
   make -j$(nproc) && make install
   ```
+
+- cuda + TensorRT + onnxruntime + openvino + ncnn
+
+  If the [ncnn auto-install script](../../../tools/scripts/build_ubuntu_x64_ncnn.py) is used, protobuf will be installed in mmdeploy-dep/pbinstall, in the same directory as mmdeploy.
+
+  ```Bash
+  export PROTO_DIR=/path/to/mmdeploy-dep/pbinstall
+  cmake .. \
+    -DCMAKE_CXX_COMPILER=g++-7 \
+    -DMMDEPLOY_BUILD_SDK=ON \
+    -DMMDEPLOY_BUILD_EXAMPLES=ON \
+    -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
+    -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
+    -DMMDEPLOY_TARGET_BACKENDS="trt;ort;ncnn;openvino" \
+    -Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl \
+    -DTENSORRT_DIR=${TENSORRT_DIR} \
+    -DCUDNN_DIR=${CUDNN_DIR} \
+    -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
+    -DInferenceEngine_DIR=${OPENVINO_DIR}/runtime/cmake \
+    -Dncnn_DIR=${NCNN_DIR}/build/install/lib/cmake/ncnn \
+    -DProtobuf_LIBRARIES=${PROTO_DIR}/lib/libprotobuf.so \
+    -DProtobuf_PROTOC_EXECUTABLE=${PROTO_DIR}/bin/protoc \
+    -DProtobuf_INCLUDE_DIR=${PROTO_DIR}/include
+  ```
diff --git a/docs/zh_cn/01-how-to-build/linux-x86_64.md b/docs/zh_cn/01-how-to-build/linux-x86_64.md
index edc89dbfca..21ef40830c 100644
--- a/docs/zh_cn/01-how-to-build/linux-x86_64.md
+++ b/docs/zh_cn/01-how-to-build/linux-x86_64.md
@@ -335,7 +335,7 @@ mim install -e .
 
 #### Build SDK and Demos
 
-The following shows two examples of building the SDK, using ONNXRuntime and TensorRT as the inference engines respectively. You can refer to them to activate other inference engines.
+The following shows several examples of building the SDK. You can refer to them to activate other inference engines.
 
 - cpu + ONNXRuntime
 
@@ -390,3 +390,27 @@ mim install -e .
 
   make -j$(nproc) && make install
   ```
+
+- cuda + TensorRT + onnxruntime + openvino + ncnn
+
+  If the [ncnn auto-install script](../../../tools/scripts/build_ubuntu_x64_ncnn.py) is used, protobuf will be installed in mmdeploy-dep/pbinstall, in the same directory as mmdeploy.
+
+  ```Bash
+  export PROTO_DIR=/path/to/mmdeploy-dep/pbinstall
+  cmake .. \
+    -DCMAKE_CXX_COMPILER=g++-7 \
+    -DMMDEPLOY_BUILD_SDK=ON \
+    -DMMDEPLOY_BUILD_EXAMPLES=ON \
+    -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
+    -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
+    -DMMDEPLOY_TARGET_BACKENDS="trt;ort;ncnn;openvino" \
+    -Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl \
+    -DTENSORRT_DIR=${TENSORRT_DIR} \
+    -DCUDNN_DIR=${CUDNN_DIR} \
+    -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
+    -DInferenceEngine_DIR=${OPENVINO_DIR}/runtime/cmake \
+    -Dncnn_DIR=${NCNN_DIR}/build/install/lib/cmake/ncnn \
+    -DProtobuf_LIBRARIES=${PROTO_DIR}/lib/libprotobuf.so \
+    -DProtobuf_PROTOC_EXECUTABLE=${PROTO_DIR}/bin/protoc \
+    -DProtobuf_INCLUDE_DIR=${PROTO_DIR}/include
+  ```
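Note that the three `Protobuf_*` flags in the docs above must all hang off the same install prefix (`lib/libprotobuf.so`, `bin/protoc`, `include/`), which is easy to get wrong when editing them by hand. As a minimal sketch, a hypothetical `proto_flags` helper (not part of mmdeploy) could derive all three from one prefix so they cannot drift apart:

```shell
# Hypothetical helper: print the protobuf-related cmake flags for a given
# install prefix, assuming the pbinstall layout (lib/, bin/, include/)
# produced by the ncnn auto-install script.
proto_flags() {
  prefix="$1"
  printf '%s\n' \
    "-DProtobuf_LIBRARIES=${prefix}/lib/libprotobuf.so" \
    "-DProtobuf_PROTOC_EXECUTABLE=${prefix}/bin/protoc" \
    "-DProtobuf_INCLUDE_DIR=${prefix}/include"
}

# Example usage: inspect the flags before splicing them into a cmake call.
proto_flags /path/to/mmdeploy-dep/pbinstall
```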