- cmake

  Make sure cmake version >= 3.14.0. The below script shows how to install cmake 3.20.0. You can find more versions here.

  ```bash
  wget https://github.com/Kitware/CMake/releases/download/v3.20.0/cmake-3.20.0-linux-x86_64.tar.gz
  tar -xzvf cmake-3.20.0-linux-x86_64.tar.gz
  sudo ln -sf $(pwd)/cmake-3.20.0-linux-x86_64/bin/* /usr/bin/
  ```
- GCC 7+

  MMDeploy requires compilers that support C++17.

  ```bash
  # Add the repository if ubuntu < 18.04
  sudo add-apt-repository ppa:ubuntu-toolchain-r/test
  sudo apt-get update
  sudo apt-get install gcc-7
  sudo apt-get install g++-7
  ```
| NAME | INSTALLATION |
| ---- | ------------ |
| conda | Please install conda according to the official guide. Create a conda virtual environment and activate it. |
| PyTorch (>=1.8.0) | Install PyTorch>=1.8.0 by following the official instructions. Make sure the CUDA version required by PyTorch matches that of your host. |
| mmcv | Install mmcv as in the sketch below the table. Refer to the guide for details. |
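The exact commands were lost from the table above; the following is a hedged sketch of all three installs. The python, torch and torchvision version pins and the cu111 CUDA tag are assumptions to adjust for your host.

```bash
# Create and activate a conda environment (python version is an assumption)
conda create -n mmdeploy python=3.8 -y
conda activate mmdeploy

# Install a CUDA build of PyTorch; pick the tag matching your CUDA toolkit
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html

# Install a prebuilt mmcv wheel matching the torch/CUDA versions above
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.8/index.html
```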
You can skip this chapter if you are only interested in the model converter.
| NAME | INSTALLATION |
| ---- | ------------ |
| OpenCV (>=3.0) | On Ubuntu >=18.04, it can be installed from the system package manager, as in the sketch below the table. |
| pplcv | A high-performance image processing library from openPPL. It is optional and only needed when the CUDA platform is used. |
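A hedged sketch of both installs. The apt package name is standard on Ubuntu >=18.04; the ppl.cv repository URL and `./build.sh cuda` entry point follow openPPL's published layout, and the resulting cuda-build/install directory is what the SDK recipes at the end of this page reference.

```bash
# OpenCV on Ubuntu >= 18.04
sudo apt-get install libopencv-dev

# pplcv: only needed for the cuda platform
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
export PPLCV_DIR=$(pwd)
./build.sh cuda
```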
Both MMDeploy's model converter and SDK share the same inference engines.
Select the inference engines you are interested in and install them by following the commands below.
- ONNXRuntime (package: onnxruntime>=1.8.1)

  1. Install the python package.
  2. Download the prebuilt ONNX Runtime library and set the environment variables, as in the sketch below.
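  A sketch of both steps, assuming onnxruntime 1.8.1 and the Linux x64 prebuilt archive from the GitHub releases page; adjust the version to your setup.

  ```bash
  pip install onnxruntime==1.8.1

  # Download and extract the prebuilt library, then expose it to cmake and the dynamic loader
  wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz
  tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
  cd onnxruntime-linux-x64-1.8.1
  export ONNXRUNTIME_DIR=$(pwd)
  export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH
  ```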
- TensorRT (package: TensorRT)

  1. Log in to NVIDIA and download the TensorRT tar file that matches the CPU architecture and CUDA version you are using from here. Follow the guide to install TensorRT.
  2. As an example, to install TensorRT 8.2 GA Update 2 for Linux x86_64 and CUDA 11.x, first click here to download CUDA 11.x TensorRT 8.2.3.0, then install it and its dependencies as in the sketch below.
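  A sketch for the TensorRT 8.2.3.0 example; the archive name and the cp37 wheel are assumptions tied to this particular download and python version, so match them to your files.

  ```bash
  cd /the/path/of/tensorrt/tar/gz/file
  tar -zxvf TensorRT-8.2.3.0.Linux.x86_64-gnu.cuda-11.4.cudnn8.2.tar.gz
  # Install the python bindings and pycuda; pick the wheel matching your interpreter
  pip install TensorRT-8.2.3.0/python/tensorrt-8.2.3.0-cp37-none-linux_x86_64.whl
  pip install pycuda
  export TENSORRT_DIR=$(pwd)/TensorRT-8.2.3.0
  export LD_LIBRARY_PATH=${TENSORRT_DIR}/lib:$LD_LIBRARY_PATH
  ```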
- cuDNN

  1. Download the cuDNN build that matches the CPU architecture, CUDA version and TensorRT version you are using from the cuDNN Archive. The TensorRT installation example above requires cudnn8.2, so download CUDA 11.x cuDNN 8.2.
  2. Extract the compressed file and set the environment variables, as in the sketch below.
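  A sketch for step 2; the archive name corresponds to a CUDA 11.x cuDNN 8.2 download but is an assumption, so match it to the file you actually fetched.

  ```bash
  cd /the/path/of/cudnn/tgz/file
  tar -zxvf cudnn-11.3-linux-x64-v8.2.1.32.tgz
  export CUDNN_DIR=$(pwd)/cuda
  export LD_LIBRARY_PATH=$CUDNN_DIR/lib64:$LD_LIBRARY_PATH
  ```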
- PPL.NN (package: ppl.nn)

  1. Please follow the guide to build ppl.nn and install pyppl.
  2. Export pplnn's root path to an environment variable, as in the sketch below.
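  A minimal sketch of step 2, assuming ppl.nn was cloned and built in place; the in-tree pplnn-build directory is what the SDK recipe at the end of this page expects.

  ```bash
  cd /the/root/path/of/ppl.nn
  export PPLNN_DIR=$(pwd)
  ```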
- OpenVINO (package: openvino)

  1. Install the OpenVINO package, as in the sketch below.
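  A one-line sketch; installing the developer tools from PyPI is one common route, and the openvino-dev package name is an assumption about your preferred distribution channel.

  ```bash
  pip install openvino-dev
  ```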
- ncnn (package: ncnn)

  1. Download and build ncnn according to its wiki. Make sure to enable -DNCNN_PYTHON=ON in your build command.
  2. Export ncnn's root path to an environment variable, as in the sketch below.
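  A sketch of step 2; the trailing pip install of the python binding follows ncnn's standard source layout and is an assumption beyond the original text.

  ```bash
  cd /the/root/path/of/ncnn
  export NCNN_DIR=$(pwd)

  # Install the python binding produced by the -DNCNN_PYTHON=ON build
  cd ${NCNN_DIR}/python
  pip install -e .
  ```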
- TorchScript (package: libtorch)

  1. Download libtorch from here. Please note that only the Pre-cxx11 ABI and version 1.8.1+ on the Linux platform are supported for now. Previous versions of libtorch can be found in the issue comment.
  2. Taking Libtorch 1.8.1+cu111 as an example, you can install it as in the sketch below.
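  A sketch for the 1.8.1+cu111 example, using the pre-cxx11 ABI archive from PyTorch's download server (%2B is the URL encoding of +):

  ```bash
  wget https://download.pytorch.org/libtorch/cu111/libtorch-shared-with-deps-1.8.1%2Bcu111.zip
  unzip libtorch-shared-with-deps-1.8.1+cu111.zip
  cd libtorch
  # Torch_DIR must point at the directory containing TorchConfig.cmake
  export Torch_DIR=$(pwd)/share/cmake/Torch
  export LD_LIBRARY_PATH=$(pwd)/lib:$LD_LIBRARY_PATH
  ```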
- Ascend (package: CANN)

  1. Install CANN following the official guide.
  2. Set up the environment, as in the sketch below.
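  A sketch for step 2; the path assumes CANN was installed to its default prefix, so adjust it if you chose another location.

  ```bash
  export ASCEND_TOOLKIT_HOME=/usr/local/Ascend/ascend-toolkit/latest
  ```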
- TVM (package: TVM)

  1. Install TVM following the official guide.
  2. Set up the environment, as in the sketch below.
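  A sketch for step 2; /path/to/tvm is a placeholder for wherever you built TVM, and putting the python subdirectory on PYTHONPATH is TVM's documented way of exposing its python package.

  ```bash
  export TVM_HOME=/path/to/tvm
  export PYTHONPATH=$TVM_HOME/python:$PYTHONPATH
  ```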
Note:

If you want to make the above environment variables permanent, you can add them to ~/.bashrc. Take ONNXRuntime as an example:
```bash
echo '# set env for onnxruntime' >> ~/.bashrc
echo "export ONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH" >> ~/.bashrc
source ~/.bashrc
```
```bash
cd /the/root/path/of/MMDeploy
export MMDEPLOY_DIR=$(pwd)
```
If you choose ONNXRuntime, TensorRT, ncnn, or libtorch as an inference engine, you have to build the corresponding custom ops.
- ONNXRuntime Custom Ops

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
  make -j$(nproc) && make install
  ```
- TensorRT Custom Ops

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_TARGET_BACKENDS=trt -DTENSORRT_DIR=${TENSORRT_DIR} -DCUDNN_DIR=${CUDNN_DIR} ..
  make -j$(nproc) && make install
  ```
- ncnn Custom Ops

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_TARGET_BACKENDS=ncnn -Dncnn_DIR=${NCNN_DIR}/build/install/lib/cmake/ncnn ..
  make -j$(nproc) && make install
  ```
- TorchScript Custom Ops

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake -DCMAKE_CXX_COMPILER=g++-7 -DMMDEPLOY_TARGET_BACKENDS=torchscript -DTorch_DIR=${Torch_DIR} ..
  make -j$(nproc) && make install
  ```
Please check the cmake build options.

```bash
cd ${MMDEPLOY_DIR}
mim install -e .
```
Note:

- Some dependencies are optional. Simply running `pip install -e .` will only install the minimum runtime requirements. To use optional dependencies, install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling `pip` (e.g. `pip install -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, `optional`.
- It is recommended to install the patch for CUDA 10; otherwise, GEMM-related errors may occur when a model runs.
MMDeploy provides the two recipes shown below for building the SDK with ONNXRuntime and TensorRT as inference engines respectively. You can also activate other engines by adapting the cmake options accordingly.
- cpu + ONNXRuntime

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake .. \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DMMDEPLOY_BUILD_SDK=ON \
      -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
      -DMMDEPLOY_BUILD_EXAMPLES=ON \
      -DMMDEPLOY_TARGET_DEVICES=cpu \
      -DMMDEPLOY_TARGET_BACKENDS=ort \
      -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR}
  make -j$(nproc) && make install
  ```
- cuda + TensorRT

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake .. \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DMMDEPLOY_BUILD_SDK=ON \
      -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
      -DMMDEPLOY_BUILD_EXAMPLES=ON \
      -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
      -DMMDEPLOY_TARGET_BACKENDS=trt \
      -Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl \
      -DTENSORRT_DIR=${TENSORRT_DIR} \
      -DCUDNN_DIR=${CUDNN_DIR}
  make -j$(nproc) && make install
  ```
- pplnn

  ```bash
  cd ${MMDEPLOY_DIR}
  mkdir -p build && cd build
  cmake .. \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DMMDEPLOY_BUILD_SDK=ON \
      -DMMDEPLOY_BUILD_EXAMPLES=ON \
      -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
      -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
      -DMMDEPLOY_TARGET_BACKENDS=pplnn \
      -Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl \
      -Dpplnn_DIR=${PPLNN_DIR}/pplnn-build/install/lib/cmake/ppl
  make -j$(nproc) && make install
  ```