# Build for Windows


## Build From Source

All the commands listed in the following sections have been verified on Windows 10.

### Install Toolchains

1. Download and install Visual Studio 2019.
2. Add the path of cmake to the environment variable PATH, e.g., "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin".
3. Install the CUDA Toolkit if an NVIDIA GPU is available. You can refer to the official guide.
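
To confirm the toolchains are reachable from your shell before going further, you can run a quick check like the one below (the `nvcc` check only applies if you installed the CUDA Toolkit):

```powershell
# Verify that cmake is on PATH
cmake --version
# Verify the CUDA compiler, if the CUDA Toolkit was installed
nvcc --version
```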

### Install Dependencies

#### Install Dependencies for Model Converter

**conda**

Please install conda according to the official guide.

After installation, open the Anaconda PowerShell Prompt from the Start Menu as administrator, because:

1. All the commands listed in the following text are verified in the Anaconda PowerShell Prompt.
2. As an administrator, you can install the third-party libraries to the system path so as to simplify the MMDeploy build commands.

Note: if you are familiar with how CMake works, you can also use the Anaconda PowerShell Prompt as an ordinary user.

**PyTorch (>=1.8.0)**

Install PyTorch>=1.8.0 by following the official instructions. Make sure the CUDA version that PyTorch requires matches the one on your host.

```powershell
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
```
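
Before moving on, it can save time to confirm that the installed PyTorch build was compiled against the CUDA version you expect and actually sees your GPU (a quick sanity check, not part of the official instructions):

```powershell
# Print the PyTorch version, the CUDA version it was built with,
# and whether a CUDA device is visible
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```
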
**mmcv**

Install mmcv as follows. Refer to the guide for details.

```powershell
$env:cu_version="cu111"
$env:torch_version="torch1.8.0"
pip install -U openmim
mim install "mmcv>=2.0.0rc1"
```
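
You can verify the mmcv installation with a one-liner like this (a simple import check, assuming the install above succeeded):

```powershell
# Confirm mmcv imports cleanly and print its version
python -c "import mmcv; print(mmcv.__version__)"
```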

#### Install Dependencies for SDK

You can skip this section if you are only interested in the model converter.

**OpenCV (>=3.0)**

1. Find and download OpenCV 3+ for Windows from here.
2. You can download the prebuilt package and install it to the target directory, or build OpenCV from source.
3. Find where OpenCVConfig.cmake is located in the installation directory, and prepend its path to the environment variable PATH like this:

```powershell
$env:path = "\the\path\where\OpenCVConfig.cmake\locates;" + "$env:path"
```
**pplcv**

A high-performance image processing library from openPPL. It is optional and only needed when the CUDA platform is required.
```powershell
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
git checkout tags/v0.7.0 -b v0.7.0
$env:PPLCV_DIR = "$pwd"
mkdir pplcv-build
cd pplcv-build
cmake .. -G "Visual Studio 16 2019" -T v142 -A x64 -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=install -DHPCC_USE_CUDA=ON -DPPLCV_USE_MSVC_STATIC_RUNTIME=OFF
cmake --build . --config Release -- /m
cmake --install . --config Release
cd ../..
```
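
If the build and install steps succeed, the install tree should contain the CMake package files that the SDK build consumes later (see `-Dpplcv_DIR` in the cuda + TensorRT recipe below). A quick way to confirm:

```powershell
# The SDK build expects pplcv's CMake package files under this directory
Get-ChildItem "$env:PPLCV_DIR\pplcv-build\install\lib\cmake\ppl"
```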

#### Install Inference Engines for MMDeploy

MMDeploy's model converter and SDK share the same inference engines. Select the inference engines you are interested in and install them by following the commands below.

Currently, MMDeploy has only verified ONNXRuntime and TensorRT on the Windows platform. The remaining engines will be supported in the future.

**ONNXRuntime (onnxruntime>=1.8.1)**

1. Install the Python package:

   ```powershell
   pip install onnxruntime==1.8.1
   ```

2. Download the Windows prebuilt binary package from here. Extract it and export environment variables as below:

```powershell
Invoke-WebRequest -Uri https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-win-x64-1.8.1.zip -OutFile onnxruntime-win-x64-1.8.1.zip
Expand-Archive onnxruntime-win-x64-1.8.1.zip .
$env:ONNXRUNTIME_DIR = "$pwd\onnxruntime-win-x64-1.8.1"
$env:path = "$env:ONNXRUNTIME_DIR\lib;" + $env:path
```
**TensorRT**

1. Log in to NVIDIA and download the TensorRT zip file that matches the CPU architecture and CUDA version you are using from here. Follow the guide to install TensorRT.
2. Here is an example of installing TensorRT 8.2 GA Update 2 for Windows x86_64 and CUDA 11.x that you can refer to. First, click here to download CUDA 11.x TensorRT 8.2.3.0, then install it and the other dependencies as below:

```powershell
cd \the\path\of\tensorrt\zip\file
Expand-Archive TensorRT-8.2.3.0.Windows10.x86_64.cuda-11.4.cudnn8.2.zip .
$env:TENSORRT_DIR = "$pwd\TensorRT-8.2.3.0"
$env:path = "$env:TENSORRT_DIR\lib;" + $env:path
pip install $env:TENSORRT_DIR\python\tensorrt-8.2.3.0-cp37-none-win_amd64.whl
pip install pycuda
```
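
To confirm the TensorRT wheel matches your Python setup, a simple sanity check:

```powershell
# Confirm the tensorrt Python bindings import and print their version
python -c "import tensorrt; print(tensorrt.__version__)"
```
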
**cuDNN**

1. Download cuDNN that matches the CPU architecture, CUDA version and TensorRT version you are using from the cuDNN Archive. In the above TensorRT installation example, cudnn8.2 is required, so you can download CUDA 11.x cuDNN 8.2.
2. Extract the zip file and set the environment variables:

```powershell
cd \the\path\of\cudnn\zip\file
Expand-Archive cudnn-11.3-windows-x64-v8.2.1.32.zip .
$env:CUDNN_DIR="$pwd\cuda"
$env:path = "$env:CUDNN_DIR\bin;" + $env:path
```
**PPL.NN (ppl.nn)**: TODO

**OpenVINO (openvino)**: TODO

**ncnn (ncnn)**: TODO

### Build MMDeploy

```powershell
cd \the\root\path\of\MMDeploy
$env:MMDEPLOY_DIR="$pwd"
```

#### Build Model Converter

If you selected ONNXRuntime, TensorRT, or ncnn as an inference engine, you have to build the corresponding custom ops.

- **ONNXRuntime Custom Ops**

  ```powershell
  mkdir build -ErrorAction SilentlyContinue
  cd build
  cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 -DMMDEPLOY_TARGET_BACKENDS="ort" -DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"
  cmake --build . --config Release -- /m
  cmake --install . --config Release
  ```
- **TensorRT Custom Ops**

  ```powershell
  mkdir build -ErrorAction SilentlyContinue
  cd build
  cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 -DMMDEPLOY_TARGET_BACKENDS="trt" -DTENSORRT_DIR="$env:TENSORRT_DIR" -DCUDNN_DIR="$env:CUDNN_DIR"
  cmake --build . --config Release -- /m
  cmake --install . --config Release
  ```
- **ncnn Custom Ops**

  TODO

Please check the cmake build options.
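
After `cmake --install` completes, you can confirm that the custom-ops libraries were produced by listing the DLLs in the build tree (the exact file names and install location vary with the MMDeploy version and the selected backend, so treat this as a rough check):

```powershell
# List all DLLs produced by the custom-ops build
Get-ChildItem -Recurse -Filter *.dll "$env:MMDEPLOY_DIR\build"
```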

#### Install Model Converter

```powershell
cd $env:MMDEPLOY_DIR
pip install -e .
```

**Note**

- Some dependencies are optional. Simply running `pip install -e .` will only install the minimum runtime requirements. To use optional dependencies, install them manually with `pip install -r requirements/optional.txt` or specify desired extras when calling pip (e.g. `pip install -e .[optional]`). Valid keys for the extras field are: `all`, `tests`, `build`, `optional`.
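
After installation, you can sanity-check the environment. Recent MMDeploy versions ship a `tools/check_env.py` script for this; if your version lacks it, a plain import also works (both shown below as a sketch):

```powershell
cd $env:MMDEPLOY_DIR
# Report the detected environment (Python, PyTorch, available backends, ...)
python tools/check_env.py
# Or simply confirm the package imports and print its version
python -c "import mmdeploy; print(mmdeploy.__version__)"
```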

#### Build SDK and Demos

MMDeploy provides two recipes, shown below, for building the SDK with ONNXRuntime and TensorRT as the inference engines respectively. You can also activate other engines by adjusting the cmake options accordingly.

- **cpu + ONNXRuntime**

  ```powershell
  cd $env:MMDEPLOY_DIR
  mkdir build -ErrorAction SilentlyContinue
  cd build
  cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
      -DMMDEPLOY_BUILD_SDK=ON `
      -DMMDEPLOY_BUILD_EXAMPLES=ON `
      -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON `
      -DMMDEPLOY_TARGET_DEVICES="cpu" `
      -DMMDEPLOY_TARGET_BACKENDS="ort" `
      -DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"

  cmake --build . --config Release -- /m
  cmake --install . --config Release
  ```
- **cuda + TensorRT**

  ```powershell
  cd $env:MMDEPLOY_DIR
  mkdir build -ErrorAction SilentlyContinue
  cd build
  cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
      -DMMDEPLOY_BUILD_SDK=ON `
      -DMMDEPLOY_BUILD_EXAMPLES=ON `
      -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON `
      -DMMDEPLOY_TARGET_DEVICES="cuda" `
      -DMMDEPLOY_TARGET_BACKENDS="trt" `
      -Dpplcv_DIR="$env:PPLCV_DIR/pplcv-build/install/lib/cmake/ppl" `
      -DTENSORRT_DIR="$env:TENSORRT_DIR" `
      -DCUDNN_DIR="$env:CUDNN_DIR"

  cmake --build . --config Release -- /m
  cmake --install . --config Release
  ```

**Note**

1. Release and Debug libraries cannot be mixed. If MMDeploy is built in Release mode, all its dependent third-party libraries have to be built in Release mode too, and vice versa.
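
Before running the built demos, make sure the SDK libraries and the backend DLLs set up earlier (ONNXRuntime, or TensorRT and cuDNN) are discoverable at runtime. A minimal sketch, assuming the default install layout places binaries under `build\install\bin` (the exact layout may differ across MMDeploy versions):

```powershell
# Expose the SDK runtime DLLs and demo executables to the current shell;
# adjust the path if your build installs binaries elsewhere
$env:path = "$env:MMDEPLOY_DIR\build\install\bin;" + $env:path
Get-ChildItem "$env:MMDEPLOY_DIR\build\install\bin"
```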