Note: OpenVINO backend is beta quality. As a result you may encounter performance and functional issues that will be resolved in future releases.
This is the Triton backend for OpenVINO. You can learn more about Triton backends in the backend repo. Ask questions or report problems on the main Triton issues page.
The backend is designed to run models in OpenVINO Intermediate Representation (IR) format; refer to the OpenVINO documentation for instructions on converting a model to IR. The backend is implemented using the OpenVINO C++ API. Auto-completion of the model configuration is not supported by the backend, so a complete config.pbtxt must be provided with the model.
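Because the configuration is not auto-completed, every model must ship with a full configuration. A minimal config.pbtxt sketch is shown below; the model name, tensor names, data types, and dimensions are placeholders and must be replaced with values matching your actual IR model.
name: "my_openvino_model"
backend: "openvino"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]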
The OpenVINO backend currently supports inference only on CPU devices using the OpenVINO CPU plugin. Note that the CPU plugin does not support iGPU.
CMake 3.17 or higher is required. First, install the required dependencies.
$ apt-get install patchelf rapidjson-dev python3-dev
Follow the steps below to build the backend shared library.
$ mkdir build
$ cd build
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install -DTRITON_BUILD_OPENVINO_VERSION=2021.2.200 -DTRITON_BUILD_CONTAINER_VERSION=20.12 ..
$ make install
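Assuming the default install layout produced by the commands above, the backend shared library is placed under install/backends/openvino. To make it available to Triton, copy that directory into the server's backend directory (shown here for the default /opt/tritonserver location; adjust the path for your installation):
$ cp -r install/backends/openvino /opt/tritonserver/backends/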
The following required Triton repositories will be pulled and used in the build. By default the "main" branch/tag will be used for each repo, but the listed CMake argument can be used to override it (see the example below the list).
- triton-inference-server/backend: -DTRITON_BACKEND_REPO_TAG=[tag]
- triton-inference-server/core: -DTRITON_CORE_REPO_TAG=[tag]
- triton-inference-server/common: -DTRITON_COMMON_REPO_TAG=[tag]
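For example, to pin all three repositories to a release branch instead of main, the tags can be added to the cmake invocation above (the r21.05 tag here is only illustrative):
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install \
        -DTRITON_BUILD_OPENVINO_VERSION=2021.2.200 \
        -DTRITON_BUILD_CONTAINER_VERSION=20.12 \
        -DTRITON_BACKEND_REPO_TAG=r21.05 \
        -DTRITON_CORE_REPO_TAG=r21.05 \
        -DTRITON_COMMON_REPO_TAG=r21.05 ..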
Configuration of OpenVINO for a model is done through the Parameters section of the model's 'config.pbtxt' file. The parameters and their descriptions are as follows.
- CPU_EXTENSION_PATH: Required for CPU custom layers. Absolute path to a shared library with the kernel implementations.
- CPU_THREADS_NUM: Number of threads to use for inference on the CPU. Should be a non-negative number.
- ENFORCE_BF16: Enforce execution of floating point operations in bfloat16 precision on platforms with native bfloat16 support. Possible values are YES or NO.
- CPU_BIND_THREAD: Enable threads->cores (YES, default) or threads->(NUMA)nodes (NUMA) CPU thread pinning for CPU-involved inference, or disable it completely (NO).
- CPU_THROUGHPUT_STREAMS: Number of streams to use for inference on the CPU. The default value is determined automatically for a device. Please note that although the automatic selection usually provides reasonable performance, it may still be non-optimal in some cases, especially for very small networks. Also, using nstreams > 1 is an inherently throughput-oriented option, while for best-latency estimations the number of streams should be set to 1.
- SKIP_OV_DYNAMIC_BATCHSIZE: The topology of some models does not support OpenVINO dynamic batch sizes. Set the value of this parameter to YES in order to skip dynamic batch sizes in the backend.
- ENABLE_BATCH_PADDING: By default an error will be generated if the backend receives a request with a batch size less than the max_batch_size specified in the configuration. This error can be avoided at a cost of performance by setting the ENABLE_BATCH_PADDING parameter to YES.
- RESHAPE_IO_LAYERS: By setting this parameter to YES, the IO layers are reshaped to the dimensions provided in the model configuration. By default, the dimensions in the model are used.
The section of the model config file specifying these parameters will look like:
.
.
.
parameters: {
  key: "CPU_THROUGHPUT_STREAMS"
  value: {
    string_value:"auto"
  }
}
parameters: {
  key: "CPU_THREADS_NUM"
  value: {
    string_value:"5"
  }
}
.
.
.
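The boolean-style parameters follow the same pattern. For example, a model whose topology does not work with OpenVINO dynamic batch sizes, but which should still accept requests smaller than max_batch_size, might (as a sketch) combine:
parameters: {
  key: "SKIP_OV_DYNAMIC_BATCHSIZE"
  value: {
    string_value:"YES"
  }
}
parameters: {
  key: "ENABLE_BATCH_PADDING"
  value: {
    string_value:"YES"
  }
}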
Known issues:
- Not all models support dynamic batch sizes.
- As of now, the OpenVINO backend does not support variable shaped tensors. However, dynamic batch sizes in the model are supported. See the SKIP_OV_DYNAMIC_BATCHSIZE and ENABLE_BATCH_PADDING parameters for more details.
- OpenVINO does not support CPU execution for FP16.