OnnxRuntime has support for trt_build_heuristics_enable with TensorRT optimization
We observed that some inference requests take an extremely long time when the user traffic changes. Without TensorRT optimization, we set the default onnxruntime backend with { key: "cudnn_conv_algo_search" value: { string_value: "1" } } to enable heuristic search. However, when we move to TensorRT this setting is ignored. ORT provides an alternative setting for TRT, trt_build_heuristics_enable (https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#configurations), that we would like to try with Triton, but it is not supported in the Triton model config.
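For reference, a minimal sketch of the two configurations involved. The TensorRT accelerator parameters shown (precision_mode, max_workspace_size_bytes) are ones the onnxruntime backend documents; trt_build_heuristics_enable is the key it does not accept:

# config.pbtxt with the plain CUDA EP: heuristic algo search works here
parameters { key: "cudnn_conv_algo_search" value: { string_value: "1" } }

# config.pbtxt with the TensorRT accelerator: cudnn_conv_algo_search is
# ignored, and there is no accepted key for the TRT build heuristic
optimization {
  execution_accelerators {
    gpu_execution_accelerator : [ {
      name : "tensorrt"
      parameters { key: "precision_mode" value: "FP16" }
      parameters { key: "max_workspace_size_bytes" value: "1073741824" }
      # what we would like to set, but Triton rejects:
      # parameters { key: "trt_build_heuristics_enable" value: "true" }
    } ]
  }
}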
I would recommend enabling the timing cache instead. That will drastically accelerate engine builds. An engine cache will further help by not rebuilding the engine each time the same model is requested.
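A sketch of what that could look like in config.pbtxt, assuming a Triton/ORT version that passes these TensorRT EP options through (trt_engine_cache_enable and trt_engine_cache_path are documented for the onnxruntime backend; whether trt_timing_cache_enable is forwarded depends on the backend version, so verify against your build):

optimization {
  execution_accelerators {
    gpu_execution_accelerator : [ {
      name : "tensorrt"
      # cache serialized engines so the same model is not rebuilt on reload
      parameters { key: "trt_engine_cache_enable" value: "true" }
      parameters { key: "trt_engine_cache_path" value: "/tmp/trt_cache" }
      # reuse layer timing data across builds to cut engine build time
      parameters { key: "trt_timing_cache_enable" value: "true" }
    } ]
  }
}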
when the user traffic changes
What exactly do you mean by that? Dynamic shapes or different models?