-
Hi @haimat, if you are using an NVIDIA GPU, you may not even need to export the model and use
Hope this makes sense. If not, feel free to shoot some other questions :)
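For context, here is a minimal sketch of what GPU inference without an export step could look like, assuming an Anomalib 1.x-style API; the model class, paths, and keyword arguments are placeholders to adapt to your own setup:

```python
# Minimal sketch: predicting on the GPU straight from the Lightning
# checkpoint, so no Torch/ONNX/OpenVINO export step is needed.
# Assumes an Anomalib 1.x-style API; model class, paths, and keyword
# arguments are placeholders.
from anomalib.data import PredictDataset
from anomalib.engine import Engine
from anomalib.models import Patchcore

model = Patchcore()
engine = Engine(accelerator="gpu", devices=1)

# Images to score (placeholder path).
dataset = PredictDataset(path="datasets/my_images")

# Runs inference on the NVIDIA GPU using the trained checkpoint directly.
predictions = engine.predict(
    model=model,
    dataset=dataset,
    ckpt_path="results/patchcore/latest/weights/lightning/model.ckpt",
)
```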
-
I have just started out with Anomalib, thanks for the great work on this, guys. Our main task is to run fast inference on NVIDIA GPUs. I have seen there are three export options for trained models - OpenVINO, ONNX, and Torch. In the context of inference times on the GPU, which of these three performs best?
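For what it's worth, one way to answer this empirically is to time each exported model on your own hardware. Below is a minimal sketch for the ONNX case using ONNX Runtime's CUDA backend; it assumes onnxruntime-gpu is installed and the model has already been exported, and the path and input shape are placeholders:

```python
# Rough timing sketch to compare inference backends on your own GPU.
# Assumes onnxruntime-gpu is installed and the model is already
# exported to ONNX; the path and input shape are placeholders.
import time

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "results/model.onnx",
    providers=["CUDAExecutionProvider"],
)
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 256, 256).astype(np.float32)

# Warm-up runs so CUDA initialization is not counted in the timing.
for _ in range(10):
    session.run(None, {input_name: x})

n = 100
start = time.perf_counter()
for _ in range(n):
    session.run(None, {input_name: x})
print(f"ONNX Runtime (CUDA): {(time.perf_counter() - start) / n * 1000:.2f} ms/image")
```

The same warm-up-then-time loop can be reused for the Torch and OpenVINO backends to get a like-for-like comparison.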