Installation fails due to TensorRT 8.6.1 compatibility issues #2
Comments
Can you share your GPU, driver version, and CUDA version? In my experience, the model is not very sensitive to the versions of its dependencies, as long as tensorrt can be imported normally. So you can also try installing the corresponding libraries to match your actual setup, and then re-run the TensorRT model conversion script (script/cvt_onnx_to_trt.py).
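To collect the details the maintainer is asking for, something like the following works; each probe degrades gracefully if the tool is missing on the machine (a sketch, not part of the repo):

```shell
# GPU model and driver version (requires the NVIDIA driver to be installed)
command -v nvidia-smi >/dev/null \
  && nvidia-smi --query-gpu=name,driver_version --format=csv \
  || echo "nvidia-smi not found"
# CUDA toolkit version
command -v nvcc >/dev/null && nvcc --version || echo "nvcc not found"
# Confirm tensorrt imports and report its version
python -c "import tensorrt; print(tensorrt.__version__)" 2>/dev/null \
  || echo "tensorrt not importable"
```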
@nguyenphuvinhtoan The likely cause of the environment problem is that the pytorch version in environment.yaml is incompatible with your system. You can install the libraries below on top of any CUDA 12 build of pytorch and see whether they install normally. Also, check whether you have manually configured custom pip and conda package sources. If you still cannot find an 8.x version of TensorRT, try installing another version, such as 9.x.
Same error, tensorrt==8.6.1 not found.
9.x is not available either, so only 10.x can be installed, which gives the error above.
Please try installing from the NVIDIA source.
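Concretely, "the NVIDIA source" means adding NVIDIA's own package index to pip. A minimal sketch, using the pinned version from this thread (the `-cu11` suffix targets CUDA 11; a `-cu12` variant would be the assumption for CUDA 12 builds):

```shell
# Install TensorRT 8.6.1 libs from NVIDIA's package index rather than PyPI.
pip install --no-cache-dir \
  --extra-index-url https://pypi.nvidia.com \
  tensorrt-cu11-libs==8.6.1 \
  || echo "install failed: check network access and the CUDA version suffix"
```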
Successfully Set Up the Environment with Tesla T4

I've successfully set up my environment to work with a Tesla T4. Below are the steps I followed:
1. First, do a clean install of cuda-toolkit-11.8 from source.
2. Second, install cuDNN 8.9.0.
3. Third, install TensorRT 8.6.1.6 from source: https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-861/install-guide/index.html
4. Finally, install tensorrt-libs 8.6.1 from https://pypi.nvidia.com/ (you can use that index to find the best match for your CUDA version): `pip install --no-cache-dir --extra-index-url https://pypi.nvidia.com tensorrt-cu11-libs==8.6.1`

Requirements File

Below is the requirements.txt file that works well with Tesla T4: requirements.txt

If you do not have a GPU with Ampere architecture, remember to convert ONNX to TensorRT following the ❗Note section from the author.
@nguyenphuvinhtoan Great, thanks for sharing your solution.
@nitinmukesh try using the docker image nvcr.io/nvidia/tensorrt:24.01-py3 and pip install the requirements inside it.
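A sketch of what that container workflow could look like; the `/workspace` mount point and the `requirements.txt` path are illustrative assumptions, not something specified in this thread:

```shell
# Run the suggested NGC TensorRT container with GPU access, mounting the
# current repo checkout so its requirements can be installed inside.
command -v docker >/dev/null \
  && docker run --gpus all -it --rm \
       -v "$PWD":/workspace \
       nvcr.io/nvidia/tensorrt:24.01-py3 \
       bash -c "cd /workspace && pip install -r requirements.txt && bash" \
  || echo "docker not available on this machine"
```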
Did anyone get this error when importing tensorrt?

```
>>> import tensorrt
ImportError: libcudnn.so.8: cannot open shared object file: No such file or directory
```
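That ImportError means the cuDNN 8 shared library is not on the dynamic loader's path. One possible fix is to install cuDNN via pip and point `LD_LIBRARY_PATH` at it; the `nvidia-cudnn-cu11` package name assumes a CUDA 11 setup, so treat this as a sketch rather than the thread's confirmed resolution:

```shell
# Install cuDNN 8 as a pip package (use nvidia-cudnn-cu12 on CUDA 12).
pip install nvidia-cudnn-cu11 \
  || echo "pip install failed: check network access / CUDA version"
# Locate the pip-installed cuDNN libs and expose them to the loader.
CUDNN_DIR=$(python -c "import os, nvidia.cudnn; print(os.path.join(os.path.dirname(nvidia.cudnn.__file__), 'lib'))" 2>/dev/null)
[ -n "$CUDNN_DIR" ] && export LD_LIBRARY_PATH="$CUDNN_DIR:$LD_LIBRARY_PATH" || true
```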
@WeizhenEricFang |
Description
When trying to create the conda environment using environment.yaml, the installation fails due to TensorRT dependency issues. Specifically, the following errors occur:
- Unable to find a compatible tensorrt-libs==8.6.1
- Python version compatibility warnings for various dependencies
Error Log
```
ERROR: Could not find a version that satisfies the requirement tensorrt-libs==8.6.1
(from versions: 9.0.0.post11.dev1, 9.0.0.post12.dev1, 9.0.1.post11.dev4, 9.0.1.post12.dev4, 9.1.0.post11.dev4, etc.)
ERROR: No matching distribution found for tensorrt-libs==8.6.1
```
Environment Details
```
conda env create -f environment.yaml
```
Questions
Additional Context
The installation process successfully downloads and prepares most dependencies but fails specifically at the TensorRT installation step. This appears to be due to version availability in the current package repositories.
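To confirm which versions each repository actually serves from your network, pip's (experimental) `index versions` subcommand can be used; if 8.6.1 only appears via the NVIDIA index, the `--extra-index-url` flag is required (a diagnostic sketch, needs network access):

```shell
# Versions visible on the default index (PyPI)
pip index versions tensorrt-libs 2>/dev/null \
  || echo "tensorrt-libs not found on the default index"
# Versions visible once NVIDIA's index is added
pip index versions tensorrt-libs --extra-index-url https://pypi.nvidia.com 2>/dev/null \
  || echo "lookup failed (network access?)"
```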