ONNX and Torch inference outputs are not the same #108
My code is almost the same as yours. After converting it to ONNX, did you encounter this error during ONNX inference?
CC @baudm. Not breaking out of the AR decoding loop fixes the issue (only required if you want to export):

This can certainly be solved in a nicer way, but that should fix the problem for now.
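For reference, a minimal sketch of the change being described, since the original snippet was not captured. PARSeq's autoregressive decoding loop contains a data-dependent early exit, which tracing-based export cannot represent; the names below (`tgt_in`, `self.eos_id`, `testing`) follow the upstream code, but treat the exact lines as an assumption rather than a verbatim patch:

```python
# Inside the AR decoding loop of PARSeq's forward() (sketch, not a verbatim patch).
for i in range(1, num_steps):
    tgt_out = self.decode(tgt_in[:, :i], memory, tgt_mask[:i, :i])
    p_i = self.head(tgt_out[:, -1])   # logits for the next position
    logits.append(p_i)
    tgt_in[:, i] = p_i.argmax(-1)     # greedy next-token selection

    # The original code stops once every sequence has emitted <eos>:
    #     if testing and (tgt_in == self.eos_id).any(dim=-1).all():
    #         break
    # That break is data-dependent, so torch.onnx.export traces only the
    # iterations taken on the example input. Removing it (always running
    # all num_steps iterations) makes the traced graph input-independent.
```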
How did you convert the model to ONNX? Doesn't it raise the error `Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.`?
Lower the version of torch; at the same time, you need to update parseq/model.py line 117, changing the tensor from bool to float.
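A sketch of what that change amounts to (the exact line differs between versions, so the shapes here are illustrative): replacing the boolean attention mask with an equivalent additive float mask, which the ONNX export path in older torch versions handles more reliably.

```python
import torch

seq_len = 26  # illustrative; the real value comes from the model config

# Boolean causal mask (True = masked), as used in the newer code path:
bool_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

# Equivalent additive float mask (-inf = masked, 0 = visible), which the
# ONNX exporter in older torch versions accepts without complaint:
float_mask = torch.zeros(seq_len, seq_len)
float_mask.masked_fill_(bool_mask, float('-inf'))
```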
Hi,
I ran the script below, and the outputs from the two models are not the same.
The outputs and the torch/onnxruntime versions:
I have no idea why it happened.
I also tried this, as others suggested, but it still failed to produce the same output.
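Since the script itself was not captured in the thread, here is a minimal sketch of the kind of comparison presumably being run. The hub entry point, input size, and tolerance are assumptions for illustration:

```python
import numpy as np
import onnxruntime as ort
import torch

# Load the PyTorch model (hub entry point assumed for illustration).
model = torch.hub.load('baudm/parseq', 'parseq', pretrained=True).eval()
dummy = torch.rand(1, 3, 32, 128)  # assumed PARSeq input size

# Export to ONNX, then run both backends on the same input.
torch.onnx.export(model, dummy, 'parseq.onnx', input_names=['image'],
                  output_names=['logits'], opset_version=14)

with torch.no_grad():
    torch_out = model(dummy).numpy()

sess = ort.InferenceSession('parseq.onnx')
onnx_out = sess.run(None, {'image': dummy.numpy()})[0]

# Compare element-wise; a small tolerance accounts for numeric drift.
print('max abs diff:', np.abs(torch_out - onnx_out).max())
print('allclose:', np.allclose(torch_out, onnx_out, atol=1e-4))
```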