
[WIP] LibTorch upgrade #1486

Draft: wants to merge 37 commits into main

Conversation

alinpahontu2912 (Member)

Upgrade LibTorch to 2.7.1 and CUDA 12.8

phizch commented Jul 9, 2025

Nice timing! I forked this repo 5 days ago to do the same upgrade. I managed to get it working, and all tests passed, but I couldn't figure out how to pack the libtorch packages. I've been trying to work that out for the last few days, but luckily it seems I can soon just update the official TorchSharp packages 😅.

Anyway, I noticed a couple of things I did that haven't been done in this PR. I'm not sure they were actually necessary, but here they are, just in case:

  1. The Linux libtorch archive (libtorch-cxx11-abi-shared-with-deps-2.7.1%2Bcu128.zip) contains a couple of .so files that aren't included in libtorch-cuda-12.8.proj. I left a comment there.
  2. I had to modify THSLinearAlgebra.cpp to get it to compile, even with the torch::linalg:: to torch::linalg_ changes. I had to pass c10::string_view instead of the raw char* p value in THSLinalg_cond_str and THSLinalg_norm_str (see the sketch below). Since yours compiled without this change, it's probably a dev-environment difference.
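
In case it's useful, here's a minimal standalone sketch of what I mean. It isn't the actual TorchSharp wrapper: the real THSLinalg_cond_str uses the project's own Tensor handle type and the CATCH_TENSOR error macro, which I've left out here, so treat the names and signature as illustrative only.

```cpp
#include <torch/torch.h>

// Illustrative stand-in for THSLinalg_cond_str: the point is only the
// const char* -> c10::string_view conversion before calling into LibTorch.
torch::Tensor cond_str(const torch::Tensor& tensor, const char* p)
{
    // The LibTorch 2.7 overload takes the string norm order ("fro", "nuc", ...)
    // as a c10::string_view, so wrap the raw pointer before forwarding it.
    return torch::linalg_cond(tensor, c10::string_view(p));
}
```

The same wrapping applies to the string-order overload called from THSLinalg_norm_str.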

alinpahontu2912 (Member, Author) commented Jul 10, 2025

Hey @phizch, sorry for the delayed reply; I did not receive notifications for this PR for some reason and have only just seen your comments. I am currently facing some issues with the internal CI when packaging the new NuGet files, but I am actively working on it. Thanks for the support! If you want to try packing them on your machine, you can inspect the azure-pipelines.yaml file and check the following steps: Windows_Native_Build_For_Packages, Build_libtorch_cuda_win_Packages and Build_TorchSharp_And_libtorch_cpu_Packages; you might be able to replicate them locally.
